Sample records for rigorous error analysis

  1. Rigorous Science: a How-To Guide.

    PubMed

    Casadevall, Arturo; Fang, Ferric C

    2016-11-08

    Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word "rigor" is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education. Copyright © 2016 Casadevall and Fang.

  2. Rigorous Science: a How-To Guide

    PubMed Central

    Fang, Ferric C.

    2016-01-01

    ABSTRACT Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word “rigor” is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education. PMID:27834205

  3. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

    NASA Technical Reports Server (NTRS)

    Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

    2017-01-01

    This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
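
    PRECiSA itself produces machine-checkable certificates in PVS; the abstract does not give its algorithm. As a rough, hedged illustration of the general idea of propagating symbolic round-off bounds, the Python sketch below tracks a real-value range and an accumulated error bound through a small expression using the standard model fl(x op y) = (x op y)(1 + δ), |δ| ≤ u. The `Node` class and the example expression are hypothetical, not part of the tool.

    ```python
    # First-order round-off estimator in the spirit of (but not equal to) the
    # static analysis in PRECiSA.  Assumes IEEE-754 binary64, round-to-nearest,
    # and no overflow/underflow; class and variable names are hypothetical.
    U = 2.0 ** -53  # unit roundoff for double precision


    class Node:
        """Real-valued range [lo, hi] plus a bound on accumulated rounding error."""
        def __init__(self, lo, hi, err=0.0):
            self.lo, self.hi, self.err = lo, hi, err

        def _mag(self):
            return max(abs(self.lo), abs(self.hi))

        def __add__(self, other):
            lo, hi = self.lo + other.lo, self.hi + other.hi
            prop = self.err + other.err                  # inherited error
            mag = max(abs(lo), abs(hi)) + prop           # bound on computed value
            return Node(lo, hi, prop + U * mag)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            lo, hi = min(p), max(p)
            prop = (self._mag() * other.err + other._mag() * self.err
                    + self.err * other.err)
            mag = max(abs(lo), abs(hi)) + prop
            return Node(lo, hi, prop + U * mag)


    # Hypothetical expression x*y + x for x in [1, 2], y in [-3, 3].
    x, y = Node(1.0, 2.0), Node(-3.0, 3.0)
    z = x * y + x
    print(f"value range ~ [{z.lo}, {z.hi}], round-off bound ~ {z.err:.3e}")
    ```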

  4. Space-Borne Laser Altimeter Geolocation Error Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Fang, J.; Ai, Y.

    2018-05-01

    This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot are analysed by simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and to satisfy the accuracy of the laser control point, a design index for each error source is put forward.
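
    The paper derives its error propagation equation analytically for the full geolocation model; that derivation is not reproduced in the abstract. As a hedged illustration of the generic first-order propagation step, the sketch below pushes assumed 1-sigma errors in platform position, pointing angles and range through a deliberately simplified flat-Earth geolocation function via a numerical Jacobian (cov = J Σ Jᵀ). The model and the error magnitudes are illustrative only.

    ```python
    import numpy as np

    def geolocate(p):
        """Toy laser-spot geolocation: platform position plus a ray of length rho
        whose direction is set by roll/pitch pointing angles (flat-Earth model,
        not the rigorous model of the paper)."""
        x, y, z, roll, pitch, rho = p
        d = np.array([np.sin(pitch),
                      -np.sin(roll) * np.cos(pitch),
                      -np.cos(roll) * np.cos(pitch)])   # unit pointing vector
        return np.array([x, y, z]) + rho * d            # spot position (m)

    def jacobian(f, p, eps=1e-6):
        """Central-difference Jacobian of f at p."""
        p = np.asarray(p, float)
        J = np.zeros((3, p.size))
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = eps * max(1.0, abs(p[i]))
            J[:, i] = (f(p + dp) - f(p - dp)) / (2 * dp[i])
        return J

    # Nominal state: ~600 km altitude, near-nadir pointing, 600 km range.
    p0 = np.array([0.0, 0.0, 600e3, 0.001, 0.001, 600e3])
    # Assumed 1-sigma errors: position (m), attitude/pointing (rad), range (m).
    sigmas = np.array([0.05, 0.05, 0.05, 1.5e-5, 1.5e-5, 0.10])
    Sigma = np.diag(sigmas ** 2)

    J = jacobian(geolocate, p0)
    cov_spot = J @ Sigma @ J.T          # propagated geolocation covariance
    print("1-sigma spot error (m), x/y/z:", np.sqrt(np.diag(cov_spot)))
    ```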

  5. Near Identifiability of Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1987-01-01

    Concepts regarding approximate mathematical models are treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence, revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.

  6. Error analysis in inverse scatterometry. I. Modeling.

    PubMed

    Al-Assaad, Rayan M; Byrne, Dale M

    2007-02-01

    Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.

  7. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws.

    PubMed

    Xiao, Xiao; White, Ethan P; Hooten, Mevin B; Durham, Susan L

    2011-10-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain.
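
    The detailed guidelines and code mentioned above accompany the paper itself; as an independent, minimal reconstruction of the core Monte Carlo comparison, the sketch below simulates a power law y = a·x^b under either multiplicative lognormal or additive normal error and compares exponent recovery by log-log linear regression (LR) versus nonlinear least squares (NLR). All parameter values are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    a_true, b_true, n, reps = 2.0, 0.75, 100, 200

    def simulate(error):
        x = rng.uniform(1.0, 100.0, n)
        mu = a_true * x ** b_true
        if error == "multiplicative":            # lognormal, heteroscedastic
            y = mu * rng.lognormal(0.0, 0.3, n)
        else:                                    # additive, homoscedastic normal
            y = np.clip(mu + rng.normal(0.0, 5.0, n), 1e-9, None)  # keep logs defined
        return x, y

    def fit_lr(x, y):                            # linear regression on log-log data
        b, loga = np.polyfit(np.log(x), np.log(y), 1)
        return np.exp(loga), b

    def fit_nlr(x, y):                           # nonlinear regression on raw data
        popt, _ = curve_fit(lambda x, a, b: a * x ** b, x, y, p0=(1.0, 1.0))
        return popt

    for error in ("multiplicative", "additive"):
        e_lr, e_nlr = [], []
        for _ in range(reps):
            x, y = simulate(error)
            e_lr.append(fit_lr(x, y)[1] - b_true)
            e_nlr.append(fit_nlr(x, y)[1] - b_true)
        print("%-14s exponent RMSE   LR: %.4f   NLR: %.4f"
              % (error, np.sqrt(np.mean(np.square(e_lr))),
                 np.sqrt(np.mean(np.square(e_nlr)))))
    ```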

  8. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
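
    The paper's treatment covers multivariate calibrations, correlated errors and standards of comparable uncertainty; none of that is reproduced here. The sketch below only illustrates the basic replication idea the abstract emphasizes: fit a hypothetical linear calibration to several replicated runs, take the scatter of the fitted coefficients as the precision uncertainty at 95 percent confidence, and combine it by root-sum-square with an assumed bias uncertainty from the calibration standard.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical replicated calibration: applied load (N) vs. sensor output (mV),
    # repeated on six independent occasions.
    loads = np.linspace(0.0, 100.0, 11)
    runs = np.array([2.0 + 0.05 * loads + rng.normal(0.0, 0.03, loads.size)
                     for _ in range(6)])

    # Fit a straight-line calibration to each replicate; np.polyfit returns
    # [slope, intercept].
    fits = np.array([np.polyfit(loads, r, 1) for r in runs])
    mean_fit = fits.mean(axis=0)
    sem = fits.std(axis=0, ddof=1) / np.sqrt(fits.shape[0])

    # 95% precision uncertainty of the coefficients from replication (Student t).
    t95 = stats.t.ppf(0.975, df=fits.shape[0] - 1)
    precision_95 = t95 * sem

    # Assumed 95% bias uncertainty of the calibration standard (hypothetical values).
    bias_95 = np.array([1e-4, 5e-3])        # slope, intercept

    combined_95 = np.sqrt(precision_95 ** 2 + bias_95 ** 2)   # root-sum-square
    print("slope     = %.5f +/- %.5f (95%%)" % (mean_fit[0], combined_95[0]))
    print("intercept = %.3f +/- %.3f (95%%)" % (mean_fit[1], combined_95[1]))
    ```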

  9. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE PAGES

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...

    2017-02-15

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4).

  10. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    PubMed Central

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter

    2017-01-01

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466

  11. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4).

  12. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons

    PubMed Central

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2012-01-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379
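
    The interpolation estimates themselves are analytic; as a numerical companion, the sketch below implements Floater's mean value coordinates for a point strictly inside a convex polygon and checks two of the properties such analyses rely on, partition of unity and linear reproduction. The function name and the test polygon are just for illustration.

    ```python
    import numpy as np

    def mean_value_coordinates(vertices, x):
        """Floater's mean value coordinates of a point x strictly inside a convex
        polygon whose vertices are listed in counter-clockwise order."""
        v = np.asarray(vertices, float) - np.asarray(x, float)
        r = np.linalg.norm(v, axis=1)
        n = len(v)
        alpha = np.empty(n)                       # angle at x between v_i and v_{i+1}
        for i in range(n):
            j = (i + 1) % n
            cross = v[i, 0] * v[j, 1] - v[i, 1] * v[j, 0]
            dot = v[i, 0] * v[j, 0] + v[i, 1] * v[j, 1]
            alpha[i] = np.arctan2(cross, dot)
        t = np.tan(alpha / 2.0)
        w = (t + np.roll(t, 1)) / r               # (tan(a_{i-1}/2) + tan(a_i/2)) / r_i
        return w / w.sum()

    square = np.array([(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)])
    x = (0.6, 0.4)
    lam = mean_value_coordinates(square, x)
    print("coordinates:", lam, " sum:", lam.sum())      # partition of unity
    print("reproduced point:", lam @ square)            # linear reproduction -> x
    ```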

  13. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.

  14. Human error mitigation initiative (HEMI) : summary report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and are cumbersome to characterize as thorough. An alternative and proposed method begins with leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Future recommended steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  15. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws

    USGS Publications Warehouse

    Xiao, X.; White, E.P.; Hooten, M.B.; Durham, S.L.

    2011-01-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain. © 2011 by the Ecological Society of America.

  16. Performance Analysis of Local Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, Xin T.

    2018-03-01

    Ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for linear systems and shows that the filter error can be dominated by the ensemble covariance, as long as (1) the sample size exceeds the logarithm of the state dimension and a constant that depends only on the local radius; (2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter will remain accurate over long times even if the initialization is not. The analysis also reveals an intrinsic inconsistency caused by the localization technique, and a stable localized structure is necessary to control this inconsistency. While this structure is usually taken for granted for the operation of LEnKF, it can also be rigorously proved for linear systems with sparse local observations and weak local interactions. These theoretical results are also validated by numerical implementation of LEnKF on a simple stochastic turbulence model in two dynamical regimes.
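
    The paper is theoretical; as a hedged, minimal illustration of the mechanism it analyzes, the sketch below performs one EnKF analysis step on a linear toy problem with covariance localization applied as a Schur product with a distance-based taper (a Gaussian taper here, rather than the Gaspari-Cohn function commonly used operationally). All dimensions and noise levels are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    d, m, L = 40, 20, 5                      # state dim, ensemble size, loc. radius

    # Truth and a forecast ensemble drawn around a biased first guess (toy setup).
    truth = np.sin(2 * np.pi * np.arange(d) / d)
    ens = truth[:, None] + 0.5 + 0.3 * rng.standard_normal((d, m))

    # Observe every other component with independent noise of std 0.1.
    H = np.eye(d)[::2]
    R = 0.1 ** 2 * np.eye(H.shape[0])
    y = H @ truth + 0.1 * rng.standard_normal(H.shape[0])

    # Sample forecast covariance and a distance-based localization taper
    # (Gaussian here; operational LEnKFs often use the Gaspari-Cohn function).
    A = ens - ens.mean(axis=1, keepdims=True)
    Pf = A @ A.T / (m - 1)
    idx = np.arange(d)
    dist = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(dist, d - dist)                    # periodic distance
    rho = np.exp(-0.5 * (dist / L) ** 2)
    Pf_loc = rho * Pf                                    # Schur-product localization

    # Kalman gain from the localized covariance; perturbed-observation update.
    K = Pf_loc @ H.T @ np.linalg.inv(H @ Pf_loc @ H.T + R)
    perturbed = y[:, None] + 0.1 * rng.standard_normal((H.shape[0], m))
    ens_a = ens + K @ (perturbed - H @ ens)

    print("forecast RMSE: %.3f" % np.sqrt(np.mean((ens.mean(1) - truth) ** 2)))
    print("analysis RMSE: %.3f" % np.sqrt(np.mean((ens_a.mean(1) - truth) ** 2)))
    ```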

  17. Accurate force field for molybdenum by machine learning large materials data

    NASA Astrophysics Data System (ADS)

    Chen, Chi; Deng, Zhi; Tran, Richard; Tang, Hanmei; Chu, Iek-Heng; Ong, Shyue Ping

    2017-09-01

    In this work, we present a highly accurate spectral neighbor analysis potential (SNAP) model for molybdenum (Mo) developed through the rigorous application of machine learning techniques on large materials data sets. Despite Mo's importance as a structural metal, existing force fields for Mo based on the embedded atom and modified embedded atom methods do not provide satisfactory accuracy on many properties. We will show that by fitting to the energies, forces, and stress tensors of a large density functional theory (DFT)-computed dataset on a diverse set of Mo structures, a Mo SNAP model can be developed that achieves close to DFT accuracy in the prediction of a broad range of properties, including elastic constants, melting point, phonon spectra, surface energies, grain boundary energies, etc. We will outline a systematic model development process, which includes a rigorous approach to structural selection based on principal component analysis, as well as a differential evolution algorithm for optimizing the hyperparameters in the model fitting so that both the model error and the property prediction error can be simultaneously lowered. We expect that this newly developed Mo SNAP model will find broad applications in large and long-time scale simulations.

  18. Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Smoller, Joel

    2010-09-01

    A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.

  19. Rigorous derivation of the effective model describing a non-isothermal fluid flow in a vertical pipe filled with porous medium

    NASA Astrophysics Data System (ADS)

    Beneš, Michal; Pažanin, Igor

    2018-03-01

    This paper reports an analytical investigation of non-isothermal fluid flow in a thin (or long) vertical pipe filled with a porous medium via asymptotic analysis. We assume that the fluid inside the pipe is cooled (or heated) by the surrounding medium and that the flow is governed by the prescribed pressure drop between the pipe's ends. Starting from the dimensionless Darcy-Brinkman-Boussinesq system, we formally derive a macroscopic model describing the effective flow at small Brinkman-Darcy number. The asymptotic approximation is given by the explicit formulae for the velocity, pressure and temperature clearly acknowledging the effects of the cooling (heating) and porous structure. The theoretical error analysis is carried out to indicate the order of accuracy and to provide a rigorous justification of the effective model.

  20. Mass-balance measurements in Alaska and suggestions for simplified observation programs

    USGS Publications Warehouse

    Trabant, D.C.; March, R.S.

    1999-01-01

    US Geological Survey glacier fieldwork in Alaska includes repetitious measurements, corrections for leaning or bending stakes, an ability to reliably measure seasonal snow as deep as 10 m, absolute identification of summer surfaces in the accumulation area, and annual evaluation of internal accumulation, internal ablation, and glacier-thickness changes. Prescribed field measurement and note-taking techniques help eliminate field errors and expedite the interpretative process. In the office, field notes are transferred to computerized spread-sheets for analysis, release on the World Wide Web, and archival storage. The spreadsheets have error traps to help eliminate note-taking and transcription errors. Rigorous error analysis ends when mass-balance measurements are extrapolated and integrated with area to determine glacier and basin mass balances. Unassessable errors in the glacier and basin mass-balance data reduce the value of the data set for correlations with climate change indices. The minimum glacier mass-balance program has at least three measurement sites on a glacier and the measurements must include the seasonal components of mass balance as well as the annual balance.

  1. [The surgeon and deontology].

    PubMed

    Sucila, Antanas

    2002-01-01

    The aim of this study is to recall surgeons' deontological principles and errors. The article demonstrates some specific deontological errors committed by surgeons towards patients and colleagues, and points out the painful sequelae of these errors as well. CONCLUSION. The surgeon should rigorously take deontological principles into account in routine daily practice.

  2. Spatiotemporal Filtering Using Principal Component Analysis and Karhunen-Loeve Expansion Approaches for Regional GPS Network Analysis

    NASA Technical Reports Server (NTRS)

    Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.

    2006-01-01

    Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses. Therefore they provide a mathematical framework to perform spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for all east, north, and vertical components, which implies a very long wavelength source for the common mode errors, compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
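
    The abstract describes the full PCA/KLE machinery applied to SCIGN data; the sketch below reproduces only the PCA step on synthetic residual time series: the leading mode (temporal principal component times its spatial response) is taken as the common mode error and subtracted from every station. Station count, noise levels and the injected common signal are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_days, n_sta = 365, 12

    # Synthetic daily residual series (mm): a network-wide common mode signal
    # plus station-specific white noise (all numbers invented).
    common = 3.0 * np.sin(2 * np.pi * np.arange(n_days) / 90) \
             + rng.normal(0.0, 1.0, n_days)
    X = common[:, None] * rng.uniform(0.8, 1.2, n_sta) \
        + rng.normal(0.0, 1.5, (n_days, n_sta))

    # PCA via SVD of the demeaned days-by-stations matrix.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    print("variance explained by mode 1: %.1f%%" % (100 * S[0] ** 2 / np.sum(S ** 2)))

    # The leading mode (temporal PC times its spatial response) is treated as the
    # common mode error and removed from every station series.
    mode1 = np.outer(U[:, 0] * S[0], Vt[0])
    filtered = Xc - mode1
    print("RMS before filtering: %.2f mm" % np.sqrt(np.mean(Xc ** 2)))
    print("RMS after  filtering: %.2f mm" % np.sqrt(np.mean(filtered ** 2)))
    ```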

  3. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).

  4. Identification of dynamic systems, theory and formulation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1985-01-01

    The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation theory; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
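
    As a hedged, minimal companion to the output-error idea highlighted in the book, the sketch below estimates the two parameters of a first-order discrete dynamic system by simulating the model output from candidate parameters and minimizing the sum of squared output errors, which coincides with maximum likelihood when the measurement noise is white Gaussian with known variance. The system and noise level are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)

    # Truth: first-order system x[k+1] = a*x[k] + b*u[k], measured with white noise.
    a_true, b_true, n = 0.85, 0.5, 200
    u_in = rng.standard_normal(n)

    def simulate(a, b):
        x = np.zeros(n)
        for k in range(n - 1):
            x[k + 1] = a * x[k] + b * u_in[k]
        return x

    y = simulate(a_true, b_true) + 0.05 * rng.standard_normal(n)

    # Output-error cost: run the model open loop from the candidate parameters and
    # compare its output with the measurements (no state feedback from the data).
    def cost(theta):
        return 0.5 * np.sum((y - simulate(*theta)) ** 2)

    res = minimize(cost, x0=[0.5, 0.1], method="Nelder-Mead")
    print("estimated (a, b):", res.x, "  truth:", (a_true, b_true))
    ```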

  5. Analysis of host response to bacterial infection using error model based gene expression microarray experiments

    PubMed Central

    Stekel, Dov J.; Sarti, Donatella; Trevino, Victor; Zhang, Lihong; Salmon, Mike; Buckley, Chris D.; Stevens, Mark; Pallen, Mark J.; Penn, Charles; Falciani, Francesco

    2005-01-01

    A key step in the analysis of microarray data is the selection of genes that are differentially expressed. Ideally, such experiments should be properly replicated in order to infer both technical and biological variability, and the data should be subjected to rigorous hypothesis tests to identify the differentially expressed genes. However, in microarray experiments involving the analysis of very large numbers of biological samples, replication is not always practical. Therefore, there is a need for a method to select differentially expressed genes in a rational way from insufficiently replicated data. In this paper, we describe a simple method that uses bootstrapping to generate an error model from a replicated pilot study that can be used to identify differentially expressed genes in subsequent large-scale studies on the same platform, but in which there may be no replicated arrays. The method builds a stratified error model that includes array-to-array variability, feature-to-feature variability and the dependence of error on signal intensity. We apply this model to the characterization of the host response in a model of bacterial infection of human intestinal epithelial cells. We demonstrate the effectiveness of error model based microarray experiments and propose this as a general strategy for a microarray-based screening of large collections of biological samples. PMID:15800204
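
    The authors' pipeline is specific to their arrays and infection model; the sketch below only illustrates the generic strategy on synthetic data: build an intensity-stratified, bootstrap-estimated error model from a small replicated pilot set, then score log-ratios from an unreplicated follow-up comparison against that model. All distributions and cut-offs are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_genes, n_reps = 2000, 4

    # Synthetic pilot study: replicated log2 intensities whose noise grows as the
    # signal gets weaker (a common microarray behaviour).
    mean_expr = rng.uniform(4.0, 14.0, n_genes)
    noise_sd = 0.1 + 1.5 / mean_expr
    pilot = mean_expr[:, None] + rng.normal(0.0, noise_sd[:, None], (n_genes, n_reps))

    # Stratify genes by intensity and bootstrap a standard deviation per stratum.
    bins = np.quantile(mean_expr, np.linspace(0, 1, 11))
    stratum = np.clip(np.digitize(pilot.mean(1), bins) - 1, 0, 9)
    boot_sd = np.zeros(10)
    for s in range(10):
        devs = (pilot[stratum == s]
                - pilot[stratum == s].mean(1, keepdims=True)).ravel()
        boot_sd[s] = np.mean([rng.choice(devs, devs.size).std(ddof=1)
                              for _ in range(200)])

    # Score an unreplicated follow-up comparison (one array per condition) against
    # the pilot-derived error model: z = log-ratio / expected SD of a difference.
    follow_a = mean_expr + rng.normal(0.0, noise_sd)
    follow_b = mean_expr + rng.normal(0.0, noise_sd)
    follow_b[:50] += 2.0                                  # 50 truly changed genes
    z = (follow_b - follow_a) / (np.sqrt(2.0) * boot_sd[stratum])

    print("genes with |z| > 3:", int(np.sum(np.abs(z) > 3)),
          " of which truly changed:", int(np.sum(np.abs(z[:50]) > 3)))
    ```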

  6. The Accuracy of Aggregate Student Growth Percentiles as Indicators of Educator Performance

    ERIC Educational Resources Information Center

    Castellano, Katherine E.; McCaffrey, Daniel F.

    2017-01-01

    Mean or median student growth percentiles (MGPs) are a popular measure of educator performance, but they lack rigorous evaluation. This study investigates the error in MGP due to test score measurement error (ME). Using analytic derivations, we find that errors in the commonly used MGP are correlated with average prior latent achievement: Teachers…

  7. A mass-energy preserving Galerkin FEM for the coupled nonlinear fractional Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Zhang, Guoyu; Huang, Chengming; Li, Meng

    2018-04-01

    We consider the numerical simulation of the coupled nonlinear space fractional Schrödinger equations. Based on the Galerkin finite element method in space and the Crank-Nicolson (CN) difference method in time, a fully discrete scheme is constructed. Firstly, we focus on a rigorous analysis of conservation laws for the discrete system. The definitions of discrete mass and energy here correspond with the original ones in physics. Then, we prove that the fully discrete system is uniquely solvable. Moreover, we consider the unconditional convergence properties (that is, the error estimates hold without any mesh ratio restriction). We derive L2-norm error estimates for the nonlinear equations and L∞-norm error estimates for the linear equations. Finally, some numerical experiments are included showing results in agreement with the theoretical predictions.
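
    The paper's scheme is a Galerkin FEM for coupled nonlinear space-fractional equations, which is well beyond a few lines. As a much smaller illustration of the conservation-checking idea, the sketch below applies the Crank-Nicolson scheme to the ordinary 1D linear Schrödinger equation with a finite-difference Laplacian and verifies that the discrete mass (squared L2 norm) is preserved to round-off, since the Cayley propagator is unitary. It is not the authors' method.

    ```python
    import numpy as np

    # Crank-Nicolson for i u_t = -u_xx on [0, L] with homogeneous Dirichlet BCs.
    L_dom, N, dt, steps = 40.0, 400, 0.01, 200
    x = np.linspace(0.0, L_dom, N + 2)[1:-1]            # interior nodes
    h = x[1] - x[0]

    # Discrete Hamiltonian H = -d^2/dx^2 (tridiagonal finite differences).
    H = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h ** 2

    # Gaussian wave packet with unit momentum.
    u = np.exp(-0.5 * ((x - 15.0) / 2.0) ** 2) * np.exp(1j * x)

    # (I + i dt/2 H) u^{n+1} = (I - i dt/2 H) u^n : a unitary (Cayley) step.
    prop = np.linalg.solve(np.eye(N) + 0.5j * dt * H,
                           np.eye(N) - 0.5j * dt * H)

    mass0 = h * np.sum(np.abs(u) ** 2)                  # discrete mass at t = 0
    for _ in range(steps):
        u = prop @ u
    mass = h * np.sum(np.abs(u) ** 2)
    print("relative change in discrete mass: %.2e" % abs(mass / mass0 - 1.0))
    ```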

  8. Navigating towards improved surgical safety using aviation-based strategies.

    PubMed

    Kao, Lillian S; Thomas, Eric J

    2008-04-01

    Safety practices in the aviation industry are being increasingly adapted to healthcare in an effort to reduce medical errors and patient harm. However, caution should be applied in embracing these practices because of limited experience in surgical disciplines, lack of rigorous research linking these practices to outcome, and fundamental differences between the two industries. Surgeons should have an in-depth understanding of the principles and data supporting aviation-based safety strategies before routinely adopting them. This paper serves as a review of strategies adapted to improve surgical safety, including the following: implementation of crew resource management in training operative teams; incorporation of simulation in training of technical and nontechnical skills; and analysis of contributory factors to errors using surveys, behavioral marker systems, human factors analysis, and incident reporting. Avenues and challenges for future research are also discussed.

  9. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.

  10. Simultaneous overlay and CD measurement for double patterning: scatterometry and RCWA approach

    NASA Astrophysics Data System (ADS)

    Li, Jie; Liu, Zhuan; Rabello, Silvio; Dasari, Prasad; Kritsun, Oleg; Volkman, Catherine; Park, Jungchul; Singh, Lovejeet

    2009-03-01

    As optical lithography advances to the 32 nm technology node and beyond, double patterning technology (DPT) has emerged as an attractive solution to circumvent the fundamental optical limitations. DPT poses unique demands on critical dimension (CD) uniformity and overlay control, making the tolerance decrease much faster than the rate at which critical dimension shrinks. This, in turn, makes metrology even more challenging. In the past, multi-pad diffraction-based overlay (DBO) using an empirical approach has been shown to be an effective way to measure overlay error associated with double patterning [1]. In this method, registration errors for double patterning were extracted from specially designed diffraction targets (three or four pads for each direction); CD variation is assumed negligible within each group of adjacent pads and not addressed in the measurement. In another paper, encouraging results were reported with a first attempt at simultaneously extracting overlay and CD parameters using scatterometry [2]. In this work, we apply scatterometry with a rigorous coupled wave analysis (RCWA) approach to characterize two double-patterning processes: litho-etch-litho-etch (LELE) and litho-freeze-litho-etch (LFLE). The advantage of performing rigorous modeling is to reduce the number of pads within each measurement target, thus reducing space requirements and improving throughput, while simultaneously extracting CD and overlay information. This method measures overlay errors and CDs by fitting the optical signals with spectra calculated from a model of the targets. Good correlation is obtained between the results from this method and that of several reference techniques, including empirical multi-pad DBO, CD-SEM, and IBO. We also perform total measurement uncertainty (TMU) analysis to evaluate the overall performance. We demonstrate that scatterometry provides a promising solution to meet the challenging overlay metrology requirement in DPT.

  11. Testing Intelligently Includes Double-Checking Wechsler IQ Scores

    ERIC Educational Resources Information Center

    Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas

    2011-01-01

    The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…

  12. An analysis of estimation of pulmonary blood flow by the single-breath method

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.

    1986-01-01

    The single-breath method represents a simple noninvasive technique for the assessment of capillary blood flow across the lung. However, this method has not gained widespread acceptance, because its accuracy is still being questioned. A rigorous procedure is described for estimating pulmonary blood flow (PBF) using data obtained with the aid of the single-breath method. Attention is given to the minimization of data-processing errors in the presence of measurement errors and to questions regarding a correction for possible loss of CO2 in the lung tissue. It is pointed out that the estimations are based on the exact solution of the underlying differential equations which describe the dynamics of gas exchange in the lung. The reported study demonstrates the feasibility of obtaining highly reliable estimates of PBF from expiratory data in the presence of random measurement errors.

  13. Terminal iterative learning control based station stop control of a train

    NASA Astrophysics Data System (ADS)

    Hou, Zhongsheng; Wang, Yi; Yin, Chenkun; Tang, Tao

    2011-07-01

    The terminal iterative learning control (TILC) method is introduced for the first time into the field of train station stop control and three TILC-based algorithms are proposed in this study. The TILC-based train station stop control approach utilises the terminal stop position error in the previous braking process to update the current control profile. The initial braking position, or the braking force, or their combination is chosen as the control input, and a corresponding learning law is developed. Rigorous analysis guarantees that the terminal stop position error of each algorithm converges to a small region related to the initial offset of the braking position. The validity of the proposed algorithms is verified by illustrative numerical examples.
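
    The paper's algorithms and convergence analysis are not reproduced here; the sketch below is a hedged toy version of the terminal ILC idea with the initial braking position as the control input: after each trial, the braking position is updated with the previous trial's terminal stop-position error, u_{k+1} = u_k + γ·e_k. The point-mass braking model and the trial-to-trial deceleration uncertainty are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    v0, target = 20.0, 500.0              # approach speed (m/s), desired stop (m)
    a_nom = 0.8                           # nominal deceleration (m/s^2)

    def braking_trial(u_brake):
        """Constant braking from position u_brake; the realised deceleration varies
        from trial to trial (a toy stand-in for changing adhesion and load)."""
        a = a_nom * rng.uniform(0.95, 1.05)
        return u_brake + v0 ** 2 / (2.0 * a)      # terminal stop position

    # Terminal ILC on the braking position: u_{k+1} = u_k + gamma * e_k, using only
    # the previous trial's terminal stop-position error e_k.
    u_brake, gamma = 100.0, 0.8
    for k in range(12):
        e = target - braking_trial(u_brake)       # terminal error of trial k
        print("trial %2d: brake at %7.2f m, stop error %+7.2f m" % (k, u_brake, e))
        u_brake += gamma * e
    ```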

  14. [Relations between health information systems and patient safety].

    PubMed

    Nøhr, Christian

    2012-11-05

    Health information systems have the potential to reduce medical errors, and indeed many studies have shown a significant reduction. However, if the systems are not designed and implemented properly, there is evidence that suggest that new types of errors will arise--i.e., technology-induced errors. Health information systems will need to undergo a more rigorous evaluation. Usability evaluation and simulation test with humans in the loop can help to detect and prevent technology-induced errors before they are deployed in real health-care settings.

  15. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment

    NASA Astrophysics Data System (ADS)

    James, Mike R.; Robson, Stuart; d'Oleire-Oltmanns, Sebastian; Niethammer, Uwe

    2016-04-01

    Structure-from-motion (SfM) algorithms are greatly facilitating the production of detailed topographic models based on images collected by unmanned aerial vehicles (UAVs). However, SfM-based software does not generally provide the rigorous photogrammetric analysis required to fully understand survey quality. Consequently, error related to problems in control point data or the distribution of control points can remain undiscovered. Even if these errors are not large in magnitude, they can be systematic, and thus have strong implications for the use of products such as digital elevation models (DEMs) and orthophotos. Here, we develop a Monte Carlo approach to (1) improve the accuracy of products when SfM-based processing is used and (2) reduce the associated field effort by identifying suitable lower density deployments of ground control points. The method highlights over-parameterisation during camera self-calibration and provides enhanced insight into control point performance when rigorous error metrics are not available. Processing was implemented using commonly-used SfM-based software (Agisoft PhotoScan), which we augment with semi-automated and automated GCP image measurement. We apply the Monte Carlo method to two contrasting case studies - an erosion gully survey (Taurodont, Morocco) carried out with a fixed-wing UAV, and an active landslide survey (Super-Sauze, France), acquired using a manually controlled quadcopter. The results highlight the differences in the control requirements for the two sites, and we explore the implications for future surveys. We illustrate DEM sensitivity to critical processing parameters and show how the use of appropriate parameter values increases DEM repeatability and reduces the spatial variability of error due to processing artefacts.

  16. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

    The well established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near standard Henon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfit with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.
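
    Taylor Models pair a high-order polynomial with a rigorously bounded remainder; the sketch below shows only the much simpler building block of verified computation, an interval type whose endpoints are pushed one ulp outward after every rounded operation so the exact result stays enclosed (Python 3.9+ for math.nextafter). It is a toy, not COSY INFINITY's high-precision implementation.

    ```python
    import math

    class Interval:
        """Closed interval [lo, hi] with outward rounding: round-to-nearest puts
        the exact result within half an ulp of the computed endpoint, so widening
        each endpoint by one ulp keeps a guaranteed enclosure."""
        def __init__(self, lo, hi=None):
            self.lo = float(lo)
            self.hi = float(lo if hi is None else hi)

        @staticmethod
        def _widen(lo, hi):
            return Interval(math.nextafter(lo, -math.inf),
                            math.nextafter(hi, math.inf))

        def __add__(self, other):
            return Interval._widen(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            return Interval._widen(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval._widen(min(p), max(p))

        def __repr__(self):
            return f"[{self.lo!r}, {self.hi!r}]"

    # Enclose (x + y) * (x - y) where x encloses the unrepresentable decimal 0.1
    # and y = 0.25 is exact; the result must contain 0.01 - 0.0625 = -0.0525.
    x = Interval(math.nextafter(0.1, 0.0), math.nextafter(0.1, 1.0))
    y = Interval(0.25)
    print("enclosure of x^2 - y^2:", (x + y) * (x - y))
    ```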

  17. Systems engineering analysis of five 'as-manufactured' SXI telescopes

    NASA Astrophysics Data System (ADS)

    Harvey, James E.; Atanassova, Martina; Krywonos, Andrey

    2005-09-01

    Four flight models and a spare of the Solar X-ray Imager (SXI) telescope mirrors have been fabricated. The first of these is scheduled to be launched on the NOAA GOES-N satellite on July 29, 2005. A complete systems engineering analysis of the "as-manufactured" telescope mirrors has been performed that includes diffraction effects, residual design errors (aberrations), surface scatter effects, and all of the miscellaneous errors in the mirror manufacturer's error budget tree. Finally, a rigorous analysis of mosaic detector effects has been included. SXI is a staring telescope providing full solar disc images at X-ray wavelengths. For wide-field applications such as this, a field-weighted-average measure of resolution has been modeled. Our performance predictions have allowed us to use metrology data to model the "as-manufactured" performance of the X-ray telescopes and to adjust the final focal plane location to optimize the number of spatial resolution elements in a given operational field-of-view (OFOV) for either the aerial image or the detected image. The resulting performance predictions from five separate mirrors allow us to evaluate and quantify the optical fabrication process for producing these very challenging grazing incidence X-ray optics.

  18. Predictability Experiments With the Navy Operational Global Atmospheric Prediction System

    NASA Astrophysics Data System (ADS)

    Reynolds, C. A.; Gelaro, R.; Rosmond, T. E.

    2003-12-01

    There are several areas of research in numerical weather prediction and atmospheric predictability, such as targeted observations and ensemble perturbation generation, where it is desirable to combine information about the uncertainty of the initial state with information about potential rapid perturbation growth. Singular vectors (SVs) provide a framework to accomplish this task in a mathematically rigorous and computationally feasible manner. In this study, SVs are calculated using the tangent and adjoint models of the Navy Operational Global Atmospheric Prediction System (NOGAPS). The analysis error variance information produced by the NRL Atmospheric Variational Data Assimilation System is used as the initial-time SV norm. These VAR SVs are compared to SVs for which total energy is both the initial and final time norms (TE SVs). The incorporation of analysis error variance information has a significant impact on the structure and location of the SVs. This in turn has a significant impact on targeted observing applications. The utility and implications of such experiments in assessing the analysis error variance estimates will be explored. Computing support has been provided by the Department of Defense High Performance Computing Center at the Naval Oceanographic Office Major Shared Resource Center at Stennis, Mississippi.
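
    As a hedged linear-algebra illustration of the norm choice discussed above (not NOGAPS or NAVDAS), the sketch below computes the leading singular vector of a toy propagator M when the initial-time norm is defined by an analysis-error covariance A: maximizing ||Mx||² subject to xᵀA⁻¹x = 1 reduces, via the substitution x = A^{1/2}y, to an ordinary symmetric eigenproblem, and the result is compared with the total-energy singular vector.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 6

    # Toy linear propagator M (tangent-linear model) and an analysis-error
    # covariance A (symmetric positive definite); both are random stand-ins.
    M = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    A = B @ B.T + 0.1 * np.eye(n)

    # Symmetric square root of A from its eigendecomposition.
    w, V = np.linalg.eigh(A)
    A_half = V @ np.diag(np.sqrt(w)) @ V.T

    # Leading "analysis-error norm" SV: maximize |Mx|^2 subject to x^T A^{-1} x = 1.
    # With x = A^{1/2} y this is the leading eigenvector of A^{1/2} M^T M A^{1/2}.
    vals, vecs = np.linalg.eigh(A_half @ M.T @ M @ A_half)
    x_var = A_half @ vecs[:, -1]
    growth_var = vals[-1]

    # Total-energy SV: identity initial norm, i.e. the ordinary leading right SV.
    x_te = np.linalg.svd(M)[2][0]
    cosang = abs(x_var @ x_te) / (np.linalg.norm(x_var) * np.linalg.norm(x_te))

    print("VAR-SV amplification :", growth_var)
    print("TE-SV  amplification :", np.linalg.norm(M @ x_te) ** 2)
    print("angle between the two SVs: %.1f deg" % np.degrees(np.arccos(cosang)))
    ```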

  19. Response function of modulated grid Faraday cup plasma instruments

    NASA Technical Reports Server (NTRS)

    Barnett, A.; Olbert, S.

    1986-01-01

    Modulated grid Faraday cup plasma analyzers are a very useful tool for making in situ measurements of space plasmas. One of their great attributes is that their simplicity permits their angular response function to be calculated theoretically. An expression is derived for this response function by computing the trajectories of the charged particles inside the cup. The Voyager plasma science experiment is used as a specific example. Two approximations to the rigorous response function useful for data analysis are discussed. Multisensor analysis of solar wind data indicates that the formulas represent the true cup response function for all angles of incidence with a maximum error of only a few percent.

  20. Iterative Monte Carlo analysis of spin-dependent parton distributions

    DOE PAGES

    Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...

    2016-04-05

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  1. International Space Station Remote Sensing Pointing Analysis

    NASA Technical Reports Server (NTRS)

    Jacobson, Craig A.

    2007-01-01

    This paper analyzes the geometric and disturbance aspects of utilizing the International Space Station for remote sensing of earth targets. The proposed instrument (in prototype development) is SHORE (Station High-Performance Ocean Research Experiment), a multiband optical spectrometer with 15 m pixel resolution. The analysis investigates the contribution of the error effects to the quality of data collected by the instrument. This analysis supported the preliminary studies to determine feasibility of utilizing the International Space Station as an observing platform for a SHORE type of instrument. Rigorous analyses will be performed if a SHORE flight program is initiated. The analysis begins with the discussion of the coordinate systems involved and then conversion from the target coordinate system to the instrument coordinate system. Next the geometry of remote observations from the Space Station is investigated including the effects of the instrument location in Space Station and the effects of the line of sight to the target. The disturbance and error environment on Space Station is discussed covering factors contributing to drift and jitter, accuracy of pointing data and target and instrument accuracies.
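
    A flight-quality pointing analysis involves many frames and disturbance models; the sketch below illustrates only the first geometric step mentioned above, under stated assumptions: convert a hypothetical target's geodetic coordinates to Earth-centered Earth-fixed (WGS-84) coordinates, place the ISS at an assumed geodetic position and altitude, and form the line-of-sight vector and off-nadir angle. All coordinates are invented.

    ```python
    import numpy as np

    # WGS-84 ellipsoid constants.
    A_E = 6378137.0                    # semi-major axis (m)
    F = 1.0 / 298.257223563
    E2 = F * (2.0 - F)                 # first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
        """Geodetic latitude, longitude (deg) and height (m) to ECEF (m)."""
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        N = A_E / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)   # prime-vertical radius
        return np.array([(N + h) * np.cos(lat) * np.cos(lon),
                         (N + h) * np.cos(lat) * np.sin(lon),
                         (N * (1.0 - E2) + h) * np.sin(lat)])

    # Hypothetical ocean target and an assumed ISS position (~420 km altitude).
    target = geodetic_to_ecef(25.0, -80.0, 0.0)
    station = geodetic_to_ecef(25.5, -81.0, 420e3)

    los = target - station                               # line-of-sight vector
    los_unit = los / np.linalg.norm(los)
    nadir = -station / np.linalg.norm(station)           # approximate (geocentric) nadir
    off_nadir = np.degrees(np.arccos(np.clip(los_unit @ nadir, -1.0, 1.0)))
    print("slant range: %.1f km, off-nadir angle: %.2f deg"
          % (np.linalg.norm(los) / 1e3, off_nadir))
    ```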

  2. Economic measurement of medical errors using a hospital claims database.

    PubMed

    David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S

    2013-01-01

    The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  3. Comparison of photogrammetric and astrometric data reduction results for the wild BC-4 camera

    NASA Technical Reports Server (NTRS)

    Hornbarger, D. H.; Mueller, I. I.

    1971-01-01

    The results of astrometric and photogrammetric plate reduction techniques for a short focal length camera are compared. Several astrometric models are tested on entire and limited plate areas to analyze their ability to remove systematic errors from interpolated satellite directions using a rigorous photogrammetric reduction as a standard. Residual plots are employed to graphically illustrate the analysis. Conclusions are made as to what conditions will permit the astrometric reduction to achieve comparable accuracies to those of photogrammetric reduction when applied for short focal length ballistic cameras.

  4. Implementation errors in the GingerALE Software: Description and recommendations.

    PubMed

    Eickhoff, Simon B; Laird, Angela R; Fox, P Mickle; Lancaster, Jack L; Fox, Peter T

    2017-01-01

    Neuroscience imaging is a burgeoning, highly sophisticated field the growth of which has been fostered by grant-funded, freely distributed software libraries that perform voxel-wise analyses in anatomically standardized three-dimensional space on multi-subject, whole-brain, primary datasets. Despite the ongoing advances made using these non-commercial computational tools, the replicability of individual studies is an acknowledged limitation. Coordinate-based meta-analysis offers a practical solution to this limitation and, consequently, plays an important role in filtering and consolidating the enormous corpus of functional and structural neuroimaging results reported in the peer-reviewed literature. In both primary data and meta-analytic neuroimaging analyses, correction for multiple comparisons is a complex but critical step for ensuring statistical rigor. Reports of errors in multiple-comparison corrections in primary-data analyses have recently appeared. Here, we report two such errors in GingerALE, a widely used, US National Institutes of Health (NIH)-funded, freely distributed software package for coordinate-based meta-analysis. These errors have given rise to published reports with more liberal statistical inferences than were specified by the authors. The intent of this technical report is threefold. First, we inform authors who used GingerALE of these errors so that they can take appropriate actions including re-analyses and corrective publications. Second, we seek to exemplify and promote an open approach to error management. Third, we discuss the implications of these and similar errors in a scientific environment dependent on third-party software. Hum Brain Mapp 38:7-11, 2017. © 2016 Wiley Periodicals, Inc.

  5. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  6. Gendered innovations: a new approach for nursing science.

    PubMed

    Sims, Stacy T; Stefanick, Marcia L; Kronenberg, Fredi; Sachedina, Nishma A; Schiebinger, Londa

    2010-10-01

    Considerable sex and gender bias has been recognized within the field of medicine. Investigators have used sex and gender analysis to reevaluate studies and outcomes and generate new perspectives and new questions regarding differential diagnoses and treatments of men and women. Sex and gender analysis acts as an experimental control to provide critical scientific rigor; researchers who ignore it risk ignoring a possible source of error in past, current, and future science. In this article, the authors introduce some tools of sex and gender analysis and illustrate the concept of gendered innovations by demonstrating through examples how this type of analysis has profoundly enhanced human knowledge in health and disease. The authors also provide recommendations for incorporating the concepts of sex and gender analysis into nursing education and research.

  7. Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.

    PubMed

    Li, Jielin; Hassebrook, Laurence G; Guan, Chun

    2003-01-01

    Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency at which the depth-error variance caused by intensity noise is at its minimum.
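
    As a rough illustration of the two-frequency scheme described above, the following sketch (Python/NumPy) simulates phase-shifted sinusoidal patterns with additive temporal noise, recovers the wrapped low- and high-frequency phases with the standard N-step estimator, and unwraps the high-frequency phase using the low-frequency estimate. The frequencies, noise level, and depth range are illustrative assumptions, not the parameters or the optimization derived in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def wrapped_phase(true_phase, n_shifts=4, sigma=0.01):
          """Wrapped phase from N phase-shifted sinusoidal intensities with additive noise."""
          shifts = 2 * np.pi * np.arange(n_shifts) / n_shifts
          noise = rng.normal(0, sigma, (true_phase.size, n_shifts))
          intensity = 0.5 + 0.5 * np.cos(true_phase[:, None] + shifts) + noise
          # standard N-step phase-shifting estimator
          return np.arctan2(-(intensity * np.sin(shifts)).sum(1),
                            (intensity * np.cos(shifts)).sum(1))

      depth = np.linspace(0.02, 0.98, 2000)      # normalized surface depth, away from the wrap boundary
      f_lo, f_hi = 1.0, 16.0                     # unit-frequency and high-frequency patterns (assumed)
      phi_lo = wrapped_phase(2 * np.pi * f_lo * depth)   # non-ambiguous but noisy
      phi_hi = wrapped_phase(2 * np.pi * f_hi * depth)   # precise but wrapped

      # unwrap the high-frequency phase using the low-frequency estimate
      phi_lo_u = np.mod(phi_lo, 2 * np.pi)               # non-ambiguous low-frequency phase
      phi_hi_w = np.mod(phi_hi, 2 * np.pi)               # wrapped high-frequency phase
      k = np.round((f_hi / f_lo * phi_lo_u - phi_hi_w) / (2 * np.pi))
      depth_hi = (phi_hi_w + 2 * np.pi * k) / (2 * np.pi * f_hi)

      rms = np.sqrt(np.mean((depth_hi - depth) ** 2))
      print(f"two-frequency RMS depth error: {rms:.2e}")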

  8. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  9. Coarse-grained stochastic processes and kinetic Monte Carlo simulators for the diffusion of interacting particles

    NASA Astrophysics Data System (ADS)

    Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2003-11-01

    We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes as approximations in larger length scales for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous time microscopic MC and CGMC simulations are compared under far from equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space leads also to coarse-graining in time by q^2, where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous time MC simulations that vary from q^3 for short potentials to q^4 for long potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.

  10. Fish-Eye Observing with Phased Array Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Wijnholds, S. J.

    The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field of view that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.

  11. A complete representation of uncertainties in layer-counted paleoclimatic archives

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2017-09-01

    Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records - such as ice cores, sediments, corals, or tree rings - as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. Our method is next applied to deriving the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5-52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.
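
    The following toy Monte Carlo (not the Bayesian method of the paper) illustrates why seemingly small layer-counting error probabilities accumulate into large absolute age uncertainties deep in a record. The per-layer probabilities of missed and spurious layers and the record length are invented for the sketch.

      import numpy as np

      rng = np.random.default_rng(1)

      # assumed per-layer probabilities of a spurious (doubly counted) or a missed layer
      p_spurious, p_miss = 0.01, 0.01
      n_layers = 5000          # counted layers below a fixed reference horizon
      n_sim = 2000             # Monte Carlo realizations of the true age

      # a counted layer represents 0 true years (spurious), 1 year, or 2 years (a missed
      # neighbour folded in); with equal error rates the count is unbiased but its spread grows
      increments = rng.choice([0, 1, 2], size=(n_sim, n_layers),
                              p=[p_spurious, 1 - p_spurious - p_miss, p_miss])
      true_age = increments.cumsum(axis=1)      # true age at each counted layer, per realization

      sigma = true_age[:, -1].std()
      print(f"counted age {n_layers} layers -> 1-sigma age uncertainty ~ {sigma:.0f} years")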

  12. Shear Recovery Accuracy in Weak-Lensing Analysis with the Elliptical Gauss-Laguerre Method

    NASA Astrophysics Data System (ADS)

    Nakajima, Reiko; Bernstein, Gary

    2007-04-01

    We implement the elliptical Gauss-Laguerre (EGL) galaxy-shape measurement method proposed by Bernstein & Jarvis and quantify the shear recovery accuracy in weak-lensing analysis. This method uses a deconvolution fitting scheme to remove the effects of the point-spread function (PSF). The test simulates >10^7 noisy galaxy images convolved with anisotropic PSFs and attempts to recover an input shear. The tests are designed to be immune to statistical (random) distributions of shapes, selection biases, and crowding, in order to test more rigorously the effects of detection significance (signal-to-noise ratio [S/N]), PSF, and galaxy resolution. The systematic error in shear recovery is divided into two classes, calibration (multiplicative) and additive, with the latter arising from PSF anisotropy. At S/N > 50, the deconvolution method measures the galaxy shape and input shear to ~1% multiplicative accuracy and suppresses >99% of the PSF anisotropy. These systematic errors increase to ~4% for the worst conditions, with poorly resolved galaxies at S/N ≈ 20. The EGL weak-lensing analysis has the best demonstrated accuracy to date, sufficient for the next generation of weak-lensing surveys.
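
    A hedged sketch of the standard decomposition of shear recovery errors into multiplicative (calibration) and additive terms, g_meas = (1 + m) g_true + c, fitted by least squares on synthetic shears; the bias values and noise level are invented and unrelated to the paper's simulations.

      import numpy as np

      rng = np.random.default_rng(2)

      # synthetic input shears and noisy "recovered" shears
      g_true = rng.uniform(-0.05, 0.05, size=5000)
      m_true, c_true = 0.01, 2e-4                       # assumed calibration and additive biases
      g_meas = (1 + m_true) * g_true + c_true + rng.normal(0, 1e-3, g_true.size)

      # least-squares fit of the linear bias model g_meas = (1 + m) g_true + c
      A = np.column_stack([g_true, np.ones_like(g_true)])
      (slope, c_hat), *_ = np.linalg.lstsq(A, g_meas, rcond=None)
      print(f"multiplicative bias m = {slope - 1:.4f}, additive bias c = {c_hat:.1e}")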

  13. Help prevent hospital errors

    MedlinePlus

  14. Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging

    NASA Astrophysics Data System (ADS)

    Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand

    2018-01-01

    The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very close angular separation (10^10 star-to-planet flux ratio at a few 0.1”). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as those studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. This analytical model can be applied to both static and dynamic modes, in either monochromatic or broadband light. It removes the need for end-to-end Monte-Carlo simulations that are otherwise needed to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing and aberrations.

  15. An Interlaboratory Comparison of Dosimetry for a Multi-institutional Radiobiological

    PubMed Central

    Seed, TM; Xiao, S; Manley, N; Nikolich-Zugich, J; Pugh, J; van den Brink, M; Hirabayashi, Y; Yasutomo, K; Iwama, A; Koyasu, S; Shterev, I; Sempowski, G; Macchiarini, F; Nakachi, K; Kunugi, KC; Hammer, CG; DeWerd, LA

    2016-01-01

    Purpose An interlaboratory comparison of radiation dosimetry was conducted to determine the accuracy of doses being used experimentally for animal exposures within a large multi-institutional research project. The background and approach to this effort are described and discussed in terms of basic findings, problems and solutions. Methods Dosimetry tests were carried out utilizing optically stimulated luminescence (OSL) dosimeters embedded midline into mouse carcasses and thermal luminescence dosimeters (TLD) embedded midline into acrylic phantoms. Results The effort demonstrated that the majority (4/7) of the laboratories was able to deliver sufficiently accurate exposures having maximum dosing errors of ≤ 5%. Comparable rates of ‘dosimetric compliance’ were noted between OSL- and TLD-based tests. Data analysis showed a highly linear relationship between ‘measured’ and ‘target’ doses, with errors falling largely between 0–20%. Outliers were most notable for OSL-based tests, while multiple tests by ‘non-compliant’ laboratories using orthovoltage x-rays contributed heavily to the wide variation in dosing errors. Conclusions For the dosimetrically non-compliant laboratories, the relatively high rates of dosing errors were problematic, potentially compromising the quality of ongoing radiobiological research. This dosimetry effort proved to be instructive in establishing rigorous reviews of basic dosimetry protocols ensuring that dosing errors were minimized. PMID:26857121

  16. An interlaboratory comparison of dosimetry for a multi-institutional radiobiological research project: Observations, problems, solutions and lessons learned.

    PubMed

    Seed, Thomas M; Xiao, Shiyun; Manley, Nancy; Nikolich-Zugich, Janko; Pugh, Jason; Van den Brink, Marcel; Hirabayashi, Yoko; Yasutomo, Koji; Iwama, Atsushi; Koyasu, Shigeo; Shterev, Ivo; Sempowski, Gregory; Macchiarini, Francesca; Nakachi, Kei; Kunugi, Keith C; Hammer, Clifford G; Dewerd, Lawrence A

    2016-01-01

    An interlaboratory comparison of radiation dosimetry was conducted to determine the accuracy of doses being used experimentally for animal exposures within a large multi-institutional research project. The background and approach to this effort are described and discussed in terms of basic findings, problems and solutions. Dosimetry tests were carried out utilizing optically stimulated luminescence (OSL) dosimeters embedded midline into mouse carcasses and thermal luminescence dosimeters (TLD) embedded midline into acrylic phantoms. The effort demonstrated that the majority (4/7) of the laboratories was able to deliver sufficiently accurate exposures having maximum dosing errors of ≤5%. Comparable rates of 'dosimetric compliance' were noted between OSL- and TLD-based tests. Data analysis showed a highly linear relationship between 'measured' and 'target' doses, with errors falling largely between 0 and 20%. Outliers were most notable for OSL-based tests, while multiple tests by 'non-compliant' laboratories using orthovoltage X-rays contributed heavily to the wide variation in dosing errors. For the dosimetrically non-compliant laboratories, the relatively high rates of dosing errors were problematic, potentially compromising the quality of ongoing radiobiological research. This dosimetry effort proved to be instructive in establishing rigorous reviews of basic dosimetry protocols ensuring that dosing errors were minimized.

  17. Aerial photography flight quality assessment with GPS/INS and DEM data

    NASA Astrophysics Data System (ADS)

    Zhao, Haitao; Zhang, Bing; Shang, Jiali; Liu, Jiangui; Li, Dong; Chen, Yanyan; Zuo, Zhengli; Chen, Zhengchao

    2018-01-01

    The flight altitude, ground coverage, photo overlap, and other acquisition specifications of an aerial photography flight mission directly affect the quality and accuracy of the subsequent mapping tasks. To ensure smooth post-flight data processing and fulfill the pre-defined mapping accuracy, flight quality assessments should be carried out in time. This paper presents a novel and rigorous approach for flight quality evaluation of frame cameras with GPS/INS data and DEM, using geometric calculation rather than image analysis as in the conventional methods. This new approach is based mainly on the collinearity equations, in which the accuracy of a set of flight quality indicators is derived through a rigorous error propagation model and validated with scenario data. Theoretical analysis and practical flight test of an aerial photography mission using an UltraCamXp camera showed that the calculated photo overlap is accurate enough for flight quality assessment of 5 cm ground sample distance image, using the SRTMGL3 DEM and the POSAV510 GPS/INS data. An even better overlap accuracy could be achieved for coarser-resolution aerial photography. With this new approach, the flight quality evaluation can be conducted on site right after landing, providing accurate and timely information for decision making.
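
    The sketch below illustrates the geometric idea of checking forward overlap from the exposure geometry alone (flying height above the terrain, focal length, sensor size, and exposure baseline) rather than from image matching. All numeric values are placeholders, not UltraCamXp specifications, and the flat-terrain footprint formula stands in for the full collinearity-based derivation of the paper.

      # Forward-overlap check from exposure geometry alone (flat-terrain approximation).
      # All numeric values below are illustrative placeholders, not UltraCamXp specs.

      def forward_overlap(flying_height_m, ground_height_m, focal_mm, sensor_along_mm, base_m):
          """Percent forward overlap between consecutive exposures over terrain of given height."""
          h = flying_height_m - ground_height_m            # height above ground, e.g. from the DEM
          footprint = sensor_along_mm / focal_mm * h       # along-track footprint on the ground (m)
          overlap = (footprint - base_m) / footprint
          return 100.0 * max(overlap, 0.0)

      # example: 100 mm lens, 68 mm along-track sensor, 3000 m flight over 500 m terrain,
      # 500 m between exposures
      print(f"forward overlap: {forward_overlap(3000.0, 500.0, 100.0, 68.0, 500.0):.1f}%")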

  18. Quadratic Zeeman effect in hydrogen Rydberg states: Rigorous bound-state error estimates in the weak-field regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falsaperla, P.; Fonte, G.

    1993-05-01

    Applying a method based on some results due to Kato [Proc. Phys. Soc. Jpn. 4, 334 (1949)], we show that series of Rydberg eigenvalues and Rydberg eigenfunctions of hydrogen in a uniform magnetic field can be calculated with a rigorous error estimate. The efficiency of the method decreases as the eigenvalue density increases and as γn^3 → 1, where γ is the magnetic-field strength in units of 2.35×10^9 G and n is the principal quantum number of the unperturbed hydrogenic manifold from which the diamagnetic Rydberg states evolve. Fixing γ at the laboratory value 2×10^-5 and confining our calculations to the region γn^3 < 1 (weak-field regime), we obtain extremely accurate results up to states corresponding to the n = 32 manifold.

  19. Errors induced by the neglect of polarization in radiance calculations for Rayleigh-scattering atmospheres

    NASA Technical Reports Server (NTRS)

    Mishchenko, M. I.; Lacis, A. A.; Travis, L. D.

    1994-01-01

    Although neglecting polarization and replacing the rigorous vector radiative transfer equation by its approximate scalar counterpart has no physical background, it is a widely used simplification when the incident light is unpolarized and only the intensity of the reflected light is to be computed. We employ accurate vector and scalar multiple-scattering calculations to perform a systematic study of the errors induced by the neglect of polarization in radiance calculations for a homogeneous, plane-parallel Rayleigh-scattering atmosphere (with and without depolarization) above a Lambertian surface. Specifically, we calculate percent errors in the reflected intensity for various directions of light incidence and reflection, optical thicknesses of the atmosphere, single-scattering albedos, depolarization factors, and surface albedos. The numerical data displayed can be used to decide whether or not the scalar approximation may be employed depending on the parameters of the problem. We show that the errors decrease with increasing depolarization factor and/or increasing surface albedo. For conservative or nearly conservative scattering and small surface albedos, the errors are maximum at optical thicknesses of about 1. The calculated errors may be too large for some practical applications, and, therefore, rigorous vector calculations should be employed whenever possible. However, if approximate scalar calculations are used, we recommend avoiding geometries involving phase angles equal to or close to 0 deg and 90 deg, where the errors are especially significant. We propose a theoretical explanation of the large vector/scalar differences in the case of Rayleigh scattering. According to this explanation, the differences are caused by the particular structure of the Rayleigh scattering matrix and come from lower-order (except first-order) light scattering paths involving right scattering angles and right-angle rotations of the scattering plane.

  20. Optimizing Hybrid Metrology: Rigorous Implementation of Bayesian and Combined Regression.

    PubMed

    Henn, Mark-Alexander; Silver, Richard M; Villarrubia, John S; Zhang, Nien Fan; Zhou, Hui; Barnes, Bryan M; Ming, Bin; Vladár, András E

    2015-01-01

    Hybrid metrology, e.g., the combination of several measurement techniques to determine critical dimensions, is an increasingly important approach to meet the needs of the semiconductor industry. A proper use of hybrid metrology may yield not only more reliable estimates for the quantitative characterization of 3-D structures but also a more realistic estimation of the corresponding uncertainties. Recent developments at the National Institute of Standards and Technology (NIST) feature the combination of optical critical dimension (OCD) measurements and scanning electron microscope (SEM) results. The hybrid methodology offers the potential to make measurements of essential 3-D attributes that may not be otherwise feasible. However, combining techniques gives rise to essential challenges in error analysis and comparing results from different instrument models, especially the effect of systematic and highly correlated errors in the measurement on the χ² function that is minimized. Both hypothetical examples and measurement data are used to illustrate solutions to these challenges.
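
    A minimal sketch of why the full error covariance matters when combining two techniques: a generalized least-squares (GLS) estimate of a single critical dimension from two hypothetical instruments, one of which carries a shared systematic error that makes its measurements fully correlated. The instruments, noise levels, and systematic term are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(3)

      # two hypothetical instruments measure the same critical dimension x_true (nm)
      x_true, n1, n2 = 50.0, 5, 5
      y1 = x_true + rng.normal(0, 0.5, n1)                 # instrument 1: independent noise
      offset = rng.normal(0, 1.0)                          # instrument 2: shared systematic error
      y2 = x_true + offset + rng.normal(0, 0.2, n2)
      y = np.concatenate([y1, y2])

      # full covariance: the systematic term correlates all instrument-2 measurements
      cov = np.zeros((n1 + n2, n1 + n2))
      cov[:n1, :n1] = 0.5**2 * np.eye(n1)
      cov[n1:, n1:] = 1.0**2 + 0.2**2 * np.eye(n2)         # the 1.0^2 term fills the off-diagonals too

      # generalized least squares: minimize the chi-square (y - H x)^T cov^{-1} (y - H x)
      H = np.ones(n1 + n2)
      W = np.linalg.inv(cov)
      info = H @ W @ H
      x_hat = (H @ W @ y) / info
      print(f"GLS estimate: {x_hat:.2f} +/- {info**-0.5:.2f} "
            f"(naive unweighted mean: {y.mean():.2f})")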

  1. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable due to larger variances in the counts than would be expected by the sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
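
    The sketch below implements the most common of the accounting methods mentioned: counting OCR errors as the unit-cost Levenshtein (edit) distance between reference and OCR text. Different weights, or special handling of suspect markers, would yield different counts, which is precisely the paper's point.

      def levenshtein(ref: str, ocr: str) -> int:
          """Unit-cost edit distance: insertions + deletions + substitutions."""
          prev = list(range(len(ocr) + 1))
          for i, r in enumerate(ref, 1):
              cur = [i]
              for j, o in enumerate(ocr, 1):
                  cur.append(min(prev[j] + 1,              # deletion
                                 cur[j - 1] + 1,           # insertion
                                 prev[j - 1] + (r != o)))  # substitution (free if equal)
              prev = cur
          return prev[-1]

      ref = "rigorous error analysis"
      ocr = "rigorons errer analysis"
      errors = levenshtein(ref, ocr)
      print(f"{errors} errors, accuracy = {1 - errors / len(ref):.3f}")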

  2. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    PubMed Central

    Besada, Juan A.

    2017-01-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157

  3. Human Factors Research in Anesthesia Patient Safety

    PubMed Central

    Weinger, Matthew B.; Slagle, Jason

    2002-01-01

    Patient safety has become a major public concern. Human factors research in other high-risk fields has demonstrated how rigorous study of factors that affect job performance can lead to improved outcome and reduced errors after evidence-based redesign of tasks or systems. These techniques have increasingly been applied to the anesthesia work environment. This paper describes data obtained recently using task analysis and workload assessment during actual patient care and the use of cognitive task analysis to study clinical decision making. A novel concept of “non-routine events” is introduced and pilot data are presented. The results support the assertion that human factors research can make important contributions to patient safety. Information technologies play a key role in these efforts.

  4. Human factors research in anesthesia patient safety.

    PubMed Central

    Weinger, M. B.; Slagle, J.

    2001-01-01

    Patient safety has become a major public concern. Human factors research in other high-risk fields has demonstrated how rigorous study of factors that affect job performance can lead to improved outcome and reduced errors after evidence-based redesign of tasks or systems. These techniques have increasingly been applied to the anesthesia work environment. This paper describes data obtained recently using task analysis and workload assessment during actual patient care and the use of cognitive task analysis to study clinical decision making. A novel concept of "non-routine events" is introduced and pilot data are presented. The results support the assertion that human factors research can make important contributions to patient safety. Information technologies play a key role in these efforts. PMID:11825287

  5. The response function of modulated grid Faraday cup plasma instruments

    NASA Technical Reports Server (NTRS)

    Barnett, A.; Olbert, S.

    1986-01-01

    Modulated grid Faraday cup plasma analyzers are a very useful tool for making in situ measurements of space plasmas. One of their great attributes is that their simplicity permits their angular response function to be calculated theoretically. An expression is derived for this response function by computing the trajectories of the charged particles inside the cup. The Voyager Plasma Science (PLS) experiment is used as a specific example. Two approximations to the rigorous response function useful for data analysis are discussed. The theoretical formulas were tested by multi-sensor analysis of solar wind data. The tests indicate that the formulas represent the true cup response function for all angles of incidence with a maximum error of only a few percent.

  6. Masked Visual Analysis: Minimizing Type I Error in Visually Guided Single-Case Design for Communication Disorders

    PubMed Central

    Hitchcock, Elaine R.; Ferron, John

    2017-01-01

    Purpose Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. Method This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Conclusions Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders. PMID:28595354

  7. Masked Visual Analysis: Minimizing Type I Error in Visually Guided Single-Case Design for Communication Disorders.

    PubMed

    Byun, Tara McAllister; Hitchcock, Elaine R; Ferron, John

    2017-06-10

    Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders.

  8. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C^0 estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C^0 errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example rigorous lower bounds for the topological entropy of the Henon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
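
    As a toy version of the Taylor-model idea (a polynomial approximation carried together with a verified remainder bound), the sketch below encloses sin on a short interval by a Taylor polynomial plus a Lagrange remainder interval. Real Taylor-model arithmetic propagates such enclosures through all operations with outward-rounded interval arithmetic; none of that machinery is reproduced here.

      import math

      # Enclose sin(x) on [0, h] by its degree-3 Taylor polynomial about 0 plus a
      # rigorous remainder interval from the Lagrange bound |R_4| <= h^4 / 4!
      # (since |sin^{(4)}| <= 1).  This is the C^0-enclosure idea in miniature.
      h = 0.1
      remainder = h**4 / math.factorial(4)

      def taylor_sin(x):
          return x - x**3 / 6.0

      # check the enclosure on a fine grid
      worst = max(abs(math.sin(x) - taylor_sin(x))
                  for x in [i * h / 1000 for i in range(1001)])
      print(f"remainder bound: {remainder:.2e}, worst observed error: {worst:.2e}")
      assert worst <= remainder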

  9. Effects of rigor status during high-pressure processing on the physical qualities of farm-raised abalone (Haliotis rufescens).

    PubMed

    Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I

    2015-01-01

    High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone was significantly (P < 0.05) less firm and chewy than pre-rigor processed irrespective of muscle type, processing time, or pressure. L values increased with pressure to 68.9 at 300 MPa for pre-rigor processed foot, 73.8 for post-rigor processed foot, 90.9 for pre-rigor processed adductor, and 89.0 for post-rigor processed adductor. Scanning electron microscopy images showed fraying of collagen fibers in processed adductor, but did not show pressure-induced compaction of the foot myofibrils. Post-rigor processed abalone meat was more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®

  10. Rigorous derivation of porous-media phase-field equations

    NASA Astrophysics Data System (ADS)

    Schmuck, Markus; Kalliadasis, Serafim

    2017-11-01

    The evolution of interfaces in Complex heterogeneous Multiphase Systems (CheMSs) plays a fundamental role in a wide range of scientific fields such as thermodynamic modelling of phase transitions, materials science, or as a computational tool for interfacial flow studies or material design. Here, we focus on phase-field equations in CheMSs such as porous media. To the best of our knowledge, we present the first rigorous derivation of error estimates for fourth order, upscaled, and nonlinear evolution equations. For CheMSs with heterogeneity ε, we obtain the convergence rate ε^(1/4), which governs the error between the solution of the new upscaled formulation and the solution of the microscopic phase-field problem. This error behaviour has recently been validated computationally. Due to the wide range of application of phase-field equations, we expect this upscaled formulation to allow for new modelling, analytic, and computational perspectives for interfacial transport and phase transformations in CheMSs. This work was supported by EPSRC, UK, through Grant Nos. EP/H034587/1, EP/L027186/1, EP/L025159/1, EP/L020564/1, EP/K008595/1, and EP/P011713/1 and from ERC via Advanced Grant No. 247031.

  11. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher order ionospheric errors such as the second and third order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free space path length, TEC difference at two GNSS frequencies, and third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected within millimeter level accuracy using the proposed correction formulas.
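
    For orientation, the sketch below evaluates the standard first-order ionospheric group delay, 40.3·TEC/f², at the GPS L1 and L2 frequencies and shows that the classical ionosphere-free combination removes it exactly; the second- and third-order terms and ray-bending effects addressed by the paper's correction formulas are deliberately not modelled here. The TEC value and geometric range are arbitrary.

      # First-order ionospheric group delay (metres): 40.3 * TEC / f^2, TEC in electrons/m^2.
      # The dual-frequency (ionosphere-free) combination removes this term exactly;
      # the higher-order terms discussed in the paper are NOT modelled in this sketch.
      f1, f2 = 1575.42e6, 1227.60e6        # GPS L1/L2 frequencies (Hz)
      tec = 50e16                          # 50 TECU, a moderately high daytime value (assumed)

      d1 = 40.3 * tec / f1**2
      d2 = 40.3 * tec / f2**2
      print(f"first-order delay: L1 {d1:.2f} m, L2 {d2:.2f} m")

      # geometric range plus delay gives the pseudoranges; the IF combination recovers the range
      rho = 2.2e7                          # true geometric range (m), arbitrary
      p1, p2 = rho + d1, rho + d2
      p_if = (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)
      print(f"ionosphere-free residual: {p_if - rho:.2e} m  (first order removed)")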

  12. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied with different spatial resolutions including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ ground water and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to smaller errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) for all the cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS estimates for improved data assimilation results.
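
    A generic illustration of the localization idea behind local analysis: the Schur (element-wise) product of a small-ensemble sample covariance with a distance-dependent taper suppresses spurious long-range correlations. A Gaussian taper on a 1-D grid is used here for brevity instead of the commonly used Gaspari-Cohn function; the grid, ensemble size, and length scales are assumptions unrelated to the W3RA/GRACE setup.

      import numpy as np

      rng = np.random.default_rng(4)

      # 1-D grid of "TWS" states with a smooth true covariance, and a small ensemble
      n, n_ens = 60, 20
      x_grid = np.arange(n, dtype=float)
      dist = np.abs(x_grid[:, None] - x_grid[None, :])
      true_cov = np.exp(-dist / 10.0)

      L = np.linalg.cholesky(true_cov + 1e-10 * np.eye(n))
      ens = L @ rng.normal(size=(n, n_ens))              # ensemble drawn from true_cov
      sample_cov = np.cov(ens)

      # Schur-product localization with a Gaussian taper (length scale is an assumption)
      taper = np.exp(-(dist / 15.0) ** 2)
      loc_cov = taper * sample_cov

      def rel_err(c):
          return np.linalg.norm(c - true_cov) / np.linalg.norm(true_cov)

      print(f"raw sample covariance error: {rel_err(sample_cov):.2f}, "
            f"localized: {rel_err(loc_cov):.2f}")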

  13. A priori stability results for PFC

    NASA Astrophysics Data System (ADS)

    Rossiter, J. A.

    2017-02-01

    Despite its popularity in industry and obvious efficacy, predictive functional control has few rigorous a priori stability results in the literature. In many cases, common sense and intuition with some trial and error are the main design tools. This paper seeks to tackle that gap by providing some analysis of the control law and showing what forms of stability assurances can be given and how these depend on the user choices of coincidence horizon and desired closed-loop pole. The conditions are separated into necessary, but not sufficient conditions for stability and, conversely, sufficient but not necessary conditions. Numerical examples demonstrate the efficacy of these conditions and the ease of use.
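
    A minimal sketch of the PFC law for a first-order model, showing where the two user choices discussed in the paper, the coincidence horizon n_y and the desired closed-loop pole λ, enter the control computation. This is the textbook independent-model form under a perfect model and no disturbance, not a reproduction of the paper's stability conditions, and all numeric values are placeholders.

      # First-order process and (here, perfect) internal model: y(k+1) = a*y(k) + b*u(k)
      a, b = 0.9, 0.1
      lam = 0.8            # desired closed-loop pole (target-trajectory decay)
      ny = 5               # coincidence horizon

      y, ym, w = 0.0, 0.0, 1.0      # process output, model output, set point
      for k in range(40):
          # PFC: choose a constant input so the model prediction at k+ny matches the
          # target-trajectory increment (1 - lam^ny) * (w - y)
          num = (w - y) * (1 - lam**ny) + (1 - a**ny) * ym
          den = b * (1 - a**ny) / (1 - a)
          u = num / den
          y = a * y + b * u          # process (no model mismatch in this sketch)
          ym = a * ym + b * u        # internal model
      print(f"output after 40 steps: {y:.3f} (set point {w})")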

  14. Graph Theory-Based Pinning Synchronization of Stochastic Complex Dynamical Networks.

    PubMed

    Li, Xiao-Jian; Yang, Guang-Hong

    2017-02-01

    This paper is concerned with the adaptive pinning synchronization problem of stochastic complex dynamical networks (CDNs). Based on algebraic graph theory and Lyapunov theory, pinning controller design conditions are derived, and the rigorous convergence analysis of synchronization errors in the probability sense is also conducted. Compared with the existing results, the topology structures of the stochastic CDN are allowed to be unknown due to the use of graph theory. In particular, it is shown that the selection of nodes for pinning depends on the unknown lower bounds of coupling strengths. Finally, an example based on a Chua's circuit network is given to validate the effectiveness of the theoretical results.

  15. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis.

    PubMed

    Jones, Reese E; Mandadapu, Kranthi K

    2012-04-21

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
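
    The sketch below applies the Green-Kubo recipe to synthetic flux-like data: an ensemble of AR(1) replicas with a known correlation time stands in for heat-flux time series, the autocorrelation integral of each replica gives a transport-coefficient estimate, and the replica spread provides a simple error bar. The on-the-fly stationarity checks and the Zwanzig-Ailawadi/Frenkel variance bounds of the paper are not reproduced.

      import numpy as np

      rng = np.random.default_rng(5)

      # synthetic "flux" signals: an ensemble of AR(1) replicas with a known correlation time
      n_rep, n_t, phi = 16, 20000, 0.95
      flux = np.zeros((n_rep, n_t))
      for t in range(1, n_t):
          flux[:, t] = phi * flux[:, t - 1] + rng.normal(0, 1.0, n_rep)

      def acf_integral(x, max_lag=200, dt=1.0):
          """Trapezoidal integral of the normalized autocorrelation of a 1-D series."""
          acf = np.array([np.mean(x[:x.size - L] * x[L:]) for L in range(max_lag)]) / x.var()
          return dt * (np.sum(acf) - 0.5 * acf[0])

      # Green-Kubo style estimate: transport coefficient proportional to the ACF integral,
      # with a replica-spread error bar standing in for the rigorous bounds of the paper
      estimates = np.array([acf_integral(r) for r in flux])
      exact = (1 + phi) / (2 * (1 - phi))      # analytic value for this AR(1) process
      print(f"integrated ACF: {estimates.mean():.2f} "
            f"+/- {estimates.std(ddof=1) / np.sqrt(n_rep):.2f}   (analytic {exact:.2f})")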

  16. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    NASA Astrophysics Data System (ADS)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.

  17. Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method

    NASA Astrophysics Data System (ADS)

    De Waal, Sybrand A.

    1996-07-01

    A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959-summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.
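
    A small numeric illustration of the conserved-denominator idea: if a component occurs in only one of two mixing end-members, its concentration in the mixture fixes the mixing proportion, and linear subtraction then recovers the other end-member's composition. The compositions below are invented and the statistical testing step (Woronow, 1994) is omitted.

      import numpy as np

      # Two end-members A and B mix; component "P2O5" is assumed to occur only in B.
      components = ["SiO2", "MgO", "P2O5"]
      comp_B  = np.array([45.0, 20.0, 2.0])        # known end-member B (wt%)
      mixture = np.array([55.0, 12.0, 0.5])        # analysed mixture (wt%)

      # mass fraction of B in the mixture from the conserved (unique) component
      f_B = mixture[2] / comp_B[2]

      # linear subtraction recovers the composition of A (which carries no P2O5)
      comp_A = (mixture - f_B * comp_B) / (1.0 - f_B)
      print(f"fraction of B: {f_B:.3f}")
      print({c: round(v, 2) for c, v in zip(components, comp_A)})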

  18. Deriving Color-Color Transformations for VRI Photometry

    NASA Astrophysics Data System (ADS)

    Taylor, B. J.; Joner, M. D.

    2006-12-01

    In this paper, transformations between Cousins R-I and other indices are considered. New transformations to Cousins V-R and Johnson V-K are derived, a published transformation involving T1-T2 on the Washington system is rederived, and the basis for a transformation involving b-y is considered. In addition, a statistically rigorous procedure for deriving such transformations is presented and discussed in detail. Highlights of the discussion include (1) the need for statistical analysis when least-squares relations are determined and interpreted, (2) the permitted forms and best forms for such relations, (3) the essential role played by accidental errors, (4) the decision process for selecting terms to appear in the relations, (5) the use of plots of residuals, (6) detection of influential data, (7) a protocol for assessing systematic effects from absorption features and other sources, (8) the reasons for avoiding extrapolation of the relations, (9) a protocol for ensuring uniformity in data used to determine the relations, and (10) the derivation and testing of the accidental errors of those data. To put the last of these subjects in perspective, it is shown that rms errors for VRI photometry have been as small as 6 mmag for more than three decades and that standard errors for quantities derived from such photometry can be as small as 1 mmag or less.

  19. A discontinuous Poisson-Boltzmann equation with interfacial jump: homogenisation and residual error estimate.

    PubMed

    Fellner, Klemens; Kovtunenko, Victor A

    2016-01-01

    A nonlinear Poisson-Boltzmann equation with inhomogeneous Robin type boundary conditions at the interface between two materials is investigated. The model describes the electrostatic potential generated by a vector of ion concentrations in a periodic multiphase medium with dilute solid particles. The key issue stems from interfacial jumps, which necessitate discontinuous solutions to the problem. Based on variational techniques, we derive the homogenisation of the discontinuous problem and establish a rigorous residual error estimate up to the first-order correction.

  20. Quadratic Zeeman effect in hydrogen Rydberg states: Rigorous error estimates for energy eigenvalues, energy eigenfunctions, and oscillator strengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falsaperla, P.; Fonte, G.

    1994-10-01

    A variational method, based on some results due to T. Kato [Proc. Phys. Soc. Jpn. 4, 334 (1949)] and previously discussed, is here applied to the hydrogen atom in uniform magnetic fields of tesla in order to calculate, with a rigorous error estimate, energy eigenvalues, energy eigenfunctions, and oscillator strengths relative to Rydberg states up to just below the field-free ionization threshold. Making use of a basis (parabolic Sturmian basis) with a size varying from 990 up to 5050, we obtain, over the energy range of -190 to -24 cm^-1, all of the eigenvalues and a good part of the oscillator strengths with a remarkable accuracy. This, however, decreases with increasing excitation energy and, thus, above ~-24 cm^-1, we obtain results of good accuracy only for eigenvalues ranging up to ~-12 cm^-1.

  1. Imaginary-frequency polarizability and van der Waals force constants of two-electron atoms, with rigorous bounds

    NASA Technical Reports Server (NTRS)

    Glover, R. M.; Weinhold, F.

    1977-01-01

    Variational functionals of Braunn and Rebane (1972) for the imaginary-frequency polarizability (IFP) have been generalized by the method of Gramian inequalities to give rigorous upper and lower bounds, valid even when the true (but unknown) unperturbed wavefunction must be represented by a variational approximation. Using these formulas in conjunction with flexible variational trial functions, tight error bounds are computed for the IFP and the associated two- and three-body van der Waals interaction constants of the ground 1 ^1S and metastable 2 ^1,3S states of He and Li(+). These bounds generally establish the ground-state properties to within a fraction of a per cent and metastable properties to within a few per cent, permitting a comparative assessment of competing theoretical methods at this level of accuracy. Unlike previous 'error bounds' for these properties, the present results have a completely a priori theoretical character, with no empirical input data.

  2. Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.

    2015-01-01

    This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
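
    A minimal sketch of the POD/Galerkin step on a linear 1-D heat-conduction model: snapshots of the full-order solution are collected, a reduced basis is extracted by SVD, and the projected system is re-simulated and compared with the full model. The DEIM and TPWL treatments of the nonlinear radiative terms described in the paper are beyond this sketch, and all model parameters are placeholders.

      import numpy as np

      # full-order model: 1-D heat conduction, x' = A x + f (semi-discrete, explicit Euler)
      n, dt, steps = 200, 1e-4, 2000
      A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / (1.0 / n) ** 2 * 1e-3
      f = np.zeros(n)
      f[n // 2] = 50.0                                   # localized heat source
      x = np.zeros(n)
      snaps = []
      for k in range(steps):
          x = x + dt * (A @ x + f)
          if k % 20 == 0:
              snaps.append(x.copy())

      # POD basis from the snapshot matrix; keep r modes
      U, s, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
      r = 10
      Phi = U[:, :r]

      # Galerkin projection of the full operator and re-simulation in the reduced space
      Ar, fr = Phi.T @ A @ Phi, Phi.T @ f
      z = np.zeros(r)
      for k in range(steps):
          z = z + dt * (Ar @ z + fr)

      err = np.linalg.norm(Phi @ z - x) / np.linalg.norm(x)
      print(f"relative error of the {r}-mode ROM: {err:.2e}")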

  3. A numerical identifiability test for state-space models--application to optimal experimental design.

    PubMed

    Hidalgo, M E; Ayesa, E

    2001-01-01

    This paper describes a mathematical tool for identifiability analysis, easily applicable to high order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and in the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate µ_H and the concentration of heterotrophic biomass X_BH.
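
    A generic illustration of the information-matrix test: numerically evaluate output sensitivities over a simulated experiment, assemble the Fisher information matrix, and read off Cramer-Rao style standard errors, correlations, and conditioning. A two-parameter exponential-growth model stands in for ASM No. 1 here; the parameter names, sampling times, and noise level are placeholders.

      import numpy as np

      # toy model standing in for a calibration experiment: y(t) = X0 * exp(mu * t)
      # parameters theta = (mu, X0); measurement noise sigma assumed known
      t = np.linspace(0.0, 4.0, 25)
      theta0 = np.array([0.5, 2.0])
      sigma = 0.05

      def model(theta):
          mu, x0 = theta
          return x0 * np.exp(mu * t)

      # numerical output sensitivities d y / d theta (finite differences)
      def sensitivities(theta, h=1e-6):
          cols = []
          for i in range(theta.size):
              dp = theta.copy()
              dp[i] += h
              cols.append((model(dp) - model(theta)) / h)
          return np.column_stack(cols)

      S = sensitivities(theta0)
      FIM = S.T @ S / sigma**2                   # Fisher information for Gaussian errors
      cov = np.linalg.inv(FIM)                   # Cramer-Rao lower bound on parameter covariance
      std = np.sqrt(np.diag(cov))
      corr = cov[0, 1] / (std[0] * std[1])
      print(f"expected std(mu)={std[0]:.3g}, std(X0)={std[1]:.3g}, correlation={corr:.3f}")
      print(f"FIM condition number: {np.linalg.cond(FIM):.2e}")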

  4. Convergence analysis of surrogate-based methods for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhang, Yuan-Xiang

    2017-12-01

    The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L^2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. The error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of the surrogate model based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
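
    The sketch below illustrates the surrogate-accelerated workflow on a one-parameter toy problem: a plain polynomial least-squares surrogate of the forward model is built offline over the prior support, and a random-walk Metropolis sampler is then run on both the exact and the surrogate posteriors for comparison. This is not the Christoffel least-squares/generalized polynomial chaos construction of the paper, and the forward model, prior, and noise level are assumptions.

      import numpy as np

      rng = np.random.default_rng(6)

      # forward model and synthetic data (one scalar parameter, Gaussian noise)
      G = lambda th: np.sin(2.0 * th) + 0.5 * th
      theta_true, sigma = 0.8, 0.05
      data = G(theta_true) + rng.normal(0, sigma)

      # offline phase: polynomial least-squares surrogate of G over the prior support [0, 2]
      nodes = np.linspace(0.0, 2.0, 40)
      coef = np.polyfit(nodes, G(nodes), deg=8)
      G_surr = lambda th: np.polyval(coef, th)

      def log_post(th, fwd):
          if not 0.0 <= th <= 2.0:                 # uniform prior on [0, 2]
              return -np.inf
          return -0.5 * ((data - fwd(th)) / sigma) ** 2

      # Metropolis sampling of the posterior (forward solves on the surrogate are trivial)
      def metropolis(fwd, n=20000, step=0.1):
          th, lp, out = 1.0, log_post(1.0, fwd), []
          for _ in range(n):
              cand = th + step * rng.normal()
              lpc = log_post(cand, fwd)
              if np.log(rng.uniform()) < lpc - lp:
                  th, lp = cand, lpc
              out.append(th)
          return np.array(out[n // 2:])            # discard burn-in

      for name, fwd in [("exact model", G), ("surrogate", G_surr)]:
          s = metropolis(fwd)
          print(f"{name}: posterior mean {s.mean():.3f} +/- {s.std():.3f}")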

  5. Enumerating Sparse Organisms in Ships’ Ballast Water: Why Counting to 10 Is Not So Easy

    PubMed Central

    2011-01-01

    To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships’ ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed. PMID:21434685
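
    The following sketch shows the kind of power calculation described, assuming organisms are Poisson-distributed in the sampled volume: for each candidate sample volume, a critical count is chosen to cap the false-noncompliance rate for a discharge exactly at the standard, and the probability of detecting a discharge at three times the standard is then computed. The standard, error rate, and noncompliant concentration used are illustrative, and recovery errors are ignored.

      import math

      def poisson_cdf(k, lam):
          """P(X <= k) for X ~ Poisson(lam)."""
          return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

      standard = 10.0          # organisms per m^3 (assumed discharge standard)
      alpha = 0.05             # allowed type I error for a discharge exactly at the standard
      true_conc = 30.0         # hypothetical noncompliant concentration (3x the standard)

      for volume in (0.1, 0.5, 1.0, 3.0, 7.0):
          lam0 = standard * volume
          # smallest critical count c with P(count > c | compliant at the standard) <= alpha
          c = 0
          while 1.0 - poisson_cdf(c, lam0) > alpha:
              c += 1
          power = 1.0 - poisson_cdf(c, true_conc * volume)
          print(f"V = {volume:4.1f} m^3: reject if count > {c:3d}, power = {power:.2f}")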

  6. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    PubMed

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.
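
    As a hedged sketch of the modelling step, the code below simulates a static first-order Gauss-Markov noise record, fits an AR(1) coefficient by least squares, and converts it back to a correlation time. The sampling rate, correlation time, and noise level are invented, and the procedure is far simpler than the temperature-dependent AR-based GM identification of the paper.

      import numpy as np

      rng = np.random.default_rng(7)

      # synthetic static sensor noise: first-order Gauss-Markov process sampled at fs
      fs, n = 100.0, 200000                  # Hz, samples (assumed)
      tau_true = 50.0                        # correlation time (s), an assumed value
      beta = np.exp(-1.0 / (fs * tau_true))  # equivalent AR(1) coefficient
      w = np.zeros(n)
      for k in range(1, n):
          w[k] = beta * w[k - 1] + rng.normal(0, 0.01)

      # AR(1) fit by least squares (Yule-Walker for order 1): w[k] ~ a * w[k-1]
      a_hat = np.dot(w[1:], w[:-1]) / np.dot(w[:-1], w[:-1])
      tau_hat = -1.0 / (fs * np.log(a_hat))
      print(f"estimated correlation time: {tau_hat:.1f} s (true {tau_true} s)")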

  7. Enumerating sparse organisms in ships' ballast water: why counting to 10 is not so easy.

    PubMed

    Miller, A Whitman; Frazier, Melanie; Smith, George E; Perry, Elgin S; Ruiz, Gregory M; Tamburri, Mario N

    2011-04-15

    To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships' ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed.

  8. Qualitative and quantitative assessment of Illumina's forensic STR and SNP kits on MiSeq FGx™.

    PubMed

    Sharma, Vishakha; Chow, Hoi Yan; Siegel, Donald; Wurmbach, Elisa

    2017-01-01

    Massively parallel sequencing (MPS) is a powerful tool transforming DNA analysis in multiple fields ranging from medicine, to environmental science, to evolutionary biology. In forensic applications, MPS offers the ability to significantly increase the discriminatory power of human identification as well as aid in mixture deconvolution. However, before the benefits of any new technology can be realized, its quality, consistency, sensitivity, and specificity must be rigorously evaluated in order to gain a detailed understanding of the technique, including sources of error, error rates, and other restrictions/limitations. This extensive study assessed the performance of Illumina's MiSeq FGx MPS system and ForenSeq™ kit in nine experimental runs comprising 314 reaction samples. In-depth data analysis evaluated the consequences of different assay conditions on test results. Variables included: sample numbers per run, targets per run, DNA input per sample, and replications. Results are presented as heat maps revealing patterns for each locus. Data analysis focused on read numbers (allele coverage), drop-outs, drop-ins, and sequence analysis. The study revealed that loci with high read numbers performed better and resulted in fewer drop-outs and well-balanced heterozygous alleles. Several loci were prone to drop-outs, which led to falsely typed homozygotes and therefore to genotype errors. Sequence analysis of allele drop-in typically revealed a single nucleotide change (deletion, insertion, or substitution). Analyses of sequences, no-template controls, and spurious alleles suggest no contamination during library preparation, pooling, and sequencing, but indicate that sequencing or PCR errors may have occurred due to DNA polymerase infidelities. Finally, we found that utilizing Illumina's FGx System at recommended conditions does not guarantee 100% outcomes for all samples tested, including the positive control, and required manual editing due to low read numbers and/or allele drop-in. These findings are important for progressing towards implementation of MPS in forensic DNA testing.

  9. Online Recorded Data-Based Composite Neural Control of Strict-Feedback Systems With Application to Hypersonic Flight Dynamics.

    PubMed

    Xu, Bin; Yang, Daipeng; Shi, Zhongke; Pan, Yongping; Chen, Badong; Sun, Fuchun

    2017-09-25

    This paper investigates the online recorded data-based composite neural control of uncertain strict-feedback systems using the backstepping framework. In each step of the virtual control design, a neural network (NN) is employed for uncertainty approximation. In previous works, most designs are aimed directly at system stability, ignoring how the NN actually works as an approximator. In this paper, to enhance the learning ability, a novel prediction error signal is constructed to provide additional correction information for the NN weight update using online recorded data. In this way, the neural approximation precision is greatly improved, and the convergence speed can be faster. Furthermore, a sliding mode differentiator is employed to approximate the derivative of the virtual control signal, and thus the complex analysis of the backstepping design can be avoided. The closed-loop stability is rigorously established, and the boundedness of the tracking error can be guaranteed. Through simulation of hypersonic flight dynamics, the proposed approach exhibits better tracking performance.

  10. Optimizing Hybrid Metrology: Rigorous Implementation of Bayesian and Combined Regression

    PubMed Central

    Henn, Mark-Alexander; Silver, Richard M.; Villarrubia, John S.; Zhang, Nien Fan; Zhou, Hui; Barnes, Bryan M.; Ming, Bin; Vladár, András E.

    2015-01-01

    Hybrid metrology, e.g., the combination of several measurement techniques to determine critical dimensions, is an increasingly important approach to meet the needs of the semiconductor industry. A proper use of hybrid metrology may yield not only more reliable estimates for the quantitative characterization of 3-D structures but also a more realistic estimation of the corresponding uncertainties. Recent developments at the National Institute of Standards and Technology (NIST) feature the combination of optical critical dimension (OCD) measurements and scanning electron microscope (SEM) results. The hybrid methodology offers the potential to make measurements of essential 3-D attributes that may not be otherwise feasible. However, combining techniques gives rise to essential challenges in error analysis and in comparing results from different instrument models, especially the effect of systematic and highly correlated measurement errors on the χ² function that is minimized. Both hypothetical examples and measurement data are used to illustrate solutions to these challenges. PMID:26681991
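
    One of the core technical points, handling correlated errors in the minimized χ² function, can be illustrated with a generalized least squares toy example; the linear model and covariance matrix below are assumptions for illustration, not the NIST OCD/SEM implementation.

```python
import numpy as np

# When measurement errors are correlated, the chi-square to be minimized uses the
# full covariance, chi2(p) = r(p)^T @ inv(Sigma) @ r(p), rather than a diagonal
# (independent-error) weighting.  Linear toy model only.
rng = np.random.default_rng(2)
A = np.column_stack([np.ones(20), np.linspace(0, 1, 20)])   # design matrix
p_true = np.array([1.0, 2.5])

# Correlated errors: a common (systematic) offset plus independent noise.
Sigma = 0.02**2 * np.eye(20) + 0.05**2 * np.ones((20, 20))
y = A @ p_true + rng.multivariate_normal(np.zeros(20), Sigma)

W = np.linalg.inv(Sigma)
p_gls = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)            # minimizes the full chi-square
cov_gls = np.linalg.inv(A.T @ W @ A)                         # parameter uncertainties
print("estimate:", p_gls, " 1-sigma:", np.sqrt(np.diag(cov_gls)))
```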

  11. QTest: Quantitative Testing of Theories of Binary Choice.

    PubMed

    Regenwetter, Michel; Davis-Stober, Clintin P; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William

    2014-01-01

    The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of "Random Cumulative Prospect Theory." A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences.

  12. Design and optimization of a high-efficiency array generator in the mid-IR with binary subwavelength grooves.

    PubMed

    Bloom, Guillaume; Larat, Christian; Lallier, Eric; Lee-Bouhours, Mane-Si Laure; Loiseaux, Brigitte; Huignard, Jean-Pierre

    2011-02-10

    We have designed a high-efficiency array generator composed of subwavelength grooves etched in a GaAs substrate for operation at 4.5 μm. The method used combines rigorous coupled wave analysis with an optimization algorithm. The optimized beam splitter has both a high efficiency (∼96%) and a good intensity uniformity (∼0.2%). The fabrication error tolerances are numerically calculated, and it is shown that this subwavelength array generator could be fabricated with current electron beam writers and inductively coupled plasma etching. Finally, we studied the effect of a simple and realistic antireflection coating on the performance of the beam splitter.

  13. A mathematical approach to beam matching

    PubMed Central

    Manikandan, A; Nandy, M; Gossman, M S; Sureka, C S; Ray, A; Sujatha, N

    2013-01-01

    Objective: This report provides the mathematical commissioning instructions for the evaluation of beam matching between two different linear accelerators. Methods: Test packages were first obtained including an open beam profile, a wedge beam profile and a depth–dose curve, each from a 10 × 10 cm² beam. From these plots, a spatial error (SE) and a percentage dose error were introduced to form new plots. These three test package curves and the associated error curves were then differentiated in space with respect to dose for a first and second derivative to determine the slope and curvature of each data set. The derivatives, also known as bandwidths, were analysed to determine the level of acceptability for the beam matching test described in this study. Results: The open and wedged beam profiles and the depth–dose curve in the build-up region were determined to match within 1% dose error and 1-mm SE for 71.4% and 70.8% of all points, respectively. For the depth–dose analysis specifically, beam matching was achieved for 96.8% of all points at 1%/1 mm beyond the depth of maximum dose. Conclusion: To quantify the beam matching procedure in any clinic, the user needs merely to generate test packages from their reference linear accelerator. It then follows that if the bandwidths are smooth and continuous across the profile and depth, there is greater likelihood of beam matching. Differentiated spatial and percentage variation analysis is appropriate, ideal and accurate for this commissioning process. Advances in knowledge: We report a mathematically rigorous formulation for the qualitative evaluation of beam matching between linear accelerators. PMID:23995874
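
    A hedged sketch of the general idea: differentiate a profile to obtain slope and curvature ("bandwidths") and score points against a 1% dose / 1 mm criterion. The profile shapes and the pass/fail rule below are simplified assumptions, not the authors' formulation.

```python
import numpy as np

# Differentiate a measured profile with respect to position to obtain slope and
# curvature, then count the fraction of points agreeing within 1% dose / 1 mm.
x = np.linspace(-80, 80, 321)                      # mm, hypothetical 0.5 mm grid
ref = 100 / (1 + np.exp((np.abs(x) - 50) / 3))     # reference profile (toy sigmoid edges)
test = 100 / (1 + np.exp((np.abs(x) - 50.5) / 3))  # test machine: 0.5 mm edge shift

slope = np.gradient(ref, x)                        # first derivative (dose / mm)
curvature = np.gradient(slope, x)                  # second derivative ("bandwidth")

# 1%/1 mm criterion: dose difference converted to an equivalent spatial error
# through the local slope, combined with the direct dose criterion.
dose_diff = test - ref
spatial_equiv = np.where(np.abs(slope) > 1e-6, dose_diff / slope, 0.0)
passing = (np.abs(dose_diff) <= 1.0) | (np.abs(spatial_equiv) <= 1.0)
print(f"max curvature magnitude: {np.abs(curvature).max():.2f} %/mm^2")
print(f"points matching within 1%/1 mm: {100 * passing.mean():.1f}%")
```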

  14. The Aharonov-Bohm effect and Tonomura et al. experiments: Rigorous results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballesteros, Miguel; Weder, Ricardo

    The Aharonov-Bohm effect is a fundamental issue in physics. It describes the physically important electromagnetic quantities in quantum mechanics. Its experimental verification constitutes a test of the theory of quantum mechanics itself. The remarkable experiments of Tonomura et al. ['Observation of Aharonov-Bohm effect by electron holography', Phys. Rev. Lett. 48, 1443 (1982) and 'Evidence for Aharonov-Bohm effect with magnetic field completely shielded from electron wave', Phys. Rev. Lett. 56, 792 (1986)] are widely considered as the only experimental evidence of the physical existence of the Aharonov-Bohm effect. Here we give the first rigorous proof that the classical ansatz of Aharonov and Bohm of 1959 ['Significance of electromagnetic potentials in the quantum theory', Phys. Rev. 115, 485 (1959)], that was tested by Tonomura et al., is a good approximation to the exact solution to the Schroedinger equation. This also proves that the electron, which is represented by the exact solution, is not accelerated, in agreement with the recent experiment of Caprez et al. in 2007 ['Macroscopic test of the Aharonov-Bohm effect', Phys. Rev. Lett. 99, 210401 (2007)], which shows that the results of the Tonomura et al. experiments cannot be explained by the action of a force. Under the assumption that the incoming free electron is a Gaussian wave packet, we estimate the exact solution to the Schroedinger equation for all times. We provide a rigorous, quantitative error bound for the difference in norm between the exact solution and the Aharonov-Bohm ansatz. Our bound is uniform in time. We also prove that on the Gaussian asymptotic state the scattering operator is given by a constant phase shift, up to a quantitative error bound that we provide. Our results show that for intermediate size electron wave packets, smaller than the ones used in the Tonomura et al. experiments, quantum mechanics predicts the results observed by Tonomura et al. with an error bound smaller than 10⁻⁹⁹. It would be quite interesting to perform experiments with electron wave packets of intermediate size. Furthermore, we provide a physical interpretation of our error bound.

  15. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated step of data analysis techniques would be to use the entire classification error matrices using the methods of discrete multivariate analysis or of multivariate analysis of variance.
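
    A minimal sketch of the classification error (confusion) matrix quantities described above, with invented counts: the diagonal gives correct classifications, and the off-diagonal row and column remainders give commission and omission errors.

```python
import numpy as np

# Rows: interpreted classes; columns: verified classes; diagonal: correct
# classifications.  Off-diagonal row entries are commission errors, off-diagonal
# column entries are omission errors.
error_matrix = np.array([[48,  3,  1],
                         [ 5, 40,  4],
                         [ 2,  6, 41]])          # hypothetical counts, 3 categories

total = error_matrix.sum()
overall_accuracy = np.trace(error_matrix) / total
commission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=1)   # by row
omission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=0)     # by column

print(f"overall accuracy: {overall_accuracy:.2%}")
print("commission error by category:", np.round(commission, 3))
print("omission error by category:  ", np.round(omission, 3))
```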

  16. Mock jurors' use of error rates in DNA database trawls.

    PubMed

    Scurich, Nicholas; John, Richard S

    2013-12-01

    Forensic science is not infallible, as data collected by the Innocence Project have revealed. The rate at which errors occur in forensic DNA testing-the so-called "gold standard" of forensic science-is not currently known. This article presents a Bayesian analysis to demonstrate the profound impact that error rates have on the probative value of a DNA match. Empirical evidence on whether jurors are sensitive to this effect is equivocal: Studies have typically found they are not, while a recent, methodologically rigorous study found that they can be. This article presents the results of an experiment that examined this issue within the context of a database trawl case in which one DNA profile was tested against a multitude of profiles. The description of the database was manipulated (i.e., "medical" or "offender" database, or not specified) as was the rate of error (i.e., one-in-10 or one-in-1,000). Jury-eligible participants were nearly twice as likely to convict in the offender database condition compared to the condition not specified. The error rates did not affect verdicts. Both factors, however, affected the perception of the defendant's guilt, in the expected direction, although the size of the effect was meager compared to Bayesian prescriptions. The results suggest that the disclosure of an offender database to jurors might constitute prejudicial evidence, and calls for proficiency testing in forensic science as well as training of jurors are echoed. (c) 2013 APA, all rights reserved
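
    A hedged sketch of the kind of Bayesian calculation the article refers to, in which the likelihood ratio of a reported match is limited by the sum of the random match probability and the false positive probability; all numerical values are illustrative assumptions.

```python
# Probative value of a reported DNA match when laboratory error is possible:
# P(report match | source) ~ 1, P(report match | not source) ~ RMP + FPP.
def posterior_odds(prior_odds, rmp, fpp):
    likelihood_ratio = 1.0 / (rmp + fpp)
    return prior_odds * likelihood_ratio

prior = 1 / 1000                 # hypothetical prior odds of being the source
rmp = 1e-9                       # one-in-a-billion random match probability
for fpp in (0.0, 1/1000, 1/10):  # error rates like those manipulated in the study
    odds = posterior_odds(prior, rmp, fpp)
    print(f"FPP = {fpp:7.4f}  posterior odds = {odds:12.2f}  P(source) = {odds/(1+odds):.4f}")
```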

  17. A rigorous approach to self-checking programming

    NASA Technical Reports Server (NTRS)

    Hua, Kien A.; Abraham, Jacob A.

    1986-01-01

    Self-checking programming is shown to be an effective concurrent error detection technique. The reliability of a self-checking program, however, relies on the quality of its assertion statements. A self-checking program written without formal guidelines could provide poor coverage of the errors. A constructive technique for self-checking programming is presented. A Structured Program Design Language (SPDL) suitable for self-checking software development is defined. A set of formal rules was also developed that allows the transformation of SPDL designs into self-checking designs to be done in a systematic manner.
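
    As a small illustration of the self-checking idea (not the SPDL notation or transformation rules from the paper), the routine below carries executable assertions that independently verify its own postconditions at run time.

```python
from collections import Counter

def sorted_copy(values):
    result = sorted(values)
    # Assertion block: verify the postconditions rather than trusting the computation.
    assert all(a <= b for a, b in zip(result, result[1:])), \
        "self-check failed: output not in non-decreasing order"
    assert Counter(result) == Counter(values), \
        "self-check failed: output is not a permutation of the input"
    return result

print(sorted_copy([3, 1, 2]))   # [1, 2, 3]; a faulty implementation would trip an assertion
```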

  18. Volumetric breast density measurement: sensitivity analysis of a relative physics approach

    PubMed Central

    Lau, Susie; Abdul Aziz, Yang Faridah

    2016-01-01

    Objective: To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. Methods: 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Results: Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Conclusion: Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Advances in knowledge: Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, albeit for more advanced applications such as tracking density change over time, it remains to be seen how accurate the measures need to be. PMID:27452264

  19. Volumetric breast density measurement: sensitivity analysis of a relative physics approach.

    PubMed

    Lau, Susie; Ng, Kwan Hoong; Abdul Aziz, Yang Faridah

    2016-10-01

    To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. 3317 raw digital mammograms were processed with Volpara(®) (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, albeit for more advanced applications such as tracking density change over time, it remains to be seen how accurate the measures need to be.

  20. Photomask CD and LER characterization using Mueller matrix spectroscopic ellipsometry

    NASA Astrophysics Data System (ADS)

    Heinrich, A.; Dirnstorfer, I.; Bischoff, J.; Meiner, K.; Ketelsen, H.; Richter, U.; Mikolajick, T.

    2014-10-01

    Critical dimension and line edge roughness on photomask arrays are determined with Mueller matrix spectroscopic ellipsometry. Arrays with large sinusoidal perturbations are measured for different azimuth angles and compared with simulations based on rigorous coupled wave analysis. Experiment and simulation show that line edge roughness leads to characteristic changes in the different Mueller matrix elements. The influence of line edge roughness is interpreted as an increase of the isotropic character of the sample. The changes in the Mueller matrix elements are very similar when the arrays are statistically perturbed with rms roughness values in the nanometer range, suggesting that the results on the sinusoidal test structures are also relevant for "real" mask errors. Critical dimension errors and line edge roughness have a similar impact on the SE MM measurement. To distinguish between both deviations, a strategy based on the calculation of sensitivities and correlation coefficients for all Mueller matrix elements is shown. The Mueller matrix elements M13/M31 and M34/M43 are the most suitable elements due to their high sensitivities to critical dimension errors and line edge roughness and, at the same time, a low correlation coefficient between both influences. From the simulated sensitivities, it is estimated that the measurement accuracy has to be in the order of 0.01 and 0.001 for the detection of a 1 nm critical dimension error and 1 nm line edge roughness, respectively.

  1. Online Psychology: Trial and Error in Course Development

    ERIC Educational Resources Information Center

    Harman, Marsha J.

    2009-01-01

    Online courses appear to be the future if colleges and universities choose to increase enrollments with students who need more flexibility in scheduling. The challenge has been to create a course that is rigorous with the limitations to physical presence of the instructor and the parameters inherent in technological delivery. This article relates…

  2. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  3. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    NASA Astrophysics Data System (ADS)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

    The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of the global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time the MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we make the attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component in dependence of the harmonic degree, and the impact of using/not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering acts also on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
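
    The generic principle of rigorous covariance propagation can be sketched as follows: if the derived quantities are a linear functional of the geoid coefficients, the full variance-covariance matrix propagates as F Σ Fᵀ, which keeps all correlations. The dimensions and matrices below are invented stand-ins, not the GOCE/GOCO03S processing chain.

```python
import numpy as np

# Linear functional v = F @ g of spherical-harmonic coefficients g with full
# covariance Sigma propagates as Cov(v) = F @ Sigma @ F.T.
rng = np.random.default_rng(3)
n_coeff = 50
L = rng.normal(size=(n_coeff, n_coeff))
Sigma = L @ L.T * 1e-6                 # hypothetical full coefficient covariance (VCM)
F = rng.normal(size=(5, n_coeff))      # hypothetical linear functionals (e.g., filtered MDT/velocity)

cov_full = F @ Sigma @ F.T                          # rigorous propagation with correlations
cov_diag = F @ np.diag(np.diag(Sigma)) @ F.T        # variance-only propagation for comparison
print("sigma with covariances:", np.sqrt(np.diag(cov_full)))
print("sigma, variances only: ", np.sqrt(np.diag(cov_diag)))
```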

  4. Re-use of pilot data and interim analysis of pivotal data in MRMC studies: a simulation study

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Samuelson, Frank; Sahiner, Berkman; Petrick, Nicholas

    2017-03-01

    Novel medical imaging devices are often evaluated with multi-reader multi-case (MRMC) studies in which radiologists read images of patient cases for a specified clinical task (e.g., cancer detection). A pilot study is often used to measure the effect size and variance parameters that are necessary for sizing a pivotal study (including the numbers of readers, non-diseased cases, and diseased cases). Due to the practical difficulty of collecting patient cases or recruiting clinical readers, some investigators attempt to include the pilot data as part of their pivotal study. In other situations, some investigators attempt to perform an interim analysis of their pivotal study data, based upon which the sample sizes may be re-estimated. Re-use of the pilot data or interim analyses of the pivotal data may inflate the type I error of the pivotal study. In this work, we use the Roe and Metz model to simulate MRMC data under the null hypothesis (i.e., two devices have equal diagnostic performance) and investigate the type I error rate for several practical designs involving re-use of pilot data or interim analysis of pivotal data. Our preliminary simulation results indicate that, under the simulation conditions we investigated, the inflation of the type I error is absent or only marginal for some design strategies (e.g., re-use of patient data without re-using readers, and sample size re-estimation without using the effect size estimated in the interim analysis). Upon further verification, these are potentially useful design methods in that they may help make a study less burdensome and have a better chance to succeed without substantial loss of statistical rigor.

  5. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis

    PubMed Central

    2014-01-01

    Background The Health Information Technology for Economic and Clinical Health (HITECH) Act subsidizes implementation by hospitals of electronic health records with computerized provider order entry (CPOE), which may reduce patient injuries caused by medication errors (preventable adverse drug events, pADEs). Effects on pADEs have not been rigorously quantified, and effects on medication errors have been variable. The objectives of this analysis were to assess the effectiveness of CPOE at reducing pADEs in hospital-related settings, and examine reasons for heterogeneous effects on medication errors. Methods Articles were identified using MEDLINE, Cochrane Library, Econlit, web-based databases, and bibliographies of previous systematic reviews (September 2013). Eligible studies compared CPOE with paper-order entry in acute care hospitals, and examined diverse pADEs or medication errors. Studies on children or with limited event-detection methods were excluded. Two investigators extracted data on events and factors potentially associated with effectiveness. We used random effects models to pool data. Results Sixteen studies addressing medication errors met pooling criteria; six also addressed pADEs. Thirteen studies used pre-post designs. Compared with paper-order entry, CPOE was associated with half as many pADEs (pooled risk ratio (RR) = 0.47, 95% CI 0.31 to 0.71) and medication errors (RR = 0.46, 95% CI 0.35 to 0.60). Regarding reasons for heterogeneous effects on medication errors, five intervention factors and two contextual factors were sufficiently reported to support subgroup analyses or meta-regression. Differences between commercial versus homegrown systems, presence and sophistication of clinical decision support, hospital-wide versus limited implementation, and US versus non-US studies were not significant, nor was timing of publication. Higher baseline rates of medication errors predicted greater reductions (P < 0.001). Other context and implementation variables were seldom reported. Conclusions In hospital-related settings, implementing CPOE is associated with a greater than 50% decline in pADEs, although the studies used weak designs. Decreases in medication errors are similar and robust to variations in important aspects of intervention design and context. This suggests that CPOE implementation, as subsidized under the HITECH Act, may benefit public health. More detailed reporting of the context and process of implementation could shed light on factors associated with greater effectiveness. PMID:24894078
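
    The pooled risk ratios quoted above come from random effects models; a hedged sketch of DerSimonian-Laird pooling on invented study data is shown below (the numbers are not the review's data).

```python
import numpy as np

# DerSimonian-Laird random-effects pooling of risk ratios on the log scale.
rr = np.array([0.40, 0.55, 0.35, 0.60, 0.50])       # hypothetical study risk ratios
se_log = np.array([0.20, 0.25, 0.30, 0.22, 0.18])   # standard errors of log(RR)

y = np.log(rr)
w_fixed = 1 / se_log**2
q = np.sum(w_fixed * (y - np.average(y, weights=w_fixed))**2)   # Cochran's Q
df = len(y) - 1
c = w_fixed.sum() - (w_fixed**2).sum() / w_fixed.sum()
tau2 = max(0.0, (q - df) / c)                        # between-study variance

w_re = 1 / (se_log**2 + tau2)
pooled = np.average(y, weights=w_re)
se_pooled = np.sqrt(1 / w_re.sum())
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f} to {np.exp(hi):.2f})")
```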

  6. X-Ray Processing of ChaMPlane Fields: Methods and Initial Results for Selected Anti-Galactic Center Fields

    NASA Astrophysics Data System (ADS)

    Hong, JaeSub; van den Berg, Maureen; Schlegel, Eric M.; Grindlay, Jonathan E.; Koenig, Xavier; Laycock, Silas; Zhao, Ping

    2005-12-01

    We describe the X-ray analysis procedure of the ongoing Chandra Multiwavelength Plane (ChaMPlane) Survey and report the initial results from the analysis of 15 selected anti-Galactic center observations (90deg

  7. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, like compressor impellers, stages and stage groups. Such calculations are also crucial for the determination of the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge), for a given working fluid. The average relative error for the studied cases was 0.536 %. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.

  8. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal, but it does not consider interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
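
    A hedged sketch of the network adjustment idea: each interferogram observes a difference of per-image error parameters, and the resulting overdetermined, rank-deficient system is solved under a minimum-norm condition to yield quasi-absolute per-image corrections. The network size, error scale, and noise level below are invented.

```python
import numpy as np

# Each interferogram (i, j) observes the difference of per-image orbit-error
# parameters, d_ij = e_j - e_i + noise.  Stacking all pairs gives a rank-deficient
# system solved with a minimum-norm (pseudoinverse) constraint.
rng = np.random.default_rng(4)
n_images = 6
e_true = rng.normal(scale=5.0, size=n_images)          # per-image errors (mm)
e_true -= e_true.mean()                                 # datum: zero-mean network

pairs = [(i, j) for i in range(n_images) for j in range(i + 1, n_images)]
A = np.zeros((len(pairs), n_images))
for k, (i, j) in enumerate(pairs):
    A[k, i], A[k, j] = -1.0, 1.0
d = A @ e_true + 0.5 * rng.normal(size=len(pairs))      # interferogram estimates + noise

e_hat = np.linalg.pinv(A) @ d                           # minimum-norm least-squares solution
print("true:     ", np.round(e_true, 2))
print("estimated:", np.round(e_hat, 2))
```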

  9. In-plane, flexural, twisting and thickness-shear coefficients for stiffness and damping of a monolayer filamentary composite, part 1

    NASA Technical Reports Server (NTRS)

    Bert, C. W.; Chang, S.

    1972-01-01

    Elastic and damping analyses resulting in determinations of the various stiffnesses and associated loss tangents for the complete characterization of the elastic and damping behavior of a monofilament composite layer are presented. For the determination of the various stiffnesses, either an elementary mechanics-of-materials formulation or a more rigorous mixed-boundary-value elasticity formulation is used. The solution for the latter formulation is obtained by means of the boundary-point least-square error technique. Kimball-Lovell type damping is assumed for each of the constituent materials. For determining the loss tangents associated with the various stiffnesses, either the viscoelastic correspondence principle or an energy analysis based on the appropriate elastic stress distribution is used.

  10. Sliding mode control for Mars entry based on extended state observer

    NASA Astrophysics Data System (ADS)

    Lu, Kunfeng; Xia, Yuanqing; Shen, Ganghui; Yu, Chunmei; Zhou, Liuyu; Zhang, Lijun

    2017-11-01

    This paper addresses a high-precision Mars entry guidance and control approach via sliding mode control (SMC) and an Extended State Observer (ESO). First, the differential flatness (DF) approach is applied to the dynamic equations of the entry vehicle to represent the state variables more conveniently. Then, the presented SMC law guarantees finite-time convergence of the tracking error without requiring any information on the high uncertainties, which are instead estimated by the ESO; a rigorous proof of the tracking error convergence is given. Finally, Monte Carlo simulation results are presented to demonstrate the effectiveness of the suggested approach.

  11. Power of Statistical Tests Used to Address Nonresponse Error in the "Journal of Agricultural Education"

    ERIC Educational Resources Information Center

    Johnson, Donald M.; Shoulders, Catherine W.

    2017-01-01

    As members of a profession committed to the dissemination of rigorous research pertaining to agricultural education, authors publishing in the Journal of Agricultural Education (JAE) must seek methods to evaluate and, when necessary, improve their research methods. The purpose of this study was to describe how authors of manuscripts published in…

  12. Satellite SAR geocoding with refined RPC model

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Balz, Timo; Liao, Mingsheng

    2012-04-01

    Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be remarkably improved by at least 16 times. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice should be using the proper DEM datasets of spatial resolution comparable to that of the SAR images.

  13. Dendritic solidification. I - Analysis of current theories and models. II - A model for dendritic growth under an imposed thermal gradient

    NASA Technical Reports Server (NTRS)

    Laxmanan, V.

    1985-01-01

    A critical review of the present dendritic growth theories and models is presented. Mathematically rigorous solutions to dendritic growth are found to rely on an ad hoc assumption that dendrites grow at the maximum possible growth rate. This hypothesis is found to be in error and is replaced by stability criteria which consider the conditions under which a dendrite tip advances in a stable fashion in a liquid. The important elements of a satisfactory model for dendritic solidification are summarized and a theoretically consistent model for dendritic growth under an imposed thermal gradient is proposed and described. The model is based on the modification of an analysis due to Burden and Hunt (1974) and predicts correctly in all respects, the transition from a dendritic to a planar interface at both very low and very large growth rates.

  14. Resilient filtering for time-varying stochastic coupling networks under the event-triggering scheduling

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Liang, Jinling; Dobaie, Abdullah M.

    2018-07-01

    The resilient filtering problem is considered for a class of time-varying networks with stochastic coupling strengths. An event-triggered strategy is adopted to save the network resources by scheduling the signal transmission from the sensors to the filters based on certain prescribed rules. Moreover, the filter parameters to be designed are subject to gain perturbations. The primary aim of the addressed problem is to determine a resilient filter that ensures an acceptable filtering performance for the considered network with event-triggering scheduling. To handle such an issue, an upper bound on the estimation error variance is established for each node according to the stochastic analysis. Subsequently, the resilient filter is designed by locally minimizing the derived upper bound at each iteration. Moreover, rigorous analysis shows the monotonicity of the minimal upper bound regarding the triggering threshold. Finally, a simulation example is presented to show effectiveness of the established filter scheme.

  15. Effects of monetary reward and punishment on information checking behaviour.

    PubMed

    Li, Simon Y W; Cox, Anna L; Or, Calvin; Blandford, Ann

    2016-03-01

    Two experiments were conducted to examine whether checking one's own work can be motivated by monetary reward and punishment. Participants were randomly assigned to one of three conditions: a flat-rate payment for completing the task (Control); payment increased for error-free performance (Reward); payment decreased for error performance (Punishment). Experiment 1 (N = 90) was conducted with liberal arts students, using a general data-entry task. Experiment 2 (N = 90) replicated Experiment 1 with clinical students and a safety-critical 'cover story' for the task. In both studies, Reward and Punishment resulted in significantly fewer errors, more frequent and longer checking, than Control. No such differences were obtained between the Reward and Punishment conditions. It is concluded that error consequences in terms of monetary reward and punishment can result in more accurate task performance and more rigorous checking behaviour than errors without consequences. However, whether punishment is more effective than reward, or vice versa, remains inconclusive. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  16. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.

  17. Ghost imaging based on Pearson correlation coefficients

    NASA Astrophysics Data System (ADS)

    Yu, Wen-Kai; Yao, Xu-Ri; Liu, Xue-Feng; Li, Long-Zhen; Zhai, Guang-Jie

    2015-05-01

    Correspondence imaging is a new modality of ghost imaging, which can retrieve a positive/negative image by simple conditional averaging of the reference frames that correspond to relatively large/small values of the total intensity measured at the bucket detector. Here we propose and experimentally demonstrate a more rigorous and general approach in which a ghost image is retrieved by calculating a Pearson correlation coefficient between the bucket detector intensity and the brightness at a given pixel of the reference frames, and at the next pixel, and so on. Furthermore, we theoretically provide a statistical interpretation of these two imaging phenomena, and explain how the error depends on the sample size and what kind of distribution the error obeys. According to our analysis, the image signal-to-noise ratio can be greatly improved and the sampling number reduced by means of our new method. Project supported by the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2013YQ030595) and the National High Technology Research and Development Program of China (Grant No. 2013AA122902).
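
    A minimal sketch of the described reconstruction rule, computing the Pearson correlation coefficient between the bucket signal and each pixel of the reference frames; the speckle patterns and object are simulated assumptions for illustration.

```python
import numpy as np

# Ghost-image retrieval via per-pixel Pearson correlation between the bucket
# detector intensities and the reference-frame brightness at that pixel.
rng = np.random.default_rng(5)
n_frames, h, w = 4000, 32, 32
frames = rng.random((n_frames, h, w))                   # reference speckle frames
obj = np.zeros((h, w)); obj[8:24, 12:20] = 1.0          # hypothetical binary object
bucket = (frames * obj).sum(axis=(1, 2))                # total transmitted intensity

# Pearson coefficient per pixel, vectorised over the frame axis.
f = frames - frames.mean(axis=0)
b = bucket - bucket.mean()
image = (f * b[:, None, None]).sum(axis=0) / (
    np.sqrt((f**2).sum(axis=0)) * np.sqrt((b**2).sum()))
print("mean correlation inside object :", image[obj == 1].mean())
print("mean correlation outside object:", image[obj == 0].mean())
```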

  18. A climate trend analysis of Kenya-August 2010

    USGS Publications Warehouse

    Funk, Christopher C.

    2010-01-01

    Introduction This brief report draws from a multi-year effort by the United States Agency for International Development's Famine Early Warning System Network (FEWS NET) to monitor and map rainfall and temperature trends over the last 50 years (1960-2009) in Kenya. Observations from seventy rainfall gauges and seventeen air temperature stations were analyzed for the long rains period, corresponding to March through June (MAMJ). The data were quality controlled, converted into 1960-2009 trend estimates, and interpolated using a rigorous geo-statistical technique (kriging). Kriging produces standard error estimates, and these can be used to assess the relative spatial accuracy of the identified trends. Dividing the trends by the associated errors allows us to identify the relative certainty of our estimates (Funk and others, 2005; Verdin and others, 2005; Brown and Funk, 2008; Funk and Verdin, 2009). Assuming that the same observed trends persist, regardless of whether or not these changes are due to anthropogenic or natural cyclical causes, these results can be extended to 2025, providing critical, and heretofore missing information about the types and locations of adaptation efforts that may be required to improve food security.

  19. Mathematical models and photogrammetric exploitation of image sensing

    NASA Astrophysics Data System (ADS)

    Puatanachokchai, Chokchai

    Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former is determined from image sensing geometry, the latter is based on knowledge of the physical/geometric sensor models and on using such models for its implementation. The main thrust of this research is in replacement sensor models which have three important characteristics: (1) Highly accurate ground-to-image functions; (2) Rigorous error propagation that is essentially of the same accuracy as the physical model; and, (3) Adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are considered as True Replacement Models or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. There have been several writings about replacement sensor models, and except for the so called RSM (Replacement Sensor Model as a product described in the Manual of Photogrammetry), almost all of them pay very little or no attention to errors and their propagation. This is because, it is suspected, the few physical sensor parameters are usually replaced by many more parameters, thus presenting a potential error estimation difficulty. The third characteristic, adjustability, is perhaps the most demanding. It provides an equivalent flexibility to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of the hybrid approach that combines the eigen-approach with the added parameters approach used in the RSM. Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustability can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences as compared to those from rigorous physical sensor models, both for geopositioning from single and overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.

  20. ON MODEL SELECTION STRATEGIES TO IDENTIFY GENES UNDERLYING BINARY TRAITS USING GENOME-WIDE ASSOCIATION DATA.

    PubMed

    Wu, Zheyang; Zhao, Hongyu

    2012-01-01

    For more fruitful discoveries of genetic variants associated with diseases in genome-wide association studies, it is important to know whether joint analysis of multiple markers is more powerful than the commonly used single-marker analysis, especially in the presence of gene-gene interactions. This article provides a statistical framework to rigorously address this question through analytical power calculations for common model search strategies to detect binary trait loci: marginal search, exhaustive search, forward search, and two-stage screening search. Our approach incorporates linkage disequilibrium, random genotypes, and correlations among score test statistics of logistic regressions. We derive analytical results under two power definitions: the power of finding all the associated markers and the power of finding at least one associated marker. We also consider two types of error controls: the discovery number control and the Bonferroni type I error rate control. After demonstrating the accuracy of our analytical results by simulations, we apply them to consider a broad genetic model space to investigate the relative performances of different model search strategies. Our analytical study provides rapid computation as well as insights into the statistical mechanism of capturing genetic signals under different genetic models including gene-gene interactions. Even though we focus on genetic association analysis, our results on the power of model selection procedures are clearly very general and applicable to other studies.

  1. ON MODEL SELECTION STRATEGIES TO IDENTIFY GENES UNDERLYING BINARY TRAITS USING GENOME-WIDE ASSOCIATION DATA

    PubMed Central

    Wu, Zheyang; Zhao, Hongyu

    2013-01-01

    For more fruitful discoveries of genetic variants associated with diseases in genome-wide association studies, it is important to know whether joint analysis of multiple markers is more powerful than the commonly used single-marker analysis, especially in the presence of gene-gene interactions. This article provides a statistical framework to rigorously address this question through analytical power calculations for common model search strategies to detect binary trait loci: marginal search, exhaustive search, forward search, and two-stage screening search. Our approach incorporates linkage disequilibrium, random genotypes, and correlations among score test statistics of logistic regressions. We derive analytical results under two power definitions: the power of finding all the associated markers and the power of finding at least one associated marker. We also consider two types of error controls: the discovery number control and the Bonferroni type I error rate control. After demonstrating the accuracy of our analytical results by simulations, we apply them to consider a broad genetic model space to investigate the relative performances of different model search strategies. Our analytical study provides rapid computation as well as insights into the statistical mechanism of capturing genetic signals under different genetic models including gene-gene interactions. Even though we focus on genetic association analysis, our results on the power of model selection procedures are clearly very general and applicable to other studies. PMID:23956610

  2. Rigor in electronic health record knowledge representation: Lessons learned from a SNOMED CT clinical content encoding exercise.

    PubMed

    Monsen, Karen A; Finn, Robert S; Fleming, Thea E; Garner, Erin J; LaValla, Amy J; Riemer, Judith G

    2016-01-01

    Rigor in clinical knowledge representation is a necessary foundation for meaningful interoperability, exchange and reuse of electronic health record (EHR) data. It is critical for clinicians to understand the principles and implications of using clinical standards for knowledge representation within EHRs. To educate clinicians and students about knowledge representation and to evaluate their success in applying the manual lookup method for assigning Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) concept identifiers using formally mapped concepts from the Omaha System interface terminology. Clinicians who were students in a doctoral nursing program conducted 21 lookups for Omaha System terms in publicly available SNOMED CT browsers. Lookups were deemed successful if results matched exactly with the corresponding code from the January 2013 SNOMED CT-Omaha System terminology cross-map. Of the 21 manual lookups attempted, 12 (57.1%) were successful. Errors were due to semantic gaps, differences in granularity and synonymy, or partial term matching. Achieving rigor in clinical knowledge representation across settings, vendors and health systems is a globally recognized challenge. Cross-maps have potential to improve rigor in SNOMED CT encoding of clinical data. Further research is needed to evaluate outcomes of using terminology cross-maps to encode clinical terms with SNOMED CT concept identifiers based on interface terminologies.

  3. QTest: Quantitative Testing of Theories of Binary Choice

    PubMed Central

    Regenwetter, Michel; Davis-Stober, Clintin P.; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William

    2014-01-01

    The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of “Random Cumulative Prospect Theory.” A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences. PMID:24999495

  4. Tutorial on Reed-Solomon error correction coding

    NASA Technical Reports Server (NTRS)

    Geisel, William A.

    1990-01-01

    This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
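
    As a companion to the tutorial's (15, 9) example, the sketch below shows systematic Reed-Solomon encoding over GF(2^4). The primitive polynomial x^4 + x + 1 and the generator-root convention (alpha^0 through alpha^(n-k-1)) are assumptions; a real encoder must match the conventions of its decoder.

    ```python
    # Illustrative (15, 9) Reed-Solomon encoder over GF(2^4); 6 parity symbols.

    def gf16_mul(x, y, prim=0x13):
        """Multiply two elements of GF(2^4), reducing by x^4 + x + 1."""
        r = 0
        while y:
            if y & 1:
                r ^= x
            y >>= 1
            x <<= 1
            if x & 0x10:        # degree reached 4: reduce
                x ^= prim
        return r

    def generator_poly(nsym):
        """g(x) = (x + a^0)(x + a^1)...(x + a^(nsym-1)), with a = 2; '+' is XOR."""
        g, a = [1], 1
        for _ in range(nsym):
            nxt = [0] * (len(g) + 1)
            for i, c in enumerate(g):
                nxt[i]     ^= c                # c * x
                nxt[i + 1] ^= gf16_mul(c, a)   # c * a
            g, a = nxt, gf16_mul(a, 2)
        return g

    def rs_encode(msg, nsym):
        """Systematic encoding: parity = remainder of msg(x)*x^nsym modulo g(x)."""
        gen = generator_poly(nsym)
        buf = list(msg) + [0] * nsym
        for i in range(len(msg)):
            coef = buf[i]
            if coef:
                for j in range(1, len(gen)):
                    buf[i + j] ^= gf16_mul(gen[j], coef)
        return list(msg) + buf[len(msg):]

    codeword = rs_encode([1, 2, 3, 4, 5, 6, 7, 8, 9], nsym=6)  # 9 data + 6 parity = 15
    print(codeword)
    ```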

  5. OR14-V-Uncertainty-PD2La Uncertainty Quantification for Nuclear Safeguards and Nondestructive Assay Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Andrew D.; Croft, Stephen; McElroy, Robert Dennis

    2017-08-01

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically provide error bars and also partition total uncertainty into “random” and “systematic” components so that, for example, an error bar can be developed for the total mass estimate in multiple items. Uncertainty Quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods.
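
    A hedged sketch of the random/systematic partition mentioned above: random assay errors on independent items add in quadrature, while a shared systematic (e.g., calibration) error is treated as fully correlated and adds linearly before the two parts are combined. The item masses and uncertainties are invented for illustration.

    ```python
    import math

    masses   = [120.0, 95.0, 210.0]   # assayed SNM mass per item, g (illustrative)
    sig_rand = [2.0, 1.5, 3.0]        # random (counting-statistics) components, g
    sig_sys  = [1.2, 0.95, 2.1]       # systematic (calibration) components, g

    total      = sum(masses)
    sigma_rand = math.sqrt(sum(s**2 for s in sig_rand))  # independent -> quadrature
    sigma_sys  = sum(sig_sys)                            # fully correlated -> linear
    sigma_tot  = math.sqrt(sigma_rand**2 + sigma_sys**2)

    print(f"total mass = {total:.1f} g +/- {sigma_tot:.1f} g "
          f"(random {sigma_rand:.1f} g, systematic {sigma_sys:.1f} g)")
    ```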

  6. Observations of fallibility in applications of modern programming methodologies

    NASA Technical Reports Server (NTRS)

    Gerhart, S. L.; Yelowitz, L.

    1976-01-01

    Errors, inconsistencies, or confusing points are noted in a variety of published algorithms, many of which are being used as examples in formulating or teaching principles of such modern programming methodologies as formal specification, systematic construction, and correctness proving. Common properties of these points of contention are abstracted. These properties are then used to pinpoint possible causes of the errors and to formulate general guidelines which might help to avoid further errors. The common characteristic of mathematical rigor and reasoning in these examples is noted, leading to some discussion about fallibility in mathematics, and its relationship to fallibility in these programming methodologies. The overriding goal is to cast a more realistic perspective on the methodologies, particularly with respect to older methodologies, such as testing, and to provide constructive recommendations for their improvement.

  7. Reexamining protein–protein and protein–solvent interactions from Kirkwood-Buff analysis of light scattering in multi-component solutions

    PubMed Central

    Blanco, Marco A.; Sahin, Erinc; Li, Yi; Roberts, Christopher J.

    2011-01-01

    The classic analysis of Rayleigh light scattering (LS) is re-examined for multi-component protein solutions, within the context of Kirkwood-Buff (KB) theory as well as a more generalized canonical treatment. Significant differences arise when traditional treatments that approximate constant pressure and neglect concentration fluctuations in one or more (co)solvent/co-solute species are compared with more rigorous treatments at constant volume and with all species free to fluctuate. For dilute solutions, it is shown that LS can be used to rigorously and unambiguously obtain values for the osmotic second virial coefficient (B22), in contrast with recent arguments regarding protein interactions deduced from LS experiments. For more concentrated solutions, it is shown that conventional analysis over(under)-estimates the magnitude of B22 for significantly repulsive (attractive) conditions, and that protein-protein KB integrals (G22) are the more relevant quantity obtainable from LS. Published data for α-chymotrypsinogen A and a series of monoclonal antibodies at different pH and salt concentrations are re-analyzed using traditional and new treatments. The results illustrate that while traditional analysis may be sufficient if one is interested in only the sign of B22 or G22, the quantitative values can be significantly in error. A simple approach is illustrated for determining whether protein concentration (c2) is sufficiently dilute for B22 to apply, and for correcting B22 values from traditional LS regression at higher c2 values. The apparent molecular weight M2,app obtained from LS is shown to generally not be equal to the true molecular weight, with the differences arising from a combination of protein-solute and protein-cosolute interactions that may, in principle, also be determined from LS. PMID:21682538
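
    For context, the "traditional analysis" the authors critique amounts to a Debye-style regression of K·c2/R against protein concentration, whose intercept gives 1/M2 and whose slope gives 2·B22. The sketch below illustrates that conventional fit on synthetic, idealized data; it is not the paper's KB-based treatment.

    ```python
    import numpy as np

    c2  = np.array([1.0, 2.0, 4.0, 6.0, 8.0])                   # protein conc., g/L
    KcR = np.array([6.9e-6, 7.1e-6, 7.5e-6, 7.9e-6, 8.3e-6])    # K*c2/R, mol/g (synthetic)

    slope, intercept = np.polyfit(c2, KcR, 1)
    M2_app = 1.0 / intercept     # apparent molecular weight, g/mol
    B22    = slope / 2.0         # osmotic second virial coefficient, mol*L/g^2

    print(f"M2,app ~ {M2_app:,.0f} g/mol, B22 ~ {B22:.2e} mol*L/g^2")
    ```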

  8. On the probability density function and characteristic function moments of image steganalysis in the log prediction error wavelet subband

    NASA Astrophysics Data System (ADS)

    Bao, Zhenkun; Li, Xiaolong; Luo, Xiangyang

    2017-01-01

    Extracting informative statistical features is the most essential technical issue of steganalysis. Among various steganalysis methods, probability density function (PDF) and characteristic function (CF) moments are two important types of features due to their excellent ability to distinguish cover images from stego ones. The two types of features are quite similar in definition. The only difference is that the PDF moments are computed in the spatial domain, while the CF moments are computed in the Fourier-transformed domain. Thus, the comparison between PDF and CF moments is an interesting question in steganalysis. Several theoretical results have been derived, and CF moments are proved better than PDF moments in some cases. However, in the log prediction error wavelet subband of wavelet decomposition, some experiments show the opposite result, which has lacked a rigorous explanation. To address this problem, a comparison based on rigorous proof is presented: the first-order PDF moment is proved better than the CF moment, while the second-order CF moment is better than the PDF moment. This work aims to open the theoretical discussion on steganalysis and the question of finding suitable statistical features.
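
    The sketch below computes the two feature types being compared, for a stand-in subband. Definitions and normalizations of PDF and CF moments vary across the steganalysis literature; the conventions used here (absolute central moments of the histogram, and frequency-weighted magnitudes of its DFT) are one common choice and should be treated as assumptions, not this paper's exact definitions.

    ```python
    import numpy as np

    def pdf_moment(coeffs, order, bins=256):
        """n-th absolute central moment of the subband histogram (spatial domain)."""
        hist, edges = np.histogram(coeffs, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        p = hist * np.diff(edges)                      # discrete probabilities
        mean = np.sum(p * centers)
        return np.sum(p * np.abs(centers - mean) ** order)

    def cf_moment(coeffs, order, bins=256):
        """n-th moment of the characteristic-function magnitude (Fourier domain)."""
        hist, _ = np.histogram(coeffs, bins=bins)
        cf = np.abs(np.fft.fft(hist))[1 : bins // 2]   # drop DC, keep half spectrum
        freqs = np.arange(1, bins // 2) / bins
        return np.sum(freqs ** order * cf) / np.sum(cf)

    subband = np.random.laplace(scale=2.0, size=10_000)  # stand-in for log prediction errors
    print(pdf_moment(subband, 1), cf_moment(subband, 1))
    ```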

  9. Calibration and error analysis of metal-oxide-semiconductor field-effect transistor dosimeters for computed tomography radiation dosimetry.

    PubMed

    Trattner, Sigal; Prinsen, Peter; Wiegert, Jens; Gerland, Elazar-Lars; Shefer, Efrat; Morton, Tom; Thompson, Carla M; Yagil, Yoad; Cheng, Bin; Jambawalikar, Sachin; Al-Senan, Rani; Amurao, Maxwell; Halliburton, Sandra S; Einstein, Andrew J

    2017-12-01

    Metal-oxide-semiconductor field-effect transistors (MOSFETs) serve as a helpful tool for organ radiation dosimetry and their use has grown in computed tomography (CT). While different approaches have been used for MOSFET calibration, those using the commonly available 100 mm pencil ionization chamber have not incorporated measurements performed throughout its length, and moreover, no previous work has rigorously evaluated the multiple sources of error involved in MOSFET calibration. In this paper, we propose a new MOSFET calibration approach to translate MOSFET voltage measurements into absorbed dose from CT, based on serial measurements performed throughout the length of a 100-mm ionization chamber, and perform an analysis of the errors of MOSFET voltage measurements and four sources of error in calibration. MOSFET calibration was performed at two sites to determine single calibration factors for tube potentials of 80, 100, and 120 kVp, using a 100-mm-long pencil ion chamber and a cylindrical computed tomography dose index (CTDI) phantom of 32 cm diameter. The dose profile along the 100-mm ion chamber axis was sampled at 5 mm intervals by nine MOSFETs in the nine holes of the CTDI phantom. Variance of the absorbed dose was modeled as a sum of the MOSFET voltage measurement variance and the calibration factor variance, the latter comprising three main subcomponents: ionization chamber reading variance, MOSFET-to-MOSFET variation, and a contribution related to the fact that the average calibration factor of a few MOSFETs was used as an estimate for the average value of all MOSFETs. MOSFET voltage measurement error was estimated based on sets of repeated measurements. The overall calibration factor error was calculated from the above analysis. Calibration factors determined were close to those reported in the literature and by the manufacturer (~3 mV/mGy), ranging from 2.87 to 3.13 mV/mGy. The error σ_V of a MOSFET voltage measurement was shown to be proportional to the square root of the voltage V, σ_V = c·√V, where c = 0.11 mV^1/2. A main contributor to the error in the calibration factor was the ionization chamber reading error, at about 5%. The use of a single calibration factor for all MOSFETs introduced an additional error of about 5-7%, depending on the number of MOSFETs that were used to determine the single calibration factor. The expected overall error in a high-dose region (~30 mGy) was estimated to be about 8%, compared to 6% when an individual MOSFET calibration was performed. For a low-dose region (~3 mGy), these values were 13% and 12%. A MOSFET calibration method was developed using a 100-mm pencil ion chamber and a CTDI phantom, accompanied by an absorbed dose error analysis reflecting multiple sources of measurement error. When using a single calibration factor, per tube potential, for different MOSFETs, only a small error was introduced into absorbed dose determinations, thus supporting the use of a single calibration factor for experiments involving many MOSFETs, such as those required to accurately estimate radiation effective dose. © 2017 American Association of Physicists in Medicine.
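
    A simplified sketch of the error model summarized above: with dose D = V / F, the voltage error sigma_V = c*sqrt(V) and a relative calibration-factor error combine in quadrature. The constants are taken loosely from the abstract for illustration; the full analysis includes additional components not modeled here, so the numbers will not reproduce the quoted 8% and 13% exactly.

    ```python
    import math

    def dose_rel_error(dose_mGy, F=3.0, c=0.11, rel_sigma_F=0.07):
        """Relative error of an absorbed-dose estimate D = V / F (simplified model)."""
        V = F * dose_mGy                       # expected MOSFET reading, mV
        rel_sigma_V = c * math.sqrt(V) / V     # sigma_V = c * sqrt(V)
        return math.sqrt(rel_sigma_V**2 + rel_sigma_F**2)

    for d in (3.0, 30.0):                      # low- and high-dose regions
        print(f"{d:5.1f} mGy -> ~{100 * dose_rel_error(d):.0f}% relative error")
    ```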

  10. Generalized Ordinary Differential Equation Models 1

    PubMed Central

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-01-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787
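
    A minimal sketch of the general idea behind fitting an ODE model to discrete (count) observations by maximum likelihood; it assumes a logistic growth ODE with Poisson-distributed counts and synthetic data, and is not the authors' GODE estimator or its asymptotic machinery.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize
    from scipy.special import gammaln

    t_obs = np.arange(0.0, 10.0, 1.0)
    y_obs = np.array([3, 5, 9, 17, 28, 44, 60, 74, 84, 90])   # synthetic counts

    def trajectory(theta, t):
        r, K, x0 = theta
        sol = solve_ivp(lambda t_, x: r * x * (1 - x / K), (t[0], t[-1]), [x0],
                        t_eval=t, rtol=1e-8)
        return sol.y[0]

    def neg_log_lik(theta):
        mu = np.clip(trajectory(theta, t_obs), 1e-9, None)    # Poisson means
        return -np.sum(y_obs * np.log(mu) - mu - gammaln(y_obs + 1))

    fit = minimize(neg_log_lik, x0=[0.5, 100.0, 3.0], method="Nelder-Mead")
    print("estimated (r, K, x0):", fit.x)
    ```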

  11. High Astrometric Precision in the Calculation of the Coordinates of Orbiters in the GEO Ring

    NASA Astrophysics Data System (ADS)

    Lacruz, E.; Abad, C.; Downes, J. J.; Hernández-Pérez, F.; Casanova, D.; Tresaco, E.

    2018-04-01

    We present an astrometric method for the calculation of the positions of orbiters in the GEO ring with high precision, through a rigorous astrometric treatment of observations with a 1-m class telescope, which are part of the CIDA survey of the GEO ring. We compute the distortion pattern to correct for the systematic errors introduced by the optics and electronics of the telescope, resulting in absolute mean errors of 0.16″ and 0.12″ in right ascension and declination, respectively. These correspond to ≈ 25 m at the mean distance of the GEO ring, and are thus good quality results.
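
    The quoted angular errors translate to transverse position errors via the small-angle relation, distance error ≈ angular error (in radians) × range. The quick check below assumes a representative observer-to-object range of 36,000 km, which is an illustrative value rather than the survey's actual slant ranges.

    ```python
    import math

    ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)
    range_m = 36_000e3                      # assumed mean range to the GEO ring

    for err_arcsec in (0.16, 0.12):
        print(f'{err_arcsec}" -> ~{err_arcsec * ARCSEC_TO_RAD * range_m:.0f} m')
    ```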

  12. Generalized Ordinary Differential Equation Models.

    PubMed

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-10-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.

  13. Minimum constitutive relation error based static identification of beams using force method

    NASA Astrophysics Data System (ADS)

    Guo, Jia; Takewaki, Izuru

    2017-05-01

    A new static identification approach based on the minimum constitutive relation error (CRE) principle for beam structures is introduced. The exact stiffness and the exact bending moment are shown to minimize the CRE for given measured displacements of the damaged beam. A two-step substitution algorithm, consisting of a force-method step for the bending moment and a constitutive-relation step for the stiffness, is developed and its convergence is rigorously derived. Identifiability is further discussed, and the stiffness in the undeformed region is found to be unidentifiable. An extra set of static measurements is introduced to remedy this drawback. Convergence and robustness are finally verified through numerical examples.

  14. Formal Assurance Arguments: A Solution In Search of a Problem?

    NASA Technical Reports Server (NTRS)

    Graydon, Patrick J.

    2015-01-01

    An assurance case comprises evidence and argument showing how that evidence supports assurance claims (e.g., about safety or security). It is unsurprising that some computer scientists have proposed formalizing assurance arguments: most associate formality with rigor. But while engineers can sometimes prove that source code refines a formal specification, it is not clear that formalization will improve assurance arguments or that this benefit is worth its cost. For example, formalization might reduce the benefits of argumentation by limiting the audience to people who can read formal logic. In this paper, we present (1) a systematic survey of the literature surrounding formal assurance arguments, (2) an analysis of errors that formalism can help to eliminate, (3) a discussion of existing evidence, and (4) suggestions for experimental work to definitively answer the question.

  15. CORS BAADE-WESSELINK DISTANCE TO THE LMC NGC 1866 BLUE POPULOUS CLUSTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molinaro, R.; Ripepi, V.; Marconi, M.

    2012-03-20

    We used optical, near-infrared photometry, and radial velocity data for a sample of 11 Cepheids belonging to the young LMC blue populous cluster NGC 1866 to estimate their radii and distances on the basis of the CORS Baade-Wesselink method. This technique, based on an accurate calibration of surface brightness as a function of (U - B), (V - K) colors, allows us to estimate, simultaneously, the linear radius and the angular diameter of Cepheid variables, and consequently to derive their distance. A rigorous error estimate on radii and distances was derived by using Monte Carlo simulations. Our analysis gives a distance modulus for NGC 1866 of 18.51 ± 0.03 mag, which is in agreement with several independent results.
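
    Echoing the abstract's use of Monte Carlo error propagation, the sketch below converts the quoted distance modulus into a distance with a simple Monte Carlo uncertainty estimate, assuming (for illustration) a Gaussian error on the modulus.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mu = rng.normal(18.51, 0.03, size=100_000)     # distance modulus samples, mag
    d_kpc = 10 ** ((mu + 5.0) / 5.0) / 1e3         # distance in kpc

    print(f"d = {d_kpc.mean():.1f} +/- {d_kpc.std():.1f} kpc")
    ```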

  16. Risky driving behavior among university students and staff in the Sultanate of Oman.

    PubMed

    Al Reesi, Hamed; Al Maniri, Abdullah; Plankermann, Kai; Al Hinai, Mustafa; Al Adawi, Samir; Davey, Jeremy; Freeman, James

    2013-09-01

    There is a well-developed literature on research investigating the relationship between various driving behaviors and road crash involvement. However, this research has predominantly been conducted in developed economies dominated by western types of cultural environments. To date no research has been published that has empirically investigated this relationship within the context of emerging economies such as Oman. The present study aims to investigate driving behavior as indexed by the driving behavior questionnaire (DBQ) among a group of Omani university students and staff. A convenience non-probability self-selection sampling approach was utilized with Omani university students and staff. A total of 1003 Omani students (n=632) and staff (n=371) participated in the survey. Factor analysis of the DBQ revealed four main factors: errors, speeding violation, lapses and aggressive violation. In the multivariate backward logistic regression analysis, the following factors were identified as significant predictors of being involved in causing at least one crash: driving experience, history of offenses and two DBQ components, i.e., errors and aggressive violation. This study indicates that errors and aggressive violation of the traffic regulations, as well as a history of traffic offenses, are major risk factors for road traffic crashes among the sample. While previous international research has demonstrated that speeding is a primary cause of crashing, in the current context the results indicate that an array of factors is associated with crashes. Further research using more rigorous methodology is warranted to inform the development of road safety countermeasures in Oman that improve overall Traffic Safety Culture. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Rigorous Approach in Investigation of Seismic Structure and Source Characteristicsin Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.
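
    As a toy illustration of the hierarchical idea (the data noise level is treated as an unknown and inferred along with the model parameters), the sketch below runs a simple Metropolis sampler on a constant-mean Gaussian model. It is only a conceptual stand-in, not the trans-dimensional tomography or joint-inversion codes described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(3.0, 0.5, size=50)            # synthetic observations

    def log_post(mu, log_sigma):
        sigma = np.exp(log_sigma)
        loglik = -0.5 * np.sum(((data - mu) / sigma) ** 2) - data.size * np.log(sigma)
        # weak Gaussian priors on mu and log_sigma
        return loglik - 0.5 * (mu / 10.0) ** 2 - 0.5 * (log_sigma / 2.0) ** 2

    state = np.array([0.0, 0.0])                    # [mu, log_sigma]
    lp, samples = log_post(*state), []
    for _ in range(20_000):
        prop = state + rng.normal(scale=0.1, size=2)
        lp_prop = log_post(*prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
            state, lp = prop, lp_prop
        samples.append(state.copy())

    samples = np.array(samples[5_000:])             # discard burn-in
    print("mu ~", samples[:, 0].mean(), " sigma ~", np.exp(samples[:, 1]).mean())
    ```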

  18. Variation in center of mass estimates for extant sauropsids and its importance for reconstructing inertial properties of extinct archosaurs.

    PubMed

    Allen, Vivian; Paxton, Heather; Hutchinson, John R

    2009-09-01

    Inertial properties of animal bodies and segments are critical input parameters for biomechanical analysis of standing and moving, and thus are important for paleobiological inquiries into the broader behaviors, ecology and evolution of extinct taxa such as dinosaurs. But how accurately can these be estimated? Computational modeling was used to estimate the inertial properties including mass, density, and center of mass (COM) for extant crocodiles (adult and juvenile Crocodylus johnstoni) and birds (Gallus gallus; junglefowl and broiler chickens), to identify the chief sources of variation and methodological errors, and their significance. High-resolution computed tomography scans were segmented into 3D objects and imported into inertial property estimation software that allowed for the examination of variable body segment densities (e.g., air spaces such as lungs, and deformable body outlines). Considerable biological variation of inertial properties was found within groups due to ontogenetic changes as well as evolutionary changes between chicken groups. COM positions shift in variable directions during ontogeny in different groups. Our method was repeatable and the resolution was sufficient for accurate estimations of mass and density in particular. However, we also found considerable potential methodological errors for COM related to (1) assumed body segment orientation, (2) what frames of reference are used to normalize COM for size-independent comparisons among animals, and (3) assumptions about tail shape. Methods and assumptions are suggested to minimize these errors in the future and thereby improve estimation of inertial properties for extant and extinct animals. In the best cases, 10%-15% errors in these estimates are unavoidable, but particularly for extinct taxa errors closer to 50% should be expected, and therefore, cautiously investigated. Nonetheless in the best cases these methods allow rigorous estimation of inertial properties. (c) 2009 Wiley-Liss, Inc.

  19. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    NASA Astrophysics Data System (ADS)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
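
    A hedged sketch of the structure of such a formal likelihood: lag-1 autocorrelated residuals are decorrelated into innovations whose scale grows with the simulated value. For brevity the Skew Exponential Power density is replaced here by a Gaussian, so this is an illustration of the error-model ingredients, not the BAIPU likelihood itself.

    ```python
    import numpy as np

    def log_likelihood(obs, sim, phi, sigma0, sigma1):
        resid = obs - sim
        eta = resid[1:] - phi * resid[:-1]          # lag-1 decorrelated innovations
        sigma = sigma0 + sigma1 * sim[1:]           # heteroscedastic scale
        return np.sum(-0.5 * (eta / sigma) ** 2 - np.log(sigma)
                      - 0.5 * np.log(2 * np.pi))

    obs = np.array([1.2, 1.9, 3.2, 3.8, 5.1])
    sim = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    print(log_likelihood(obs, sim, phi=0.3, sigma0=0.1, sigma1=0.05))
    ```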

  20. Educational agenda for diagnostic error reduction

    PubMed Central

    Trowbridge, Robert L; Dhaliwal, Gurpreet; Cosby, Karen S

    2013-01-01

    Diagnostic errors are a major patient safety concern. Although the majority of diagnostic errors are partially attributable to cognitive mistakes, the most effective means of improving clinician cognition in order to achieve gains in diagnostic reliability are unclear. We propose a tripartite educational agenda for improving diagnostic performance among students, residents and practising physicians. This agenda includes strengthening the metacognitive abilities of clinicians, fostering intuitive reasoning and increasing awareness of the role of systems in the diagnostic process. The evidence supporting initiatives in each of these realms is reviewed and a course of future implementation and study is proposed. The barriers to designing and implementing this agenda are substantial and include limited evidence supporting these initiatives and the challenges of changing the practice patterns of practising physicians. Implementation will need to be accompanied by rigorous evaluation. PMID:23764435

  1. Improved mathematical and computational tools for modeling photon propagation in tissue

    NASA Astrophysics Data System (ADS)

    Calabro, Katherine Weaver

    Light interacts with biological tissue through two predominant mechanisms: scattering and absorption, which are sensitive to the size and density of cellular organelles and to biochemical composition (e.g., hemoglobin), respectively. During the progression of disease, tissues undergo a predictable set of changes in cell morphology and vascularization, which directly affect their scattering and absorption properties. Hence, quantification of these optical property differences can be used to identify the physiological biomarkers of disease, with interest often focused on cancer. Diffuse reflectance spectroscopy is a diagnostic tool wherein broadband visible light is transmitted through a fiber optic probe into a turbid medium and, after propagating through the sample, a fraction of the light is collected at the surface as reflectance. The measured reflectance spectrum can be analyzed with appropriate mathematical models to extract the optical properties of the tissue, and from these, a set of physiological properties. A number of models have been developed for this purpose using a variety of approaches, from diffusion theory to computational simulations and empirical observations. However, these models are generally limited to narrow ranges of tissue and probe geometries. In this thesis, reflectance models were developed for a much wider range of measurement parameters, and influences such as the scattering phase function and probe design were investigated rigorously for the first time. The results provide a comprehensive understanding of the factors that influence reflectance, with novel insights that, in some cases, challenge current assumptions in the field. An improved Monte Carlo simulation program, designed to run on a graphics processing unit (GPU), was built to simulate the data used in the development of the reflectance models. Rigorous error analysis was performed to identify how inaccuracies in modeling assumptions can be expected to affect the accuracy of optical property values extracted from experimentally acquired reflectance spectra. From this analysis, probe geometries that offer the best robustness against error in the estimation of physiological properties from tissue are presented. Finally, several in vivo studies demonstrating the use of reflectance spectroscopy for both research and clinical applications are presented.
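
    A deliberately stripped-down photon Monte Carlo in the spirit of the simulations described above: isotropic scattering, weight-based absorption, no refractive-index mismatch, and a semi-infinite medium. The optical coefficients are placeholders; a realistic tissue model would add a Henyey-Greenstein phase function, Fresnel boundaries, and the probe geometry.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    mu_a, mu_s = 0.1, 10.0                      # absorption / scattering, 1/mm (assumed)
    mu_t, albedo = mu_a + mu_s, mu_s / (mu_a + mu_s)

    def diffuse_reflectance(n_photons=20_000):
        total = 0.0
        for _ in range(n_photons):
            pos = np.zeros(3)
            direction = np.array([0.0, 0.0, 1.0])   # launched straight into the tissue
            weight = 1.0
            while weight > 1e-4:
                step = -np.log(rng.uniform()) / mu_t
                pos = pos + step * direction
                if pos[2] < 0.0:                    # crossed the top surface: escaped
                    total += weight
                    break
                weight *= albedo                    # deposit a fraction as absorption
                cos_t = 2.0 * rng.uniform() - 1.0   # isotropic new direction
                sin_t = np.sqrt(1.0 - cos_t**2)
                phi = 2.0 * np.pi * rng.uniform()
                direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        return total / n_photons

    print("diffuse reflectance ~", diffuse_reflectance())
    ```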

  2. Quantum uncertainty switches on or off the error-disturbance tradeoff

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Xiang; Su, Zu-En; Zhu, Xuanmin; Wu, Shengjun; Chen, Zeng-Bing

    2016-06-01

    The indeterminacy of quantum mechanics was originally presented by Heisenberg through the tradeoff between the measuring error of the observable A and the consequential disturbance to the value of another observable B. This tradeoff has now become a popular interpretation of the uncertainty principle. However, the historic idea had never been exactly formulated and has recently been called into question. A theory built upon operational and state-relevant definitions of error and disturbance is called for to rigorously reexamine the relationship. Here, by putting forward such natural definitions, we demonstrate both theoretically and experimentally that there is no tradeoff if the outcome of measuring B is more uncertain than that of A. Otherwise, the tradeoff will be switched on and well characterized by the Jensen-Shannon divergence. Our results reveal the hidden effect of the uncertain nature of the measured state, and we conclude that the state-relevant relation between error and disturbance is not, as people usually believe, always a tradeoff.

  3. Review of rigorous coupled-wave analysis and of homogeneous effective medium approximations for high spatial-frequency surface-relief gratings

    NASA Technical Reports Server (NTRS)

    Glytsis, Elias N.; Brundrett, David L.; Gaylord, Thomas K.

    1993-01-01

    A review of the rigorous coupled-wave analysis as applied to the diffraction of electromagnetic waves by gratings is presented. The analysis is valid for any polarization, angle of incidence, and conical diffraction. Cascaded and/or multiplexed gratings as well as material anisotropy can be incorporated under the same formalism. Small-period rectangular-groove gratings can also be modeled using approximately equivalent uniaxial homogeneous layers (effective media). The ordinary and extraordinary refractive indices of these layers depend on the grating's fill factor, the refractive indices of the substrate and superstrate, and the ratio of the free-space wavelength to the grating period. Comparisons of the homogeneous effective medium approximations with the rigorous coupled-wave analysis are presented. Antireflection designs (single-layer or multilayer) using the effective medium models are presented and compared. These ultra-short-period antireflection gratings can also be used to produce soft x-rays. Comparisons of the rigorous coupled-wave analysis with experimental results on soft x-ray generation by gratings are also included.
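
    The zeroth-order effective-medium formulas referred to above are short enough to state directly: a one-dimensional subwavelength grating behaves like a uniaxial layer whose ordinary and extraordinary indices depend on the fill factor. The sketch below evaluates them for an assumed fused-silica grating in air; higher-order EMT adds a wavelength-to-period correction not included here.

    ```python
    import numpy as np

    def emt_indices(n_ridge, n_groove, f):
        """Zeroth-order EMT for a 1D grating; f = fill factor of the ridge material."""
        eps_o = f * n_ridge**2 + (1 - f) * n_groove**2           # E parallel to grooves
        eps_e = 1.0 / (f / n_ridge**2 + (1 - f) / n_groove**2)   # E perpendicular to grooves
        return np.sqrt(eps_o), np.sqrt(eps_e)

    n_o, n_e = emt_indices(n_ridge=1.45, n_groove=1.0, f=0.5)    # illustrative values
    print(f"n_o = {n_o:.3f}, n_e = {n_e:.3f}")
    ```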

  4. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
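
    An illustrative scikit-learn pipeline in the spirit of the hybrid models described above: an L1-penalized (LASSO-style) logistic regression selects variables, an SVM classifies on the retained features, and performance is assessed with fivefold cross-validation. The synthetic data and hyperparameters are placeholders, not the TEJ sample or the authors' settings.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # 172 firms (48 GCD + 124 non-GCD) is mirrored only in size; the data are synthetic.
    X, y = make_classification(n_samples=172, n_features=30, n_informative=8,
                               weights=[0.72, 0.28], random_state=0)

    model = Pipeline([
        ("scale", StandardScaler()),
        ("lasso", SelectFromModel(LogisticRegression(penalty="l1", C=0.1,
                                                     solver="liblinear"))),
        ("svm", SVC(kernel="rbf", C=1.0)),
    ])

    scores = cross_val_score(model, X, y, cv=5)
    print("fivefold CV accuracy:", scores.mean())
    ```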

  5. Ar-Ar_Redux: rigorous error propagation of 40Ar/39Ar data, including covariances

    NASA Astrophysics Data System (ADS)

    Vermeesch, P.

    2015-12-01

    Rigorous data reduction and error propagation algorithms are needed to realise Earthtime's objective to improve the interlaboratory accuracy of 40Ar/39Ar dating to better than 1% and thereby facilitate the comparison and combination of the K-Ar and U-Pb chronometers. Ar-Ar_Redux is a new data reduction protocol and software program for 40Ar/39Ar geochronology which takes into account two previously underappreciated aspects of the method. (1) 40Ar/39Ar measurements are compositional data. In its simplest form, the 40Ar/39Ar age equation can be written as t = log(1 + J [40Ar/39Ar - 298.56 × 36Ar/39Ar])/λ = log(1 + J R)/λ, where λ is the 40K decay constant and J is the irradiation parameter. The age t does not depend on the absolute abundances of the three argon isotopes but only on their relative ratios. Thus, the 36Ar, 39Ar and 40Ar abundances can be normalised to unity and plotted on a ternary diagram or 'simplex'. Argon isotopic data are therefore subject to the peculiar mathematics of 'compositional data', sensu Aitchison (1986, The Statistical Analysis of Compositional Data, Chapman & Hall). (2) Correlated errors are pervasive throughout the 40Ar/39Ar method. Current data reduction protocols for 40Ar/39Ar geochronology propagate the age uncertainty as σ²(t) = [J² σ²(R) + R² σ²(J)] / [λ² (1 + J R)²], which implies zero covariance between R and J. In reality, however, significant error correlations are found in every step of the 40Ar/39Ar data acquisition and processing, in both single- and multi-collector instruments, during blank, interference and decay corrections, age calculation, etc. Ar-Ar_Redux revisits every aspect of the 40Ar/39Ar method by casting the raw mass spectrometer data into a contingency table of logratios, which automatically keeps track of all covariances in a compositional context. Application of the method to real data reveals strong correlations (r² of up to 0.9) between age measurements within a single irradiation batch. Properly taking into account these correlations significantly improves the precision and accuracy of 40Ar/39Ar data, at no financial cost. A prototype version of Ar-Ar_Redux was written in R and is available from http://redux.london-geochron.com. A standalone GUI is under development.
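
    A short delta-method sketch of the point being made: propagating the age equation t = ln(1 + J·R)/λ with the R-J covariance retained rather than dropped. The numerical values, including the assumed correlation of 0.3, are placeholders for illustration and are not taken from Ar-Ar_Redux.

    ```python
    import numpy as np

    lam = 5.543e-10              # total 40K decay constant, 1/yr
    J, R = 0.01, 20.0            # irradiation parameter and radiogenic 40Ar*/39Ar

    sig_R, sig_J, rho = 0.05, 1e-4, 0.3            # assumed uncertainties and correlation
    cov = np.array([[sig_R**2, rho * sig_R * sig_J],
                    [rho * sig_R * sig_J, sig_J**2]])

    t = np.log(1 + J * R) / lam
    grad = np.array([J, R]) / (lam * (1 + J * R))  # [dt/dR, dt/dJ]
    sigma_t = np.sqrt(grad @ cov @ grad)

    print(f"t = {t / 1e6:.1f} Ma +/- {sigma_t / 1e6:.1f} Ma")
    ```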

  6. Quadratic Zeeman effect for hydrogen: A method for rigorous bound-state error estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonte, G.; Falsaperla, P.; Schiffrer, G.

    1990-06-01

    We present a variational method, based on direct minimization of energy, for the calculation of eigenvalues and eigenfunctions of a hydrogen atom in a strong uniform magnetic field in the framework of the nonrelativistic theory (quadratic Zeeman effect). Using semiparabolic coordinates and a harmonic-oscillator basis, we show that it is possible to give rigorous error estimates for both eigenvalues and eigenfunctions by applying some results of Kato (Proc. Phys. Soc. Jpn. 4, 334 (1949)). The method can be applied in this simple form only to the lowest level of given angular momentum and parity, but it is also possible to apply it to any excited state by using the standard Rayleigh-Ritz diagonalization method. However, due to the particular basis, the method is expected to be more effective, the weaker the field and the smaller the excitation energy, while the results of Kato we have employed lead to good estimates only when the level spacing is not too small. We present a numerical application to the m^p = 0^+ ground state and the lowest m^p = 1^- excited state, giving results that are among the most accurate in the literature for magnetic fields up to about 10^10 G.

  7. Statistical Models for Averaging of the Pump–Probe Traces: Example of Denoising in Terahertz Time-Domain Spectroscopy

    NASA Astrophysics Data System (ADS)

    Skorobogatiy, Maksim; Sadasivan, Jayesh; Guerboukha, Hichem

    2018-05-01

    In this paper, we first discuss the main types of noise in a typical pump-probe system, and then focus specifically on terahertz time domain spectroscopy (THz-TDS) setups. We then introduce four statistical models for the noisy pulses obtained in such systems, and detail rigorous mathematical algorithms to de-noise such traces, find the proper averages and characterise various types of experimental noise. Finally, we perform a comparative analysis of the performance, advantages and limitations of the algorithms by testing them on the experimental data collected using a particular THz-TDS system available in our laboratories. We conclude that using advanced statistical models for trace averaging results in the fitting errors that are significantly smaller than those obtained when only a simple statistical average is used.

  8. Blindness and Visual Impairment Profile and Rapid Assessment of Avoidable Blindness in South East Asia: Analysis of New Data. 2017 APAO Holmes Lecture.

    PubMed

    Das, Taraprasad

    2018-03-13

    The International Agency for Prevention of Blindness (IAPB) South East Asia region (SEAR) that consists of 11 countries contains 26% of the world's population (1,761,000,000). In this region 12 million are blind and 78.5 million are visually impaired. This amounts to 30% of global blindness and 32% of global visual impairment. Rapid assessment of avoidable blindness (RAAB) survey analysis. RAAB, either a repeat or a first time survey, was completed in 8 countries in this decade (2010 onwards). These include Bangladesh, Bhutan, India, Indonesia, Maldives, Sri Lanka, Thailand, and Timor Leste. Cataract is the principal cause of blindness and severe visual impairment in all countries. Refractive error is the principal cause of moderate visual impairment in 4 countries: Bangladesh, India, Maldives, and Sri Lanka; cataract continues to be the principal cause of moderate visual impairment in 4 other countries: Bhutan, Indonesia, Thailand, and Timor Leste. Outcome of cataract surgery is suboptimal in the Maldives and Timor Leste. Rigorous focus is necessary to improve cataract surgery outcomes and correction of refractive error without neglecting the quality of care. At the same time allowances must be made for care of the emerging causes of visual impairment and blindness such as glaucoma and posterior segment disorders, particularly diabetic retinopathy. Copyright 2018 Asia-Pacific Academy of Ophthalmology.

  9. Short-term memory capacity in networks via the restricted isometry property.

    PubMed

    Charles, Adam S; Yap, Han Lun; Rozell, Christopher J

    2014-06-01

    Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this letter, we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous nonasymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly with the number of nodes and in some situations can achieve STM capacities that are much larger than the network size. We provide perfect recovery guarantees for finite sequences and recovery bounds for infinite sequences. The latter analysis predicts that network STM systems may have an optimal recovery length that balances errors due to omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks with sparse or dense connectivities.

  10. A VLF-based technique in applications to digital control of nonlinear hybrid multirate systems

    NASA Astrophysics Data System (ADS)

    Vassilyev, Stanislav; Ulyanov, Sergey; Maksimkin, Nikolay

    2017-01-01

    In this paper, a technique for rigorous analysis and design of nonlinear multirate digital control systems on the basis of the reduction method and sublinear vector Lyapunov functions is proposed. The control system model under consideration incorporates continuous-time dynamics of the plant and discrete-time dynamics of the controller and takes into account uncertainties of the plant, bounded disturbances, nonlinear characteristics of sensors and actuators. We consider a class of multirate systems where the control update rate is slower than the measurement sampling rates and periodic non-uniform sampling is admitted. The proposed technique does not use the preliminary discretization of the system, and, hence, allows one to eliminate the errors associated with the discretization and improve the accuracy of analysis. The technique is applied to synthesis of digital controller for a flexible spacecraft in the fine stabilization mode and decentralized controller for a formation of autonomous underwater vehicles. Simulation results are provided to validate the good performance of the designed controllers.

  11. Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.

  12. Conic Sector Analysis of Hybrid Control Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thompson, P. M.

    1982-01-01

    A hybrid control system contains an analog plant and a hybrid (or sampled-data) compensator. In this thesis a new conic sector is determined which is constructive and can be used to: (1) determine closed loop stability, (2) analyze robustness with respect to modelling uncertainties, (3) analyze steady state response to commands, and (4) select the sample rate. The use of conic sectors allows the designer to treat hybrid control systems as though they were analog control systems. The center of the conic sector can be used as a rigorous linear time invariant approximation of the hybrid control system, and the radius places a bound on the errors of this approximation. The hybrid feedback system can be multivariable, and the sampler is assumed to be synchronous. Algorithms to compute the conic sector are presented. Several examples demonstrate how the conic sector analysis techniques are applied. Extensions to single loop multirate hybrid feedback systems are presented. Further extensions are proposed for multiloop multirate hybrid feedback system and for single rate systems with asynchronous sampling.

  13. Lithographic performance comparison with various RET for 45-nm node with hyper NA

    NASA Astrophysics Data System (ADS)

    Adachi, Takashi; Inazuki, Yuichi; Sutou, Takanori; Kitahata, Yasuhisa; Morikawa, Yasutaka; Toyama, Nobuhito; Mohri, Hiroshi; Hayashi, Naoya

    2006-05-01

    In order to realize 45 nm node lithography, strong resolution enhancement technology (RET) and water immersion will be needed. In this research, we compare the performance of various RET options for the 45 nm node using rigorous 3D simulation. As candidates, we chose a binary mask (BIN), several kinds of attenuated phase-shifting mask (att-PSM), and a chrome-less phase-shifting lithography mask (CPL). The printing performance was evaluated and compared for each RET option after optimizing the illumination conditions, mask structure, and optical proximity correction (OPC). The evaluation items for printing performance were CD-DOF, contrast-DOF, conventional ED-window, MEEF, etc. Because the effect of mask 3D topography is expected to become important at the 45 nm node, we considered not only ideal structures but also the effects of mask topography errors. Several kinds of mask topography error were evaluated and we confirmed how these errors affect printing performance.

  14. Design of an optimal preview controller for linear discrete-time descriptor systems with state delay

    NASA Astrophysics Data System (ADS)

    Cao, Mengjuan; Liao, Fucheng

    2015-04-01

    In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.

  15. Maelstrom Research guidelines for rigorous retrospective data harmonization

    PubMed Central

    Fortier, Isabel; Raina, Parminder; Van den Heuvel, Edwin R; Griffith, Lauren E; Craig, Camille; Saliba, Matilda; Doiron, Dany; Stolk, Ronald P; Knoppers, Bartha M; Ferretti, Vincent; Granda, Peter; Burton, Paul

    2017-01-01

    Background: It is widely accepted and acknowledged that data harmonization is crucial: in its absence, the co-analysis of major tranches of high quality extant data is liable to inefficiency or error. However, despite its widespread practice, no formalized/systematic guidelines exist to ensure high quality retrospective data harmonization. Methods: To better understand real-world harmonization practices and facilitate development of formal guidelines, three interrelated initiatives were undertaken between 2006 and 2015. They included a phone survey with 34 major international research initiatives, a series of workshops with experts, and case studies applying the proposed guidelines. Results: A wide range of projects use retrospective harmonization to support their research activities but even when appropriate approaches are used, the terminologies, procedures, technologies and methods adopted vary markedly. The generic guidelines outlined in this article delineate the essentials required and describe an interdependent step-by-step approach to harmonization: 0) define the research question, objectives and protocol; 1) assemble pre-existing knowledge and select studies; 2) define targeted variables and evaluate harmonization potential; 3) process data; 4) estimate quality of the harmonized dataset(s) generated; and 5) disseminate and preserve final harmonization products. Conclusions: This manuscript provides guidelines aiming to encourage rigorous and effective approaches to harmonization which are comprehensively and transparently documented and straightforward to interpret and implement. This can be seen as a key step towards implementing guiding principles analogous to those that are well recognised as being essential in securing the foundational underpinning of systematic reviews and the meta-analysis of clinical trials. PMID:27272186

  16. A Historical Survey of the Contributions of Francois-Joseph Servois to the Development of the Rigorous Calculus

    ERIC Educational Resources Information Center

    Petrilli, Salvatore John, Jr.

    2009-01-01

    Historians of mathematics considered the nineteenth century to be the Golden Age of mathematics. During this time period many areas of mathematics, such as algebra and geometry, were being placed on rigorous foundations. Another area of mathematics which experienced fundamental change was analysis. The drive for rigor in calculus began in 1797…

  17. Sources of medical error in refractive surgery.

    PubMed

    Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B

    2013-05-01

    To evaluate the causes of laser programming errors in refractive surgery and outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: errors of cylinder conversion, data entry, and patient identification error. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.

  18. Fast synthesis of topographic mask effects based on rigorous solutions

    NASA Astrophysics Data System (ADS)

    Yan, Qiliang; Deng, Zhijie; Shiely, James

    2007-10-01

    Topographic mask effects can no longer be ignored at technology nodes of 45 nm, 32 nm and beyond. As feature sizes become comparable to the mask topographic dimensions and the exposure wavelength, the popular thin mask model breaks down, because the mask transmission no longer follows the layout. A reliable mask transmission function has to be derived from Maxwell equations. Unfortunately, rigorous solutions of Maxwell equations are only manageable for limited field sizes, but impractical for full-chip optical proximity corrections (OPC) due to the prohibitive runtime. Approximation algorithms are in demand to achieve a balance between acceptable computation time and tolerable errors. In this paper, a fast algorithm is proposed and demonstrated to model topographic mask effects for OPC applications. The ProGen Topographic Mask (POTOMAC) model synthesizes the mask transmission functions out of small-sized Maxwell solutions from a finite-difference-in-time-domain (FDTD) engine, an industry leading rigorous simulator of topographic mask effect from SOLID-E. The integral framework presents a seamless solution to the end user. Preliminary results indicate the overhead introduced by POTOMAC is contained within the same order of magnitude in comparison to the thin mask approach.

  19. Security of a discretely signaled continuous variable quantum key distribution protocol for high rate systems.

    PubMed

    Zhang, Zheshen; Voss, Paul L

    2009-07-06

    We propose a continuous variable based quantum key distribution protocol that makes use of discretely signaled coherent light and reverse error reconciliation. We present a rigorous security proof against collective attacks with realistic lossy, noisy quantum channels, imperfect detector efficiency, and detector electronic noise. This protocol is promising for convenient, high-speed operation at link distances up to 50 km with the use of post-selection.

  20. Simultaneous orbit determination

    NASA Technical Reports Server (NTRS)

    Wright, J. R.

    1988-01-01

    Simultaneous orbit determination is demonstrated using live range and Doppler data for the NASA/Goddard tracking configuration defined by the White Sands Ground Terminal (WSGT), the Tracking and Data Relay Satellite (TDRS), and the Earth Radiation Budget Satellite (ERBS). A physically connected sequential filter-smoother was developed for this demonstration. Rigorous necessary conditions are used to show that the state error covariance functions are realistic; and this enables the assessment of orbit estimation accuracies for both TDRS and ERBS.

  1. Maintaining rigor in research: flaws in a recent study and a reanalysis of the relationship between state abortion laws and maternal mortality in Mexico.

    PubMed

    Darney, Blair G; Saavedra-Avendano, Biani; Lozano, Rafael

    2017-01-01

    A recent publication [Koch E, Chireau M, Pliego F, Stanford J, Haddad S, Calhoun B, Aracena P, Bravo M, Gatica S, Thorp J. Abortion legislation, maternal healthcare, fertility, female literacy, sanitation, violence against women and maternal deaths: a natural experiment in 32 Mexican states. BMJ Open 2015;5(2):e006013] claimed that Mexican states with more restrictive abortion laws had lower levels of maternal mortality. Our objectives were to replicate the analysis, reanalyze the data and offer a critique of the key flaws of the Koch study. We used corrected maternal mortality data (2006-2013), live births, and state-level indicators of poverty. We replicated the published analysis. We then reclassified state-level exposure to abortion on demand based on actual availability of abortion (Mexico City versus the other 31 states) and tested the association of abortion access and the maternal mortality ratio (MMR) using descriptive statistics over time, pooled chi-square tests and regression models. We included 256 state-year observations. We did not find significant differences in MMR between Mexico City (MMR=49.1) and the 31 states (MMR=44.6; p=.44). Using Koch's classification of states, we replicated the published differences of higher MMR where abortion is more available. We found a significant, negative association between MMR and availability of abortion in the same multivariable models as Koch, but using our state classification (beta=-22.49, 95% CI=-38.9; -5.99). State-level poverty remains highly correlated with MMR. Koch makes errors in methodology and interpretation, making false causal claims about abortion law and MMR. MMR is falling most rapidly in Mexico City, but our main study limitation is an inability to draw causal inference about abortion law or access and maternal mortality. Transparency and integrity in research are crucial, perhaps even more so in politically contested topics such as abortion. Rigorous evidence about the health impacts of increasing access to safe abortion worldwide is needed. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
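
    For readers unfamiliar with this kind of state-year analysis, the sketch below shows one standard way to model maternal deaths as Poisson counts with log(live births) as an offset, so that covariate effects act on the maternal mortality ratio. The toy arrays are placeholders, not the corrected Mexican vital-statistics data used in the reanalysis.

    ```python
    import numpy as np
    import statsmodels.api as sm

    deaths          = np.array([40, 35, 52, 48, 60, 55])
    live_births     = np.array([9.0e4, 8.8e4, 1.2e5, 1.1e5, 1.5e5, 1.4e5])
    abortion_access = np.array([1, 1, 0, 0, 0, 0])     # 1 = Mexico City, 0 = other states
    poverty_rate    = np.array([0.28, 0.27, 0.45, 0.44, 0.52, 0.50])

    X = sm.add_constant(np.column_stack([abortion_access, poverty_rate]))
    model = sm.GLM(deaths, X, family=sm.families.Poisson(),
                   offset=np.log(live_births))
    print(model.fit().summary())
    ```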

  2. Fatal overdoses involving hydromorphone and morphine among inpatients: a case series

    PubMed Central

    Lowe, Amanda; Hamilton, Michael; Greenall BScPhm MHSc, Julie; Ma, Jessica; Dhalla, Irfan; Persaud, Nav

    2017-01-01

    Background: Opioids have narrow therapeutic windows, and errors in ordering or administration can be fatal. The purpose of this study was to describe deaths involving hydromorphone and morphine, which have similar-sounding names, but different potencies. Methods: In this case series, we describe deaths of patients admitted to hospital or residents of long-term care facilities that involved hydromorphone and morphine. We searched for deaths referred to the Patient Safety Review Committee of the Office of the Chief Coroner for Ontario between 2007 and 2012, and subsequently reviewed by 2014. We reviewed each case to identify intervention points where errors could have been prevented. Results: We identified 8 cases involving decedents aged 19 to 91 years. The cases involved errors in prescribing, order processing and transcription, dispensing, administration and monitoring. For 7 of the 8 cases, there were multiple (2 or more) possible intervention points. Six cases may have been prevented by additional patient monitoring, and 5 cases involved dispensing errors. Interpretation: Opioid toxicity deaths in patients living in institutions can be prevented at multiple points in the prescribing and dispensing processes. Interventions aimed at preventing errors in hydromorphone and morphine prescribing, administration and patient monitoring should be implemented and rigorously evaluated. PMID:28401133

  3. Fatal overdoses involving hydromorphone and morphine among inpatients: a case series.

    PubMed

    Lowe, Amanda; Hamilton, Michael; Greenall BScPhm MHSc, Julie; Ma, Jessica; Dhalla, Irfan; Persaud, Nav

    2017-01-01

    Opioids have narrow therapeutic windows, and errors in ordering or administration can be fatal. The purpose of this study was to describe deaths involving hydromorphone and morphine, which have similar-sounding names, but different potencies. In this case series, we describe deaths of patients admitted to hospital or residents of long-term care facilities that involved hydromorphone and morphine. We searched for deaths referred to the Patient Safety Review Committee of the Office of the Chief Coroner for Ontario between 2007 and 2012, and subsequently reviewed by 2014. We reviewed each case to identify intervention points where errors could have been prevented. We identified 8 cases involving decedents aged 19 to 91 years. The cases involved errors in prescribing, order processing and transcription, dispensing, administration and monitoring. For 7 of the 8 cases, there were multiple (2 or more) possible intervention points. Six cases may have been prevented by additional patient monitoring, and 5 cases involved dispensing errors. Opioid toxicity deaths in patients living in institutions can be prevented at multiple points in the prescribing and dispensing processes. Interventions aimed at preventing errors in hydromorphone and morphine prescribing, administration and patient monitoring should be implemented and rigorously evaluated.

  4. Measurement uncertainty relations: characterising optimal error bounds for qubits

    NASA Astrophysics Data System (ADS)

    Bullock, T.; Busch, P.

    2018-07-01

    In standard formulations of the uncertainty principle, two fundamental features are typically cast as impossibility statements: two noncommuting observables cannot in general both be sharply defined (for the same state), nor can they be measured jointly. The pioneers of quantum mechanics were acutely aware of and puzzled by this fact, and it motivated Heisenberg to seek a mitigation, which he formulated in his seminal paper of 1927. He provided intuitive arguments to show that the values of, say, the position and momentum of a particle can at least be unsharply defined, and they can be measured together provided some approximation errors are allowed. Only now, nine decades later, is a working theory of approximate joint measurements taking shape, leading to rigorous and experimentally testable formulations of associated error tradeoff relations. Here we briefly review this new development, explaining the concepts and steps taken in the construction of optimal joint approximations of pairs of incompatible observables. As a case study, we deduce measurement uncertainty relations for qubit observables using two distinct error measures. We provide an operational interpretation of the error bounds and discuss some of the first experimental tests of such relations.

  5. Accuracy of measurement of star images on a pixel array

    NASA Technical Reports Server (NTRS)

    King, I. R.

    1983-01-01

    Algorithms are developed for predicting the accuracy with which the brightness of a star can be determined from its image on a digital detector array, as a function of the brightness of the background. The assumption is made that a known profile is being fitted by least squares. The two profiles used correspond to ST images and to ground-based observations. The first result is an approximate rule of thumb for equivalent noise area. More rigorous results are then given in tabular form. The size of the pixels, relative to the image size, is taken into account. Astrometric accuracy is also discussed briefly; the error, relative to image size, is very similar to the photometric error relative to brightness.
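
    The "equivalent noise area" rule of thumb lends itself to a short sketch. The code below is an illustrative approximation under the background-limited assumption, not the paper's tabulated results; the Gaussian test profile and function names are placeholders.

```python
import numpy as np

def equivalent_noise_area(psf: np.ndarray, pixel_area: float = 1.0) -> float:
    """Equivalent noise area A_eff of a PSF sampled on a pixel grid.

    With the profile p normalised so that sum(p) * pixel_area = 1, the
    background-limited flux error scales as sigma_F ~ sigma_sky * sqrt(A_eff),
    where A_eff = 1 / (sum(p**2) * pixel_area).
    """
    p = psf / (psf.sum() * pixel_area)
    return 1.0 / (np.sum(p**2) * pixel_area)

# Gaussian PSF with sigma = 2 px: A_eff is close to 4*pi*sigma**2 ~ 50 px
x, y = np.meshgrid(np.arange(-15, 16), np.arange(-15, 16))
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
print(round(equivalent_noise_area(psf), 1))
```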

  6. Adaptive characterization of recrystallization kinetics in IF steel by electron backscatter diffraction.

    PubMed

    Kim, Dong-Kyu; Park, Won-Woong; Lee, Ho Won; Kang, Seong-Hoon; Im, Yong-Taek

    2013-12-01

    In this study, a rigorous methodology for quantifying recrystallization kinetics by electron backscatter diffraction is proposed in order to reduce errors associated with the operator's skill. An adaptive criterion to determine adjustable grain orientation spread depending on the recrystallization stage is proposed to better identify the recrystallized grains in the partially recrystallized microstructure. The proposed method was applied in characterizing the microstructure evolution during annealing of interstitial-free steel cold rolled to low and high true strain levels of 0.7 and 1.6, respectively. The recrystallization kinetics determined by the proposed method was found to be consistent with the standard method of Vickers microhardness. The application of the proposed method to the overall recrystallization stages showed that it can be used for the rigorous characterization of progressive microstructure evolution, especially for the severely deformed material. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
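
    The basic classification step behind such analyses can be sketched as follows: grains whose grain orientation spread (GOS) falls below a threshold are counted as recrystallized, and the recrystallized fraction is an area-weighted sum. The adaptive, stage-dependent choice of that threshold is the paper's contribution and is not reproduced here; all names and numbers below are illustrative.

```python
import numpy as np

def recrystallized_fraction(gos_deg, grain_area, gos_threshold_deg=1.0):
    """Area fraction of recrystallized grains from per-grain EBSD statistics.

    gos_deg: grain orientation spread of each grain (degrees);
    grain_area: area of each grain; grains with GOS below the threshold
    are treated as recrystallized.
    """
    gos = np.asarray(gos_deg, dtype=float)
    area = np.asarray(grain_area, dtype=float)
    recrystallized = gos < gos_threshold_deg
    return area[recrystallized].sum() / area.sum()

# Illustrative per-grain values only
print(recrystallized_fraction([0.4, 2.7, 0.8, 3.5, 1.1], [120, 80, 95, 60, 150]))
```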

  7. Accurate Heart Rate Monitoring During Physical Exercises Using PPG.

    PubMed

    Temko, Andriy

    2017-09-01

    The challenging task of heart rate (HR) estimation from the photoplethysmographic (PPG) signal, during intensive physical exercises, is tackled in this paper. The study presents a detailed analysis of a novel algorithm (WFPV) that exploits a Wiener filter to attenuate the motion artifacts, a phase vocoder to refine the HR estimate and user-adaptive post-processing to track the subject's physiology. Additionally, an offline version of the HR estimation algorithm that uses Viterbi decoding is designed for scenarios that do not require online HR monitoring (WFPV+VD). The performance of the HR estimation systems is rigorously compared with existing algorithms on the publicly available database of 23 PPG recordings. On the whole dataset of 23 PPG recordings, the algorithms result in average absolute errors of 1.97 and 1.37 BPM in the online and offline modes, respectively. On the test dataset of 10 PPG recordings which were most corrupted with motion artifacts, WFPV has an error of 2.95 BPM on its own and 2.32 BPM in an ensemble with two existing algorithms. The error rate is significantly reduced when compared with the state-of-the-art PPG-based HR estimation methods. The proposed system is shown to be accurate in the presence of strong motion artifacts and, in contrast to existing alternatives, has very few free parameters to tune. The algorithm has a low computational cost and can be used for fitness tracking and health monitoring in wearable devices. The MATLAB implementation of the algorithm is provided online.
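
    The spectral idea behind such motion-artifact suppression can be sketched very roughly: bins of the PPG power spectrum where a simultaneously recorded accelerometer spectrum is strong are attenuated before the heart-rate peak is picked. This is not the authors' WFPV implementation; the gain rule, band limits and names are illustrative assumptions.

```python
import numpy as np

def attenuate_motion(ppg_psd: np.ndarray, acc_psd: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Wiener-style attenuation of PPG spectral bins dominated by motion."""
    acc = acc_psd / (acc_psd.max() + eps)      # normalised motion reference
    gain = np.clip(1.0 - acc, 0.0, 1.0)        # suppress motion-dominated bins
    return ppg_psd * gain

def hr_from_psd(psd: np.ndarray, freqs_hz: np.ndarray) -> float:
    """Dominant frequency in a plausible heart-rate band (40-180 BPM), in BPM."""
    band = (freqs_hz >= 40 / 60) & (freqs_hz <= 180 / 60)
    return 60.0 * freqs_hz[band][np.argmax(psd[band])]
```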

  8. Uncertainty information in climate data records from Earth observation

    NASA Astrophysics Data System (ADS)

    Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang

    2017-07-01

    The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the error distribution, and the propagation of the uncertainty to the geophysical variable in the CDR accounting for its error correlation properties. Uncertainty estimates can and should be validated as part of CDR validation when possible. These principles are quite general, but the approach to providing uncertainty information appropriate to different ECVs is varied, as confirmed by a brief review across different ECVs in the CCI. User requirements for uncertainty information can conflict with each other, and a variety of solutions and compromises are possible. The concept of an ensemble CDR as a simple means of communicating rigorous uncertainty information to users is discussed. Our review concludes by providing eight concrete recommendations for good practice in providing and communicating uncertainty in EO-based climate data records.
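
    The practical importance of error correlation for aggregated quantities can be made concrete with a small calculation: averaging n data with per-datum standard uncertainty u gives roughly u/sqrt(n) when errors are independent, but no reduction at all when they are fully correlated. The function below is an illustrative sketch, not a CCI tool.

```python
import numpy as np

def uncertainty_of_mean(u, corr):
    """Standard uncertainty of the mean of data with per-datum uncertainties u
    and an error-correlation matrix corr (u and corr must be conformable)."""
    u = np.asarray(u, dtype=float)
    cov = np.outer(u, u) * np.asarray(corr, dtype=float)
    return np.sqrt(cov.sum()) / u.size

u = np.full(100, 0.5)
print(uncertainty_of_mean(u, np.eye(100)))           # ~0.05: independent errors
print(uncertainty_of_mean(u, np.ones((100, 100))))   # 0.5:   fully correlated errors
```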

  9. Reliable Real-Time Solution of Parametrized Partial Differential Equations: Reduced-Basis Output Bound Methods. Appendix 2

    NASA Technical Reports Server (NTRS)

    Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)

    2002-01-01

    We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.

  10. Automated inference procedure for the determination of cell growth parameters

    NASA Astrophysics Data System (ADS)

    Harris, Edouard A.; Koh, Eun Jee; Moffat, Jason; McMillen, David R.

    2016-01-01

    The growth rate and carrying capacity of a cell population are key to the characterization of the population's viability and to the quantification of its responses to perturbations such as drug treatments. Accurate estimation of these parameters necessitates careful analysis. Here, we present a rigorous mathematical approach for the robust analysis of cell count data, in which all the experimental stages of the cell counting process are investigated in detail with the machinery of Bayesian probability theory. We advance a flexible theoretical framework that permits accurate estimates of the growth parameters of cell populations and of the logical correlations between them. Moreover, our approach naturally produces an objective metric of avoidable experimental error, which may be tracked over time in a laboratory to detect instrumentation failures or lapses in protocol. We apply our method to the analysis of cell count data in the context of a logistic growth model by means of a user-friendly computer program that automates this analysis, and present some samples of its output. Finally, we note that a traditional least squares fit can provide misleading estimates of parameter values, because it ignores available information with regard to the way in which the data have actually been collected.
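
    For contrast with the Bayesian treatment described above, a plain least-squares fit of a logistic growth curve is sketched below on synthetic counts; as the authors note, such a fit can be misleading because it ignores how the counts were actually collected. All parameter values, the noise model and the variable names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, n0, r, k):
    """Logistic growth with initial count n0, rate r and carrying capacity k."""
    return k / (1.0 + (k / n0 - 1.0) * np.exp(-r * t))

# Synthetic data: counts every 4 hours over 48 hours with 5% multiplicative noise
t = np.linspace(0.0, 48.0, 13)
rng = np.random.default_rng(0)
counts = logistic(t, 5e4, 0.15, 1e6) * rng.normal(1.0, 0.05, t.size)

popt, pcov = curve_fit(logistic, t, counts, p0=[1e4, 0.1, 5e5])
print(dict(zip(["n0", "r", "k"], np.round(popt, 3))))
print("naive standard errors:", np.round(np.sqrt(np.diag(pcov)), 3))
```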

  11. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    Of the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross-section uncertainties. Two methods for propagating cross-section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to which method is currently more viable for computing uncertainties in burnup and transient calculations.

  12. Error identification in a high-volume clinical chemistry laboratory: Five-year experience.

    PubMed

    Jafri, Lena; Khan, Aysha Habib; Ghani, Farooq; Shakeel, Shahid; Raheem, Ahmed; Siddiqui, Imran

    2015-07-01

    Quality indicators for assessing the performance of a laboratory require a systematic and continuous approach in collecting and analyzing data. The aim of this study was to determine the frequency of errors utilizing the quality indicators in a clinical chemistry laboratory and to convert errors to the Sigma scale. Five-year quality indicator data of a clinical chemistry laboratory was evaluated to describe the frequency of errors. An 'error' was defined as a defect during the entire testing process from the time requisition was raised and phlebotomy was done until the result dispatch. An indicator with a Sigma value of 4 was considered good but a process for which the Sigma value was 5 (i.e. 99.977% error-free) was considered well controlled. In the five-year period, a total of 6,792,020 specimens were received in the laboratory. Among a total of 17,631,834 analyses, 15.5% were from within hospital. Total error rate was 0.45% and of all the quality indicators used in this study the average Sigma level was 5.2. Three indicators - visible hemolysis, failure of proficiency testing and delay in stat tests - were below 5 on the Sigma scale and highlight the need to rigorously monitor these processes. Using Six Sigma metrics quality in a clinical laboratory can be monitored more effectively and it can set benchmarks for improving efficiency.
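
    A minimal sketch of the conversion from an error rate to a Sigma level is shown below, using the conventional 1.5-sigma long-term shift, under which an error-free fraction of 99.977% corresponds to roughly Sigma = 5; the laboratory's exact convention may differ, and the counts are placeholders.

```python
from scipy.stats import norm

def sigma_level(defects: int, opportunities: int, shift: float = 1.5) -> float:
    """Short-term Sigma level for a given defect rate (1.5-sigma shift convention)."""
    dpo = defects / opportunities              # defects per opportunity
    return norm.ppf(1.0 - dpo) + shift

# 230 defects per million opportunities -> about Sigma = 5 (illustrative numbers)
print(round(sigma_level(230, 1_000_000), 2))
```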

  13. On a more rigorous gravity field processing for future LL-SST type gravity satellite missions

    NASA Astrophysics Data System (ADS)

    Daras, I.; Pail, R.; Murböck, M.

    2013-12-01

    In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of the low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned for the LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study investigates the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the accuracies they provide. We use full-scale simulations in a realistic environment to investigate whether standard processing techniques suffice to fully exploit the new sensor standards, performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor in taking full advantage of the new-generation sensors that future satellite missions will carry. We therefore created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of the system errors present in standard-precision processing, even for the error-free scenario, and reveal the improvements the new sensors will bring to gravity field solutions. As a next step, we analyze the contribution of individual error sources to the system's error budget: sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. Special care is given to the assessment of error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and to their consistent stochastic modeling within the adjustment process.

  14. Quantitative validation of carbon-fiber laminate low velocity impact simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provide qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.

  15. Interpretation of HCMM images: A regional study

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Potential users of HCMM data, especially those with only a cursory background in thermal remote sensing, are familiarized with the kinds of information contained in the images that can be extracted with some reliability solely from inspection of such standard products as those generated at NASA/GSFC and now archived in the National Space Science Data Center. Visual analysis of photoimagery is prone to various misimpressions and outright errors brought on by unawareness of the influence of physical factors, as well as by sometimes misleading tonal patterns introduced during photoprocessing. The quantitative approach, which relies on computer processing of digital HCMM data, field measurements, and the integration of rigorous mathematical models, can usually be used to identify, compensate for, or correct the contributions from at least some of the natural factors and those associated with photoprocessing. Color composite, day-IR, night-IR and visible images of California and Nevada are examined.

  16. The log-periodic-AR(1)-GARCH(1,1) model for financial crashes

    NASA Astrophysics Data System (ADS)

    Gazola, L.; Fernandes, C.; Pizzinga, A.; Riera, R.

    2008-02-01

    This paper responds to recent calls for more rigorous statistical methodology within the econophysics literature. To this end, we take an econometric approach to investigating the outcomes of the log-periodic model of price movements, which has been widely used to forecast financial crashes. In order to obtain reliable statistical inference for the unknown parameters, we incorporate an autoregressive dynamic and a conditional heteroskedasticity structure in the error term of the original model, yielding the log-periodic-AR(1)-GARCH(1,1) model. Both the original and the extended models are fitted to financial indices of the U.S. market, namely the S&P500 and NASDAQ. Our analysis reveals two main points: (i) the log-periodic-AR(1)-GARCH(1,1) model has residuals with better statistical properties, and (ii) the estimation of the parameter corresponding to the time of the financial crash is improved.
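
    For orientation, one common parameterization of the log-periodic price model with the added error structure reads as follows (the paper's exact specification may differ in details such as sign conventions):

    $$\ln p(t) \;=\; A + B\,(t_c - t)^{m}\Bigl[1 + C\cos\bigl(\omega\,\ln(t_c - t) + \phi\bigr)\Bigr] + \varepsilon_t,$$

    $$\varepsilon_t \;=\; \rho\,\varepsilon_{t-1} + u_t,\qquad u_t \mid \mathcal{F}_{t-1} \sim \mathcal{N}(0,\,h_t),\qquad h_t \;=\; \alpha_0 + \alpha_1 u_{t-1}^{2} + \beta_1 h_{t-1},$$

    where $t_c$ is the critical (crash) time, and the AR(1) and GARCH(1,1) terms capture the serial correlation and conditional heteroskedasticity of the residuals that the original log-periodic fit leaves unmodeled.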

  17. Quality control and conduct of genome-wide association meta-analyses.

    PubMed

    Winkler, Thomas W; Day, Felix R; Croteau-Chonka, Damien C; Wood, Andrew R; Locke, Adam E; Mägi, Reedik; Ferreira, Teresa; Fall, Tove; Graff, Mariaelisa; Justice, Anne E; Luan, Jian'an; Gustafsson, Stefan; Randall, Joshua C; Vedantam, Sailaja; Workalemahu, Tsegaselassie; Kilpeläinen, Tuomas O; Scherag, André; Esko, Tonu; Kutalik, Zoltán; Heid, Iris M; Loos, Ruth J F

    2014-05-01

    Rigorous organization and quality control (QC) are necessary to facilitate successful genome-wide association meta-analyses (GWAMAs) of statistics aggregated across multiple genome-wide association studies. This protocol provides guidelines for (i) organizational aspects of GWAMAs, and for (ii) QC at the study file level, the meta-level across studies and the meta-analysis output level. Real-world examples highlight issues experienced and solutions developed by the GIANT Consortium that has conducted meta-analyses including data from 125 studies comprising more than 330,000 individuals. We provide a general protocol for conducting GWAMAs and carrying out QC to minimize errors and to guarantee maximum use of the data. We also include details for the use of a powerful and flexible software package called EasyQC. Precise timings will be greatly influenced by consortium size. For consortia of comparable size to the GIANT Consortium, this protocol takes a minimum of about 10 months to complete.

  18. Standard representation and unified stability analysis for dynamic artificial neural network models.

    PubMed

    Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D

    2018-02-01

    An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, which include rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows reduced conservatism obtained by the conditions. Copyright © 2017. Published by Elsevier Ltd.

  19. Formal verification of human-automation interaction

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Heymann, Michael

    2002-01-01

    This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The paper describes a systematic methodology for evaluating whether the interface provides the necessary information about the machine to enable the operator to perform a specified task successfully and unambiguously. It also addresses the adequacy of information provided to the user via training material (e.g., user manual) about the machine's behavior. The essentials of the methodology, which can be automated and applied to the verification of large systems, are illustrated by several examples and through a case study of pilot interaction with an autopilot aboard a modern commercial aircraft. The expected application of this methodology is an augmentation and enhancement, by formal verification, of human-automation interfaces.

  20. Quantitative validation of carbon-fiber laminate low velocity impact simulations

    DOE PAGES

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    2015-09-26

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provide qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.

  1. Quality control and conduct of genome-wide association meta-analyses

    PubMed Central

    Winkler, Thomas W; Day, Felix R; Croteau-Chonka, Damien C; Wood, Andrew R; Locke, Adam E; Mägi, Reedik; Ferreira, Teresa; Fall, Tove; Graff, Mariaelisa; Justice, Anne E; Luan, Jian'an; Gustafsson, Stefan; Randall, Joshua C; Vedantam, Sailaja; Workalemahu, Tsegaselassie; Kilpeläinen, Tuomas O; Scherag, André; Esko, Tonu; Kutalik, Zoltán; Heid, Iris M; Loos, Ruth JF

    2014-01-01

    Rigorous organization and quality control (QC) are necessary to facilitate successful genome-wide association meta-analyses (GWAMAs) of statistics aggregated across multiple genome-wide association studies. This protocol provides guidelines for [1] organizational aspects of GWAMAs, and for [2] QC at the study file level, the meta-level across studies, and the meta-analysis output level. Real–world examples highlight issues experienced and solutions developed by the GIANT Consortium that has conducted meta-analyses including data from 125 studies comprising more than 330,000 individuals. We provide a general protocol for conducting GWAMAs and carrying out QC to minimize errors and to guarantee maximum use of the data. We also include details for use of a powerful and flexible software package called EasyQC. For consortia of comparable size to the GIANT consortium, the present protocol takes a minimum of about 10 months to complete. PMID:24762786

  2. New insights from cluster analysis methods for RNA secondary structure prediction

    PubMed Central

    Rogers, Emily; Heitsch, Christine

    2016-01-01

    A widening gap exists between the best practices for RNA secondary structure prediction developed by computational researchers and the methods used in practice by experimentalists. Minimum free energy (MFE) predictions, although broadly used, are outperformed by methods which sample from the Boltzmann distribution and data mine the results. In particular, moving beyond the single structure prediction paradigm yields substantial gains in accuracy. Furthermore, the largest improvements in accuracy and precision come from viewing secondary structures not at the base pair level but at lower granularity/higher abstraction. This suggests that random errors affecting precision and systematic ones affecting accuracy are both reduced by this “fuzzier” view of secondary structures. Thus experimentalists who are willing to adopt a more rigorous, multilayered approach to secondary structure prediction by iterating through these levels of granularity will be much better able to capture fundamental aspects of RNA base pairing. PMID:26971529

  3. SU-E-T-635: Process Mapping of Eye Plaque Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huynh, J; Kim, Y

    Purpose: To apply a risk-based assessment and analysis technique (AAPM TG 100) to eye plaque brachytherapy treatment of ocular melanoma. Methods: The roles and responsibilities of the personnel involved in eye plaque brachytherapy are defined for the retinal specialist, radiation oncologist, nurse and medical physicist. The entire procedure was examined carefully: major processes were identified first, and the details of each major process were then followed. Results: Seventy-one potential failure modes were identified in total. The eight major processes (with the number of modes in each) are patient consultation (2 modes), pretreatment tumor localization (11), treatment planning (13), seed ordering and calibration (10), eye plaque assembly (10), implantation (11), removal (11), and deconstruction (3). Half of the total modes (36) are related to the physicist, although the physicist is not involved in steps such as the actual suturing and removal of the plaque. Conclusion: Failure modes can arise not only from physicist-related procedures such as treatment planning and source activity calibration, but also from the more clinical procedures performed by other medical staff. Improving the accuracy of communication in non-physicist-related clinical procedures could be one approach to preventing human errors, while a more rigorous physics double check would reduce errors in physicist-related procedures. Eventually, based on this detailed process map, failure mode and effects analysis (FMEA) will identify the top tier of modes by ranking all possible modes by risk priority number (RPN). For those high-risk modes, fault tree analysis (FTA) will provide possible preventive action plans.
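
    The FMEA step mentioned in the conclusion reduces to a simple ranking by Risk Priority Number (RPN = severity x occurrence x detectability, each scored on a 1-10 scale). The sketch below is illustrative only; the listed modes and scores are invented and are not the seventy-one modes identified in the abstract.

```python
# Rank hypothetical failure modes by Risk Priority Number (RPN).
modes = [
    # (description, severity, occurrence, detectability), all scored 1-10
    ("wrong seed activity entered in treatment plan", 8, 3, 4),
    ("plaque assembled with a seed gap",              7, 2, 5),
    ("treatment duration miscommunicated",            9, 2, 3),
]
for name, s, o, d in sorted(modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN={s * o * d:3d}  {name}")
```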

  4. Rate-loss analysis of an efficient quantum repeater architecture

    NASA Astrophysics Data System (ADS)

    Guha, Saikat; Krovi, Hari; Fuchs, Christopher A.; Dutton, Zachary; Slater, Joshua A.; Simon, Christoph; Tittel, Wolfgang

    2015-08-01

    We analyze an entanglement-based quantum key distribution (QKD) architecture that uses a linear chain of quantum repeaters employing photon-pair sources, spectral multiplexing, linear-optic Bell-state measurements, multimode quantum memories, and classical-only error correction. Assuming perfect sources, we find an exact expression for the secret-key rate, and an analytical description of how errors propagate through the repeater chain, as a function of various loss-and-noise parameters of the devices. We show via an explicit analytical calculation, which separately addresses the effects of the principal nonidealities, that this scheme achieves a secret-key rate that surpasses the Takeoka-Guha-Wilde bound—a recently found fundamental limit to the rate-vs-loss scaling achievable by any QKD protocol over a direct optical link—thereby providing one of the first rigorous proofs of the efficacy of a repeater protocol. We explicitly calculate the end-to-end shared noisy quantum state generated by the repeater chain, which could be useful for analyzing the performance of other non-QKD quantum protocols that require establishing long-distance entanglement. We evaluate that shared state's fidelity and the achievable entanglement-distillation rate, as a function of the number of repeater nodes, total range, and various loss-and-noise parameters of the system. We extend our theoretical analysis to encompass sources with nonzero two-pair-emission probability, using an efficient exact numerical evaluation of the quantum state propagation and measurements. We expect our results to spur formal rate-loss analysis of other repeater protocols and also to provide useful abstractions to seed analyses of quantum networks of complex topologies.
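
    For reference, the repeaterless benchmark mentioned above can be stated (in one common form of the Takeoka-Guha-Wilde bound) as an upper limit on the secret-key rate, in bits per mode, over a pure-loss channel of end-to-end transmissivity $\eta$:

    $$R_{\mathrm{TGW}}(\eta) \;=\; \log_2\!\frac{1+\eta}{1-\eta} \;\approx\; \frac{2\eta}{\ln 2} \qquad (\eta \ll 1),$$

    so a repeater architecture demonstrates a genuine advantage when its achievable key rate exceeds this bound at the same total loss.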

  5. Rigorous coupled wave analysis of acousto-optics with relativistic considerations.

    PubMed

    Xia, Guoqiang; Zheng, Weijian; Lei, Zhenggang; Zhang, Ruolan

    2015-09-01

    A relativistic analysis of acousto-optics is presented, and a rigorous coupled wave analysis is generalized for the diffraction of the acousto-optical effect. An acoustic wave generates a grating with temporally and spatially modulated permittivity, hindering direct applications of the rigorous coupled wave analysis for the acousto-optical effect. In a reference frame which moves with the acoustic wave, the grating is static, the medium moves, and the coupled wave equations for the static grating may be derived. Floquet's theorem is then applied to cast these equations into an eigenproblem. Using a Lorentz transformation, the electromagnetic fields in the grating region are transformed to the lab frame where the medium is at rest, and relativistic Doppler frequency shifts are introduced into various diffraction orders. In the lab frame, the boundary conditions are considered and the diffraction efficiencies of various orders are determined. This method is rigorous and general, and the plane waves in the resulting expansion satisfy the dispersion relation of the medium and are propagation modes. Properties of various Bragg diffractions are results, rather than preconditions, of this method. Simulations of an acousto-optical tunable filter made by paratellurite, TeO(2), are given as examples.

  6. Demonstration of Qubit Operations Below a Rigorous Fault Tolerance Threshold With Gate Set Tomography (Open Access, Publisher’s Version)

    DTIC Science & Technology

    2017-02-15

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone... information processors have been demonstrated experimentally using superconducting circuits, electrons in semiconductors, trapped atoms and... qubit quantum information processor has been realized, and single-qubit gates have demonstrated randomized benchmarking (RB) infidelities as low as 10

  7. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  8. A rigorous computational approach to linear response

    NASA Astrophysics Data System (ADS)

    Bahsoun, Wael; Galatolo, Stefano; Nisoli, Isaia; Niu, Xiaolong

    2018-03-01

    We present a general setting in which the formula describing the linear response of the physical measure of a perturbed system can be obtained. In this general setting we obtain an algorithm to rigorously compute the linear response. We apply our results to expanding circle maps. In particular, we present examples where we compute, up to a pre-specified error in the L∞-norm, the response of expanding circle maps under stochastic and deterministic perturbations. Moreover, we present an example where we compute, up to a pre-specified error in the L1-norm, the response of the intermittent family at the boundary, i.e. when the unperturbed system is the doubling map.

  9. Technological characteristics of pre- and post-rigor deboned beef mixtures from Holstein steers and quality attributes of cooked beef sausage.

    PubMed

    Sukumaran, Anuraj T; Holtcamp, Alexander J; Campbell, Yan L; Burnett, Derris; Schilling, Mark W; Dinh, Thu T N

    2018-06-07

    The objective of this study was to determine the effects of deboning time (pre- and post-rigor), processing steps (grinding - GB; salting - SB; batter formulation - BB), and storage time on the quality of raw beef mixtures and vacuum-packaged cooked sausage, produced using a commercial formulation with 0.25% phosphate. The pH was greater in pre-rigor GB and SB than in post-rigor GB and SB (P < .001). However, deboning time had no effect on metmyoglobin reducing activity, cooking loss, and color of raw beef mixtures. Protein solubility of pre-rigor beef mixtures (124.26 mg/kg) was greater than that of post-rigor beef (113.93 mg/kg; P = .071). TBARS were increased in BB but decreased during vacuum storage of cooked sausage (P ≤ .018). Except for chewiness and saltiness being 52.9 N-mm and 0.3 points greater in post-rigor sausage (P = .040 and 0.054, respectively), texture profile analysis and trained panelists detected no difference in texture between pre- and post-rigor sausage. Published by Elsevier Ltd.

  10. Optical spectral analysis of ultra-weak photon emission from tissue culture and yeast cells

    NASA Astrophysics Data System (ADS)

    Nerudová, Michaela; Červinková, Kateřina; Hašek, Jiří; Cifra, Michal

    2015-01-01

    Optical spectral analysis of ultra-weak photon emission (UPE) could be utilized for non-invasive diagnostics of the state of biological systems and for elucidating the underlying mechanisms of UPE generation. Optical spectra of UPE from differentiated HL-60 cells and yeast cells (Saccharomyces cerevisiae) were investigated. Induced photon emission of the neutrophil-like cells and spontaneous photon emission of the yeast cells were measured using a highly sensitive photomultiplier module (Hamamatsu H7360-01) in a thermally regulated light-tight chamber. The respiratory burst of the neutrophil-like HL-60 cells was induced with PMA (phorbol 12-myristate, 13-acetate). PMA activates the assembly of NADPH oxidase, which induces a rapid formation of reactive oxygen species (ROS). Long-pass edge filters (cut-on wavelengths of 350 nm, 400 to 600 nm in 25 nm steps, and 650 nm) were used for the optical spectral analysis. Propagation of error for indirect measurements and the standard deviation were used to assess the reliability of the measured spectra. The results indicate that photon emission from both cell cultures is detectable in six of the eight examined wavelength ranges, with different percentage distributions between the cell suspensions, particularly 450-475, 475-500, 500-525, 525-550, 550-575 and 575-600 nm. The 450 to 550 nm range coincides with the range of photon emission from triplet excited carbonyls (350-550 nm). Both cell cultures also emitted photons in the 550 to 600 nm range, but this range does not correspond to any known emitter. To summarize, we have demonstrated a clear difference in the UPE spectra of the two organisms using a rigorous methodology and error analysis.
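
    The band reconstruction implied by the long-pass filter series, together with the standard propagation of independent errors, can be sketched in a few lines; the count rates and uncertainties below are illustrative placeholders.

```python
import numpy as np

def band_estimate(lp_short, lp_long, sd_short, sd_long):
    """Counts in the band between two long-pass cut-on wavelengths.

    lp_short / lp_long: mean count rates through the filters with the shorter
    and longer cut-on wavelength; sd_*: their standard deviations. The band
    value is the difference, with its uncertainty from propagation of
    independent errors.
    """
    band = lp_short - lp_long
    sd_band = np.sqrt(sd_short**2 + sd_long**2)
    return band, sd_band

# e.g. 500 nm vs 525 nm long-pass measurements (illustrative values)
print(band_estimate(132.0, 118.5, 4.2, 3.9))
```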

  11. Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.

    PubMed

    Tellinghuisen, Joel

    2018-04-01

    Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn requires knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
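
    The point about non-Gaussian parameter distributions can be checked with a short Monte Carlo sketch: a Gaussian estimate of a linear parameter a is pushed through the nonlinear reparametrizations mentioned above, and the skew of the resulting distributions shrinks as the relative standard error shrinks. The numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
for rel_se in (0.15, 0.02):                  # relative SE of the linear parameter a
    a_hat = 2.0 * (1.0 + rel_se * rng.standard_normal(200_000))
    for name, f in (("ln a", np.log), ("exp a", np.exp), ("1/a", lambda x: 1.0 / x)):
        z = f(a_hat)
        skew = np.mean((z - z.mean()) ** 3) / z.std() ** 3
        print(f"rel SE {rel_se:.2f}  {name:5s}  skew {skew:+.2f}")
```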

  12. Statistical inference with quantum measurements: methodologies for nitrogen vacancy centers in diamond

    NASA Astrophysics Data System (ADS)

    Hincks, Ian; Granade, Christopher; Cory, David G.

    2018-01-01

    The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
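
    A deliberately simplified version of the inference problem can be written down directly: each shot yields Poisson-distributed photon counts whose mean mixes a bright-state rate, a dark-state rate and a background rate, and the bright-state probability p (the "biased coin") is estimated by maximum likelihood. This is a toy placeholder, not the paper's full Lindblad-based model; all rates are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(p, counts, rate_bright, rate_dark, rate_bg):
    """Negative Poisson log-likelihood (up to a constant) for the counts,
    with per-shot mean p*rate_bright + (1-p)*rate_dark + rate_bg."""
    lam = p * rate_bright + (1.0 - p) * rate_dark + rate_bg
    return np.sum(lam - counts * np.log(lam))

counts = np.random.default_rng(2).poisson(0.03, size=5000)   # toy data
result = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded",
                         args=(counts, 0.04, 0.02, 0.005))
print(round(result.x, 3))   # maximum-likelihood estimate of p
```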

  13. Elimination of single-beam substitution error in diffuse reflectance measurements using an integrating sphere.

    PubMed

    Vidovic, Luka; Majaron, Boris

    2014-02-01

    Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS). To account for the incident light spectrum, measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light. After replacing the white standard with the test sample of interest, DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, such a substitution may alter the fluence rate inside the IS. This leads to distortion of measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of more complex experimental setups, the literature states that only approximate corrections of the SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical method for elimination of SBSE when using IS equipped with an additional reference port. Two additional measurements performed at this port enable a rigorous elimination of SBSE. Our experimental characterization of SBSE is replicated by theoretical derivation. This offers an alternative possibility of computational removal of SBSE based on advance characterization of a specific DRS setup. The influence of SBSE on quantitative analysis of DRS is illustrated in one application example.

  14. Review of reactor pressure vessel evaluation report for Yankee Rowe Nuclear Power Station (YAEC No. 1735)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheverton, R.D.; Dickson, T.L.; Merkle, J.G.

    1992-03-01

    The Yankee Atomic Electric Company (YAEC) has performed an Integrated Pressurized Thermal Shock (IPTS)-type evaluation of the Yankee Rowe reactor pressure vessel in accordance with the PTS Rule (10 CFR 50.61) and US Regulatory Guide 1.154. The Oak Ridge National Laboratory (ORNL) reviewed the YAEC document and performed an independent probabilistic fracture-mechanics analysis. The review included a comparison of the Pacific Northwest Laboratory (PNL) and ORNL probabilistic fracture-mechanics codes (VISA-II and OCA-P, respectively). The review identified minor errors and one significant difference in philosophy; the two codes also have a few dissimilar peripheral features. Aside from these differences, VISA-II and OCA-P are very similar and, with errors corrected and when adjusted for the difference in the treatment of the fracture-toughness distribution through the wall, yield essentially the same value of the conditional probability of failure. The ORNL independent evaluation indicated RT(NDT) values considerably greater than those corresponding to the PTS-Rule screening criteria and a frequency of failure substantially greater than that corresponding to the "primary acceptance criterion" in US Regulatory Guide 1.154. Time constraints, however, prevented as rigorous a treatment as the situation deserves. Thus, these results are very preliminary.

  15. Review of reactor pressure vessel evaluation report for Yankee Rowe Nuclear Power Station (YAEC No. 1735)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheverton, R.D.; Dickson, T.L.; Merkle, J.G.

    1992-03-01

    The Yankee Atomic Electric Company (YAEC) has performed an Integrated Pressurized Thermal Shock (IPTS)-type evaluation of the Yankee Rowe reactor pressure vessel in accordance with the PTS Rule (10 CFR 50.61) and US Regulatory Guide 1.154. The Oak Ridge National Laboratory (ORNL) reviewed the YAEC document and performed an independent probabilistic fracture-mechanics analysis. The review included a comparison of the Pacific Northwest Laboratory (PNL) and ORNL probabilistic fracture-mechanics codes (VISA-II and OCA-P, respectively). The review identified minor errors and one significant difference in philosophy; the two codes also have a few dissimilar peripheral features. Aside from these differences, VISA-II and OCA-P are very similar and, with errors corrected and when adjusted for the difference in the treatment of the fracture-toughness distribution through the wall, yield essentially the same value of the conditional probability of failure. The ORNL independent evaluation indicated RT(NDT) values considerably greater than those corresponding to the PTS-Rule screening criteria and a frequency of failure substantially greater than that corresponding to the "primary acceptance criterion" in US Regulatory Guide 1.154. Time constraints, however, prevented as rigorous a treatment as the situation deserves. Thus, these results are very preliminary.

  16. Pilot error in air carrier accidents: does age matter?

    PubMed

    Li, Guohua; Grabowski, Jurek G; Baker, Susan P; Rebok, George W

    2006-07-01

    The relationship between pilot age and safety performance has been the subject of research and controversy since the "Age 60 Rule" became effective in 1960. This study aimed to examine age-related differences in the prevalence and patterns of pilot error in air carrier accidents. Investigation reports from the National Transportation Safety Board for accidents involving Part 121 operations in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Accident circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Of the 558 air carrier accidents studied, 25% resulted from turbulence, 21% from mechanical failure, 16% from taxiing events, 13% from loss of control at landing or takeoff, and 25% from other causes. Accidents involving older pilots were more likely to be caused by turbulence, whereas accidents involving younger pilots were more likely to be taxiing events. Pilot error was a contributing factor in 34%, 38%, 35%, and 34% of the accidents involving pilots ages 25-34 yr, 35-44 yr, 45-54 yr, and 55-59 yr, respectively (p = 0.87). The patterns of pilot error were similar across age groups. Overall, 26% of the pilot errors identified were inattentiveness, 22% flawed decisions, 22% mishandled aircraft kinetics, and 11% poor crew interactions. The prevalence and patterns of pilot error in air carrier accidents do not seem to change with pilot age. The lack of association between pilot age and error may be due to the "safe worker effect" resulting from the rigorous selection processes and certification standards for professional pilots.
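
    The age-group comparison reported above is a standard chi-square test of independence on a contingency table of accidents with and without pilot error; a sketch is below, with illustrative counts chosen only to mirror the reported percentages, not the study's actual data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: age groups 25-34, 35-44, 45-54, 55-59; columns: [pilot error, no pilot error]
table = np.array([
    [34,  66],
    [57,  93],
    [63, 117],
    [44,  84],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")
```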

  17. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alonso, Juan J.; Iaccarino, Gianluca

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A solution to the long-time integration problem of spectral chaos approaches; 4. A rigorous methodology to account for aleatory and epistemic uncertainties, to emphasize the most important variables via dimension reduction and dimension-adaptive refinement, and to support fusion with experimental data using Bayesian inference; 5. The application of novel methodologies to time-dependent reliability studies in wind turbine applications including a number of efforts relating to the uncertainty quantification in vertical-axis wind turbine applications. In this report, we summarize all accomplishments in the project (during the time period specified) focusing on advances in UQ algorithms and deployment efforts to the wind turbine application area. Detailed publications in each of these areas have also been completed and are available from the respective conference proceedings and journals as detailed in a later section.

  18. A Formal Methods Approach to the Analysis of Mode Confusion

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Miller, Steven P.; Potts, James N.; Carreno, Victor A.

    2004-01-01

    The goal of the new NASA Aviation Safety Program (AvSP) is to reduce the civil aviation fatal accident rate by 80% in ten years and 90% in twenty years. This program is being driven by the accident data with a focus on the most recent history. Pilot error is the most commonly cited cause for fatal accidents (up to 70%) and obviously must be given major consideration in this program. While the greatest source of pilot error is the loss of situation awareness, mode confusion is increasingly becoming a major contributor as well. The January 30, 1995 issue of Aviation Week lists 184 incidents and accidents involving mode awareness, including the Bangalore A320 crash 2/14/90, the Strasbourg A320 crash 1/20/92, the Mulhouse-Habsheim A320 crash 6/26/88, and the Toulouse A330 crash 6/30/94. These incidents and accidents reveal that pilots sometimes become confused about what the cockpit automation is doing. Consequently, human factors research is an obvious investment area. However, even a cursory look at the accident data reveals that the mode confusion problem is much deeper than just training deficiencies and a lack of human-oriented design. This is readily acknowledged by human factors experts. It seems that further progress in human factors must come through a deeper scrutiny of the internals of the automation. It is in this arena that formal methods can contribute. Formal methods refers to the use of techniques from logic and discrete mathematics in the specification, design, and verification of computer systems, both hardware and software. The fundamental goal of formal methods is to capture requirements, designs and implementations in a mathematically based model that can be analyzed in a rigorous manner. Research in formal methods is aimed at automating this analysis as much as possible. By capturing the internal behavior of a flight deck in a rigorous and detailed formal model, the dark corners of a design can be analyzed. This paper will explore how formal models and analyses can be used to help eliminate mode confusion from flight deck designs and at the same time increase our confidence in the safety of the implementation. The paper is based upon interim results from a new project involving NASA Langley and Rockwell Collins in applying formal methods to a realistic business jet Flight Guidance System (FGS).

  19. Proof of Heisenberg's error-disturbance relation.

    PubMed

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F

    2013-10-18

    While the slogan "no measurement without disturbance" has established itself under the name of the Heisenberg effect in the consciousness of the scientifically interested public, a precise statement of this fundamental feature of the quantum world has remained elusive, and serious attempts at rigorous formulations of it as a consequence of quantum theory have led to seemingly conflicting preliminary results. Here we show that despite recent claims to the contrary [L. Rozema et al., Phys. Rev. Lett. 109, 100404 (2012)], Heisenberg-type inequalities can be proven that describe a tradeoff between the precision of a position measurement and the necessary resulting disturbance of momentum (and vice versa). More generally, these inequalities are instances of an uncertainty relation for the imprecisions of any joint measurement of position and momentum. Measures of error and disturbance are here defined as figures of merit characteristic of measuring devices. As such they are state independent, each giving worst-case estimates across all states, in contrast to previous work that is concerned with the relationship between error and disturbance in an individual state.

  20. Single-case synthesis tools I: Comparing tools to evaluate SCD quality and rigor.

    PubMed

    Zimmerman, Kathleen N; Ledford, Jennifer R; Severini, Katherine E; Pustejovsky, James E; Barton, Erin E; Lloyd, Blair P

    2018-03-03

    Tools for evaluating the quality and rigor of single case research designs (SCD) are often used when conducting SCD syntheses. Preferred components include evaluations of design features related to the internal validity of SCD to obtain quality and/or rigor ratings. Three tools for evaluating the quality and rigor of SCD (Council for Exceptional Children, What Works Clearinghouse, and Single-Case Analysis and Design Framework) were compared to determine if conclusions regarding the effectiveness of antecedent sensory-based interventions for young children changed based on choice of quality evaluation tool. Evaluation of SCD quality differed across tools, suggesting selection of quality evaluation tools impacts evaluation findings. Suggestions for selecting an appropriate quality and rigor assessment tool are provided and across-tool conclusions are drawn regarding the quality and rigor of studies. Finally, authors provide guidance for using quality evaluations in conjunction with outcome analyses when conducting syntheses of interventions evaluated in the context of SCD. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Adaptive Neural Network Control for the Trajectory Tracking of the Furuta Pendulum.

    PubMed

    Moreno-Valenzuela, Javier; Aguilar-Avelar, Carlos; Puga-Guzman, Sergio A; Santibanez, Victor

    2016-12-01

    The purpose of this paper is to introduce a novel adaptive neural network-based control scheme for the Furuta pendulum, which is a two degree-of-freedom underactuated system. Adaptation laws for the input and output weights are also provided. The proposed controller is able to guarantee tracking of a reference signal for the arm while the pendulum remains in the upright position. The key aspect of the derivation of the controller is the definition of an output function that depends on the position and velocity errors. The internal and external dynamics are rigorously analyzed, thereby proving the uniform ultimate boundedness of the error trajectories. By using real-time experiments, the new scheme is compared with other control methodologies, therein demonstrating the improved performance of the proposed adaptive algorithm.

  2. Characterising bias in regulatory risk and decision analysis: An analysis of heuristics applied in health technology appraisal, chemicals regulation, and climate change governance.

    PubMed

    MacGillivray, Brian H

    2017-08-01

    In many environmental and public health domains, heuristic methods of risk and decision analysis must be relied upon, either because problem structures are ambiguous, reliable data is lacking, or decisions are urgent. This introduces an additional source of uncertainty beyond model and measurement error - uncertainty stemming from relying on inexact inference rules. Here we identify and analyse heuristics used to prioritise risk objects, to discriminate between signal and noise, to weight evidence, to construct models, to extrapolate beyond datasets, and to make policy. Some of these heuristics are based on causal generalisations, yet can misfire when these relationships are presumed rather than tested (e.g. surrogates in clinical trials). Others are conventions designed to confer stability to decision analysis, yet which may introduce serious error when applied ritualistically (e.g. significance testing). Some heuristics can be traced back to formal justifications, but only subject to strong assumptions that are often violated in practical applications. Heuristic decision rules (e.g. feasibility rules) in principle act as surrogates for utility maximisation or distributional concerns, yet in practice may neglect costs and benefits, be based on arbitrary thresholds, and be prone to gaming. We highlight the problem of rule-entrenchment, where analytical choices that are in principle contestable are arbitrarily fixed in practice, masking uncertainty and potentially introducing bias. Strategies for making risk and decision analysis more rigorous include: formalising the assumptions and scope conditions under which heuristics should be applied; testing rather than presuming their underlying empirical or theoretical justifications; using sensitivity analysis, simulations, multiple bias analysis, and deductive systems of inference (e.g. directed acyclic graphs) to characterise rule uncertainty and refine heuristics; adopting "recovery schemes" to correct for known biases; and basing decision rules on clearly articulated values and evidence, rather than convention. Copyright © 2017. Published by Elsevier Ltd.

  3. Monitoring muscle optical scattering properties during rigor mortis

    NASA Astrophysics Data System (ADS)

    Xia, J.; Ranasinghesagara, J.; Ku, C. W.; Yao, G.

    2007-09-01

    The sarcomere is the fundamental functional unit for force generation in skeletal muscle. In addition, sarcomere structure is an important factor affecting the eating quality of muscle food, the meat. The sarcomere structure is altered significantly during rigor mortis, which is the critical stage involved in transforming muscle to meat. In this paper, we investigated optical scattering changes during the rigor process in Sternomandibularis muscles. The measured optical scattering parameters were analyzed along with the simultaneously measured passive tension, pH value, and histological observations. We found that the temporal changes of optical scattering, passive tension, pH value and fiber microstructures were closely correlated during the rigor process. These results suggested that sarcomere structure changes during rigor mortis can be monitored and characterized by optical scattering, which may find practical applications in predicting meat quality.

  4. Invariant Tori in the Secular Motions of the Three-body Planetary Systems

    NASA Astrophysics Data System (ADS)

    Locatelli, Ugo; Giorgilli, Antonio

    We consider the problem of the applicability of the KAM theorem to a realistic problem of three bodies. In the framework of the averaged dynamics over the fast angles for the Sun-Jupiter-Saturn system we can prove the perpetual stability of the orbit. The proof is based on semi-numerical algorithms requiring both explicit algebraic manipulations of series and analytical estimates. The proof is made rigorous by using interval arithmetic in order to control the numerical errors.
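
    The role interval arithmetic plays in such proofs can be illustrated with a toy sketch (not the authors' code): every arithmetic operation returns an interval guaranteed to contain the true result, so the effect of rounding can be bounded explicitly. Here outward rounding is imitated by widening each result by a small epsilon, purely for illustration.

      class Interval:
          """Closed interval [lo, hi]; outward rounding is imitated by EPS widening."""
          EPS = 1e-15

          def __init__(self, lo, hi=None):
              self.lo = lo
              self.hi = lo if hi is None else hi

          def __add__(self, other):
              return Interval(self.lo + other.lo - Interval.EPS,
                              self.hi + other.hi + Interval.EPS)

          def __mul__(self, other):
              p = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
              return Interval(min(p) - Interval.EPS, max(p) + Interval.EPS)

          def __repr__(self):
              return f"[{self.lo:.17g}, {self.hi:.17g}]"

      # enclose (1/3 + 1/7) * 21: the exact value 10 lies inside the computed interval
      print((Interval(1 / 3) + Interval(1 / 7)) * Interval(21.0))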

  5. A Renormalisation Group Method. V. A Single Renormalisation Group Step

    NASA Astrophysics Data System (ADS)

    Brydges, David C.; Slade, Gordon

    2015-05-01

    This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component |φ|⁴ model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.

  6. Kerr Reservoir LANDSAT experiment analysis for November 1980

    NASA Technical Reports Server (NTRS)

    Lecroy, S. R.

    1982-01-01

    An experiment was conducted on the waters of Kerr Reservoir to determine if reliable algorithms could be developed that relate water quality parameters to remotely sensed data. LANDSAT radiance data was used in the analysis since it is readily available and covers the area of interest on a regular basis. By properly designing the experiment, many of the unwanted variations due to atmospheric, solar, and hydraulic changes were minimized. The algorithms developed were constrained to satisfy rigorous statistical criteria before they could be considered dependable in predicting water quality parameters. A complete mix of different types of algorithms using the LANDSAT bands was generated to provide a thorough understanding of the relationships among the data involved. The study demonstrated that for the ranges measured, the algorithms that satisfactorily represented the data are mostly linear and only require a maximum of one or two LANDSAT bands. Ratioing techniques did not improve the results since the initial design of the experiment minimized the errors that this procedure is effective against. Good correlations were established for inorganic suspended solids, iron, turbidity, and Secchi depth.

  7. A Database of Herbaceous Vegetation Responses to Elevated Atmospheric CO2 (NDP-073)

    DOE Data Explorer

    Jones, Michael H [The Ohio State Univ., Columbus, OH (United States); Curtis, Peter S [The Ohio State Univ., Columbus, OH (United States); Cushman, Robert M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brenkert, Antoinette L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    1999-01-01

    To perform a statistically rigorous meta-analysis of research results on the response by herbaceous vegetation to increased atmospheric CO2 levels, a multiparameter database of responses was compiled from the published literature. Seventy-eight independent CO2-enrichment studies, covering 53 species and 26 response parameters, reported mean response, sample size, and variance of the response (either as standard deviation or standard error). An additional 43 studies, covering 25 species and 6 response parameters, did not report variances. This numeric data package accompanies the Carbon Dioxide Information Analysis Center's (CDIAC's) NDP-072, which provides similar information for woody vegetation. This numeric data package contains a 30-field data set of CO2-exposure experiment responses by herbaceous plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO2-exposure experiments and specific comments relevant to the data in the data sets, and this documentation file (which includes SAS and Fortran codes to read the ASCII data file; SAS is a registered trademark of the SAS Institute, Inc., Cary, North Carolina 27511).

  8. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    PubMed

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
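
    As a rough illustration of the generalizability-theory computation the toolbox automates (the ERA Toolbox itself is Matlab; the toy scores, design, and variance decomposition here are simplified assumptions), the following estimates person and residual variance components for a persons-by-trials design and forms a generalizability coefficient for a chosen number of retained trials.

      import numpy as np

      # toy ERP scores: rows = participants, columns = retained trials (invented numbers)
      scores = np.array([[4.1, 4.5, 3.9, 4.3],
                         [6.0, 5.7, 6.2, 5.9],
                         [5.1, 4.8, 5.3, 5.0]])
      n_p, n_t = scores.shape
      grand = scores.mean()

      ms_p = n_t * np.sum((scores.mean(axis=1) - grand) ** 2) / (n_p - 1)
      resid = (scores - scores.mean(axis=1, keepdims=True)
                      - scores.mean(axis=0, keepdims=True) + grand)
      ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_t - 1))

      var_person = max((ms_p - ms_res) / n_t, 0.0)   # person variance component
      var_residual = ms_res                          # person-by-trial interaction + error

      n_kept = 4                                     # trials retained for averaging
      g_coefficient = var_person / (var_person + var_residual / n_kept)
      print(f"G coefficient with {n_kept} trials retained: {g_coefficient:.3f}")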

  9. A case of instantaneous rigor?

    PubMed

    Pirch, J; Schulz, Y; Klintschar, M

    2013-09-01

    The question of whether instantaneous rigor mortis (IR), the hypothetical sudden occurrence of stiffening of the muscles upon death, actually exists has been controversially debated over the last 150 years. While modern German forensic literature rejects this concept, the contemporary British literature is more willing to embrace it. We present the case of a young woman who suffered from diabetes and who was found dead in an upright standing position with back and shoulders leaned against a punchbag and a cupboard. Rigor mortis was fully established, and livor mortis was pronounced and consistent with the position in which the body was found. After autopsy and toxicological analysis, it was stated that death most probably occurred due to a ketoacidotic coma with markedly increased values of glucose and lactate in the cerebrospinal fluid as well as acetone in blood and urine. Although the position of the body is most unusual, a detailed analysis revealed that it is a stable position even without rigor mortis. Therefore, this case does not further support the controversial concept of IR.

  10. Addressing the impact of environmental uncertainty in plankton model calibration with a dedicated software system: the Marine Model Optimization Testbed (MarMOT 1.1 alpha)

    NASA Astrophysics Data System (ADS)

    Hemmings, J. C. P.; Challenor, P. G.

    2012-04-01

    A wide variety of different plankton system models have been coupled with ocean circulation models, with the aim of understanding and predicting aspects of environmental change. However, an ability to make reliable inferences about real-world processes from the model behaviour demands a quantitative understanding of model error that remains elusive. Assessment of coupled model output is inhibited by relatively limited observing system coverage of biogeochemical components. Any direct assessment of the plankton model is further inhibited by uncertainty in the physical state. Furthermore, comparative evaluation of plankton models on the basis of their design is inhibited by the sensitivity of their dynamics to many adjustable parameters. Parameter uncertainty has been widely addressed by calibrating models at data-rich ocean sites. However, relatively little attention has been given to quantifying uncertainty in the physical fields required by the plankton models at these sites, and tendencies in the biogeochemical properties due to the effects of horizontal processes are often neglected. Here we use model twin experiments, in which synthetic data are assimilated to estimate a system's known "true" parameters, to investigate the impact of error in a plankton model's environmental input data. The experiments are supported by a new software tool, the Marine Model Optimization Testbed, designed for rigorous analysis of plankton models in a multi-site 1-D framework. Simulated errors are derived from statistical characterizations of the mixed layer depth, the horizontal flux divergence tendencies of the biogeochemical tracers and the initial state. Plausible patterns of uncertainty in these data are shown to produce strong temporal and spatial variability in the expected simulation error variance over an annual cycle, indicating variation in the significance attributable to individual model-data differences. An inverse scheme using ensemble-based estimates of the simulation error variance to allow for this environment error performs well compared with weighting schemes used in previous calibration studies, giving improved estimates of the known parameters. The efficacy of the new scheme in real-world applications will depend on the quality of statistical characterizations of the input data. Practical approaches towards developing reliable characterizations are discussed.
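
    The weighting idea behind the new inverse scheme can be sketched schematically (this is not MarMOT code; the toy model and numbers are invented): each model-data misfit is down-weighted by the sum of the observation error variance and an ensemble-based estimate of the simulation error variance caused by uncertain environmental inputs.

      import numpy as np

      def weighted_cost(params, observations, obs_var, env_ensemble, run_model):
          """Model-data misfit weighted by observation variance plus an
          ensemble-based estimate of the simulation error variance."""
          sims = np.array([run_model(params, env) for env in env_ensemble])
          sim_error_var = sims.var(axis=0, ddof=1)   # spread due to environment error
          weights = 1.0 / (obs_var + sim_error_var)
          return float(np.sum(weights * (sims.mean(axis=0) - observations) ** 2))

      # tiny illustration: a linear toy "plankton model" driven by an uncertain forcing
      rng = np.random.default_rng(1)
      environments = 1.0 + 0.2 * rng.standard_normal(20)        # plausible forcing realisations
      toy_model = lambda p, env: p * env * np.arange(1.0, 6.0)  # 5 synthetic observables
      observations = 2.0 * np.arange(1.0, 6.0) + 0.1 * rng.standard_normal(5)

      print(weighted_cost(2.0, observations, obs_var=0.01,
                          env_ensemble=environments, run_model=toy_model))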

  11. Sequential reconstruction of driving-forces from nonlinear nonstationary dynamics

    NASA Astrophysics Data System (ADS)

    Güntürkün, Ulaş

    2010-07-01

    This paper describes a functional analysis-based method for the estimation of driving-forces from nonlinear dynamic systems. The driving-forces account for the perturbation inputs induced by the external environment or the secular variations in the internal variables of the system. The proposed algorithm is applicable to problems for which there is too little or no prior knowledge to build a rigorous mathematical model of the unknown dynamics. We derive the estimator conditioned on the differentiability of the unknown system’s mapping, and smoothness of the driving-force. The proposed algorithm is an adaptive sequential realization of the blind prediction error method, where the basic idea is to predict the observables, and retrieve the driving-force from the prediction error. Our realization of this idea is embodied by predicting the observables one-step into the future using a bank of echo state networks (ESN) in an online fashion, and then extracting the raw estimates from the prediction error and smoothing these estimates in two adaptive filtering stages. The adaptive nature of the algorithm enables the accurate retrieval of both slowly and rapidly varying driving-forces, as illustrated by simulations. Logistic and Moran-Ricker maps are studied in controlled experiments, exemplifying chaotic state and stochastic measurement models. The algorithm is also applied to the estimation of a driving-force from another nonlinear dynamic system that is stochastic in both state and measurement equations. The results are judged by the posterior Cramer-Rao lower bounds. The method is finally put to the test on a real-world application: extracting the Sun’s magnetic flux from the sunspot time series.
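
    A stripped-down sketch of the blind prediction-error idea follows, with a windowed ridge-regression one-step predictor standing in for the bank of echo state networks and a simple moving average standing in for the two adaptive filtering stages; all numerical choices are illustrative. The prediction error of a driven logistic map is smoothed into a crude proxy for the slowly varying driving-force.

      import numpy as np

      rng = np.random.default_rng(0)
      T, lag, window = 2000, 3, 200
      drive = 3.6 + 0.3 * np.sin(2 * np.pi * np.arange(T) / 500)   # slow driving-force
      x = np.empty(T)
      x[0] = 0.5
      for t in range(T - 1):                                       # driven logistic map
          x[t + 1] = drive[t] * x[t] * (1.0 - x[t])

      # one-step ridge predictor on a delay embedding, refit over a sliding window
      pred_err = np.zeros(T)
      for t in range(window + lag, T - 1):
          rows = list(range(t - window, t))
          X = np.array([x[r - lag:r] for r in rows])
          y = x[rows]
          w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(lag), X.T @ y)
          pred_err[t + 1] = x[t + 1] - x[t + 1 - lag:t + 1] @ w

      # smooth the raw prediction error into a crude proxy for the drive
      proxy = np.convolve(np.abs(pred_err), np.ones(100) / 100.0, mode="same")
      print("correlation with true drive:",
            round(float(np.corrcoef(proxy[400:-100], drive[400:-100])[0, 1]), 2))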

  12. Ten questions you should consider before submitting an article to a scientific journal.

    PubMed

    Falcó-Pegueroles, A; Rodríguez-Martín, D

    Investigating involves not only knowing the research methods and designs but also knowing the strategies for disseminating and publishing the results in scientific journals. An investigation is considered complete when it is published and is disclosed to the scientific community. The publication of a manuscript is not simple, since it involves examination through a rigorous editorial evaluation process to ensure the scientific quality of the proposal. The objective of this article is to communicate to potential authors the main errors or deficiencies that typically and routinely explain the decision by the referees of scientific journals not to accept a scientific article. Based on the experience of the authors as referees of national and international journals in the field of nursing and health sciences, we have identified a total of 10 types or groups, which cover formulation errors, inconsistencies between different parts of the text, lack of structuring, imprecise language, information gaps, and the detection of relevant inaccuracies. The identification and analysis of these issues enables their prevention, and is of great use to future researchers in the dissemination of the results of their work to the scientific community. In short, the best publishing strategy is one that ensures the scientific quality of the work and spares no effort in avoiding the errors or deficiencies that referees routinely detect in the articles they evaluate. Copyright © 2017 Sociedad Española de Enfermería Intensiva y Unidades Coronarias (SEEIUC). Published by Elsevier España, S.L.U. All rights reserved.

  13. (U) An Analytic Study of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-16

    We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta “areal mass functions” at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic (“true”) accumulated ejecta mass at the sensor and the measured (“inferred”) value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.

  14. Explicit error bounds for the α-quasi-periodic Helmholtz problem.

    PubMed

    Lord, Natacha H; Mulholland, Anthony J

    2013-10-01

    This paper considers a finite element approach to modeling electromagnetic waves in a periodic diffraction grating. In particular, an a priori error estimate associated with the α-quasi-periodic transformation is derived. This involves the solution of the associated Helmholtz problem being written as a product of e^(iαx) and an unknown function called the α-quasi-periodic solution. To begin with, the well-posedness of the continuous problem is examined using a variational formulation. The problem is then discretized, and a rigorous a priori error estimate, which guarantees the uniqueness of this approximate solution, is derived. In previous studies, the continuity of the Dirichlet-to-Neumann map has simply been assumed and the dependency of the regularity constant on the system parameters, such as the wavenumber, has not been shown. To address this deficiency, in this paper an explicit dependence on the wavenumber and the degree of the polynomial basis in the a priori error estimate is obtained. Since the finite element method is well known for dealing with any geometries, comparison of numerical results obtained using the α-quasi-periodic transformation with a lattice sum technique is then presented.

  15. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  16. Towards a rigorous framework for studying 2-player continuous games.

    PubMed

    Shutters, Shade T

    2013-03-21

    The use of 2-player strategic games is one of the most common frameworks for studying the evolution of economic and social behavior. Games are typically played between two players, each given two choices that lie at the extremes of possible behavior (e.g. completely cooperate or completely defect). Recently there has been much interest in studying the outcome of games in which players may choose a strategy from the continuous interval between extremes, requiring that the set of two possible choices be replaced by a single continuous equation. This has led to confusion and even errors in the classification of the game being played. The issue is described here specifically in relation to the continuous prisoner's dilemma and the continuous snowdrift game. A case study is then presented demonstrating the misclassification that can result from the extension of discrete games into continuous space. The paper ends with a call for a more rigorous and clear framework for working with continuous games. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Diffraction-based overlay measurement on dedicated mark using rigorous modeling method

    NASA Astrophysics Data System (ADS)

    Lu, Hailiang; Wang, Fan; Zhang, Qingyun; Chen, Yonghui; Zhou, Chang

    2012-03-01

    Diffraction Based Overlay (DBO) has been widely evaluated by numerous authors, and results show that DBO can provide better performance than Imaging Based Overlay (IBO). However, DBO has its own problems. As is well known, modeling-based DBO (mDBO) faces challenges of low measurement sensitivity and crosstalk between various structure parameters, which may result in poor accuracy and precision. Meanwhile, the main obstacle encountered by empirical DBO (eDBO) is that a few pads must be employed to gain sufficient information on overlay-induced diffraction signature variations, which consumes more wafer space and costs more measuring time. Also, eDBO may suffer from mark profile asymmetry caused by processes. In this paper, we propose an alternative DBO technology that employs a dedicated overlay mark and takes a rigorous modeling approach. This technology needs only two or three pads for each direction, which is economical and time saving. While reducing the overlay measurement error induced by mark profile asymmetry, this technology is expected to be as accurate and precise as scatterometry technologies.

  18. Robust source and mask optimization compensating for mask topography effects in computational lithography.

    PubMed

    Li, Jia; Lam, Edmund Y

    2014-04-21

    Mask topography effects need to be taken into consideration for a more accurate solution of source mask optimization (SMO) in advanced optical lithography. However, rigorous 3D mask models generally involve intensive computation and conventional SMO fails to manipulate the mask-induced undesired phase errors that degrade the usable depth of focus (uDOF) and process yield. In this work, an optimization approach incorporating pupil wavefront aberrations into SMO procedure is developed as an alternative to maximize the uDOF. We first design the pupil wavefront function by adding primary and secondary spherical aberrations through the coefficients of the Zernike polynomials, and then apply the conjugate gradient method to achieve an optimal source-mask pair under the condition of aberrated pupil. We also use a statistical model to determine the Zernike coefficients for the phase control and adjustment. Rigorous simulations of thick masks show that this approach provides compensation for mask topography effects by improving the pattern fidelity and increasing uDOF.

  19. Response to Ridgeway, Dunston, and Qian: On Methodological Rigor: Has Rigor Mortis Set In?

    ERIC Educational Resources Information Center

    Baldwin, R. Scott; Vaughn, Sharon

    1993-01-01

    Responds to an article in the same issue of the journal presenting a meta-analysis of reading research. Expresses concern that the authors' conclusions will promote a slavish adherence to a methodology and a rigidity of thought that reading researchers can ill afford. (RS)

  20. An Assessment of Cost Improvements in the NASA COTS - CRS Program and Implications for Future NASA Missions

    NASA Technical Reports Server (NTRS)

    Zapata, Edgar

    2017-01-01

    This review brings rigorous life cycle cost (LCC) analysis into discussions about COTS program costs. We gather publicly available cost data, review the data for credibility, check for consistency among sources, and rigorously define and analyze specific cost metrics.

  1. Systemic Planning: An Annotated Bibliography and Literature Guide. Exchange Bibliography No. 91.

    ERIC Educational Resources Information Center

    Catanese, Anthony James

    Systemic planning is an operational approach to using scientific rigor and qualitative judgment in a complementary manner. It integrates rigorous techniques and methods from systems analysis, cybernetics, decision theory, and work programing. The annotated reference sources in this bibliography include those works that have been most influential…

  2. Measurement uncertainty analysis techniques applied to PV performance measurements

    NASA Astrophysics Data System (ADS)

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; and defines valid data as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
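
    A minimal, hedged illustration of the root-sum-square propagation underlying such an analysis (not taken from the presentation itself; all numbers are invented) treats PV power as the product of measured voltage and current and combines random and systematic components for each input.

      import math

      def combined_standard_uncertainty(random_u, systematic_u):
          """Combine random and systematic standard uncertainties in quadrature."""
          return math.sqrt(random_u ** 2 + systematic_u ** 2)

      # illustrative PV module operating point and input uncertainties
      V, I = 35.2, 8.10                                  # volts, amperes
      u_V = combined_standard_uncertainty(0.05, 0.10)
      u_I = combined_standard_uncertainty(0.02, 0.04)

      P = V * I
      # sensitivity coefficients for P = V * I are dP/dV = I and dP/dI = V
      u_P = math.sqrt((I * u_V) ** 2 + (V * u_I) ** 2)
      print(f"P = {P:.1f} W +/- {2.0 * u_P:.1f} W (coverage factor k = 2)")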

  3. Validation of prostate-specific antigen laboratory values recorded in Surveillance, Epidemiology, and End Results registries.

    PubMed

    Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C

    2017-02-15

    Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
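
    A simplified sketch of the kind of consistency check described above is given below; the function and its thresholds are hypothetical, not SEER's actual editing software. It flags consolidated values that are ten times the text-documented value, the signature of an implied-decimal-point error, and bins the remaining discrepancies into the other categories used in the study.

      def classify_psa(recorded, documented, tol=0.05):
          """Compare a registry-recorded PSA value against the text documentation."""
          if documented is None:
              return "unknown"
          if abs(recorded - documented) <= tol:
              return "correct"
          if documented > 0 and abs(recorded - 10.0 * documented) <= tol:
              return "implied decimal point error"
          if abs(recorded - documented) < 1.0:
              return "nonsignificant difference (<1 ng/mL)"
          return "abstraction or coding error"

      cases = [(4.5, 4.5), (45.0, 4.5), (6.3, 6.1), (12.0, 3.2), (7.8, None)]
      for recorded, documented in cases:
          print(recorded, documented, "->", classify_psa(recorded, documented))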

  4. Data entry errors and design for model-based tight glycemic control in critical care.

    PubMed

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Tight glycemic control (TGC) has shown benefits but has been difficult to achieve consistently. Model-based methods and computerized protocols offer the opportunity to improve TGC quality but require human data entry, particularly of blood glucose (BG) values, which can be significantly prone to error. This study presents the design and optimization of data entry methods to minimize error for a computerized and model-based TGC method prior to pilot clinical trials. To minimize data entry error, two tests were carried out to optimize a method with errors less than the 5%-plus reported in other studies. Four initial methods were tested on 40 subjects in random order, and the best two were tested more rigorously on 34 subjects. The tests measured entry speed and accuracy. Errors were reported as corrected and uncorrected errors, with the sum comprising a total error rate. The first set of tests used randomly selected values, while the second set used the same values for all subjects to allow comparisons across users and direct assessment of the magnitude of errors. These research tests were approved by the University of Canterbury Ethics Committee. The final data entry method tested reduced errors to less than 1-2%, a 60-80% reduction from reported values. The magnitude of errors was clinically significant and was typically by 10.0 mmol/liter or an order of magnitude but only for extreme values of BG < 2.0 mmol/liter or BG > 15.0-20.0 mmol/liter, both of which could be easily corrected with automated checking of extreme values for safety. The data entry method selected significantly reduced data entry errors in the limited design tests presented, and is in use on a clinical pilot TGC study. The overall approach and testing methods are easily performed and generalizable to other applications and protocols. © 2012 Diabetes Technology Society.

  5. High and low rigor temperature effects on sheep meat tenderness and ageing.

    PubMed

    Devine, Carrick E; Payne, Steven R; Peachey, Bridget M; Lowe, Timothy E; Ingram, John R; Cook, Christian J

    2002-02-01

    Immediately after electrical stimulation, the paired m. longissimus thoracis et lumborum (LT) of 40 sheep were boned out and wrapped tightly with a polyethylene cling film. One of the paired LTs was chilled in 15°C air to reach a rigor mortis (rigor) temperature of 18°C and the other side was placed in a water bath at 35°C and achieved rigor at this temperature. Wrapping reduced rigor shortening and mimicked meat left on the carcass. After rigor, the meat was aged at 15°C for 0, 8, 26 and 72 h and then frozen. The frozen meat was cooked to 75°C in an 85°C water bath and shear force values obtained from a 1×1 cm cross-section. The shear force values of meat for 18 and 35°C rigor were similar at zero ageing, but as ageing progressed, the 18°C rigor meat aged faster and became more tender than meat that went into rigor at 35°C (P<0.001). The mean sarcomere length values of meat samples for 18 and 35°C rigor at each ageing time were significantly different (P<0.001), the samples at 35°C being shorter. When the short sarcomere length values and corresponding shear force values were removed for further data analysis, the shear force values for the 35°C rigor were still significantly greater. Thus the toughness of 35°C meat was not a consequence of muscle shortening and appears to be due to both a faster rate of tenderisation and the meat tenderising to a greater extent at the lower temperature. The cook loss at 35°C rigor (30.5%) was greater than that at 18°C rigor (28.4%) (P<0.01) and the colour Hunter L values were higher at 35°C (P<0.01) compared with 18°C, but there were no significant differences in Hunter a or b values.

  6. Verification of Compartmental Epidemiological Models using Metamorphic Testing, Model Checking and Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Arvind; Steed, Chad A; Pullum, Laura L

    Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and understand how one can best control the disease in case an outbreak of a widespread epidemic occurs. However, a significant challenge within the community is in the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmental epidemiological models under several common modeling scenarios including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool and model checking, we build a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.
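
    The metamorphic-testing step can be conveyed with a small sketch (this is not the authors' workflow, which also uses a visualization front-end and model checking): a basic SIR compartmental model is implemented, and two metamorphic relations are checked, conservation of the total population when birth and death rates are zero, and invariance of the compartment fractions when all compartments are scaled by the same factor.

      import numpy as np

      def sir(s0, i0, r0, beta, gamma, steps=500, dt=0.1):
          """Forward-Euler SIR model with frequency-dependent transmission."""
          s, i, r = float(s0), float(i0), float(r0)
          out = [(s, i, r)]
          for _ in range(steps):
              n = s + i + r
              new_inf = beta * s * i / n * dt
              new_rec = gamma * i * dt
              s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
              out.append((s, i, r))
          return np.array(out)

      base = sir(990, 10, 0, beta=0.4, gamma=0.1)

      # MR1: with no births or deaths, total population must be conserved
      assert np.allclose(base.sum(axis=1), 1000.0)

      # MR2: scaling every compartment by the same factor must leave fractions unchanged
      scaled = sir(9900, 100, 0, beta=0.4, gamma=0.1)
      assert np.allclose(scaled / scaled.sum(axis=1, keepdims=True),
                         base / base.sum(axis=1, keepdims=True))

      print("both metamorphic relations hold for this implementation")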

  7. Evaluation of a moderate resolution, satellite-based impervious surface map using an independent, high-resolution validation data set

    USGS Publications Warehouse

    Jones, J.W.; Jarnagin, T.

    2009-01-01

    Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high-quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite-predicted area - "reference area") and relative error [(satellite-predicted area - "reference area") / "reference area"] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
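
    The two error measures used in the assessment reduce to one line each, as the short snippet below shows for a handful of invented sample regions (units and numbers are illustrative only).

      import numpy as np

      predicted = np.array([12.4, 30.1, 55.0, 3.2])   # satellite-predicted ISA per region (ha)
      reference = np.array([13.0, 31.8, 57.9, 3.0])   # high-resolution "reference" ISA (ha)

      absolute_error = predicted - reference
      relative_error = (predicted - reference) / reference

      print("mean absolute error:", absolute_error.mean())
      print("mean relative error (%):", 100 * relative_error.mean())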

  8. Well-Tempered Metadynamics: A Smoothly Converging and Tunable Free-Energy Method

    NASA Astrophysics Data System (ADS)

    Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele

    2008-01-01

    We present a method for determining the free-energy dependence on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of an alanine dipeptide free-energy landscape.

  9. Well-tempered metadynamics: a smoothly converging and tunable free-energy method.

    PubMed

    Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele

    2008-01-18

    We present a method for determining the free-energy dependence on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of an alanine dipeptide free-energy landscape.
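
    The tunable bias referred to in both records can be sketched in a few lines: in well-tempered metadynamics the height of each deposited Gaussian is scaled by exp(-V(s)/ΔT), with ΔT the bias temperature in energy units, so the bias converges smoothly rather than growing without bound. The sketch applies that update to overdamped Langevin dynamics in a one-dimensional double well; every numerical parameter is illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      kT, dT = 1.0, 5.0                 # thermal energy and bias temperature (energy units)
      w0, sigma, stride = 0.1, 0.2, 25  # initial hill height, hill width, deposition stride
      dt, gamma = 1e-3, 1.0

      centers, heights = [], []

      def bias(s):
          if not centers:
              return 0.0
          c, h = np.array(centers), np.array(heights)
          return float(np.sum(h * np.exp(-0.5 * ((s - c) / sigma) ** 2)))

      def bias_force(s):                # -d(bias)/ds
          if not centers:
              return 0.0
          c, h = np.array(centers), np.array(heights)
          return float(np.sum(h * (s - c) / sigma**2 * np.exp(-0.5 * ((s - c) / sigma) ** 2)))

      force = lambda s: -4.0 * s * (s**2 - 1.0)   # double-well potential U = (s^2 - 1)^2

      s = -1.0
      for step in range(50000):
          f = force(s) + bias_force(s)
          s += f * dt / gamma + np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()
          if step % stride == 0:
              # well-tempered rule: new hill height decays with the bias already present
              heights.append(w0 * np.exp(-bias(s) / dT))
              centers.append(s)

      print(f"deposited {len(centers)} hills; latest hill height = {heights[-1]:.4f}")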

  10. Supervised learning of probability distributions by neural networks

    NASA Technical Reports Server (NTRS)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
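
    A minimal worked example of the modification described above, ascending the gradient of the log-likelihood of a probabilistic (sigmoid) output rather than descending the squared-error gradient, is given below for a single output neuron; the synthetic data and step size are arbitrary choices, not from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 3))                         # input patterns
      t = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)    # binary targets

      w = np.zeros(3)
      lr = 0.1
      for _ in range(500):
          p = 1.0 / (1.0 + np.exp(-X @ w))    # output interpreted as a probability
          # gradient of the log-likelihood sum(t*log p + (1-t)*log(1-p)) w.r.t. w
          w += lr * (X.T @ (t - p)) / len(t)

      p = 1.0 / (1.0 + np.exp(-X @ w))
      log_likelihood = np.sum(t * np.log(p) + (1.0 - t) * np.log(1.0 - p))
      print("weights:", np.round(w, 2), " log-likelihood:", round(float(log_likelihood), 2))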

  11. Systematic Error Mitigation for the PIXIE Instrument

    NASA Technical Reports Server (NTRS)

    Kogut, Alan; Fixsen, Dale J.; Nagler, Peter; Tucker, Gregory

    2016-01-01

    The Primordial Inflation Explorer (PIXIE) uses a nulling Fourier Transform Spectrometer to measure the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds. PIXIE will search for the signature of primordial inflation and will characterize distortions from a blackbody spectrum, both to precision of a few parts per billion. Rigorous control of potential instrumental effects is required to take advantage of the raw sensitivity. PIXIE employs a highly symmetric design using multiple differential nulling to reduce the instrumental signature to negligible levels. We discuss the systematic error budget and mitigation strategies for the PIXIE mission.

  12. Space radiator simulation system analysis

    NASA Technical Reports Server (NTRS)

    Black, W. Z.; Wulff, W.

    1972-01-01

    A transient heat transfer analysis was carried out on a space radiator heat rejection system exposed to an arbitrarily prescribed combination of aerodynamic heating, solar, albedo, and planetary radiation. A rigorous analysis was carried out for the radiation panel and tubes lying in one plane and an approximate analysis was used to extend the rigorous analysis to the case of a curved panel. The analysis permits the consideration of both gaseous and liquid coolant fluids, including liquid metals, under prescribed, time dependent inlet conditions. The analysis provided a method for predicting: (1) transient and steady-state, two dimensional temperature profiles, (2) local and total heat rejection rates, (3) coolant flow pressure in the flow channel, and (4) total system weight and protection layer thickness.

  13. The investigation of Martian dune fields using very high resolution photogrammetric measurements and time series analysis

    NASA Astrophysics Data System (ADS)

    Kim, J.; Park, M.; Baik, H. S.; Choi, Y.

    2016-12-01

    At the present time, arguments continue regarding the migration speeds of Martian dune fields and their correlation with atmospheric circulation. However, precisely measuring the spatial translation of Martian dunes has been conducted only a very few times. Therefore, we developed a generic procedure to precisely measure the migration of dune fields with recently introduced 25-cm resolution High Resolution Imaging Science Experiment (HIRISE) images, employing a high-accuracy photogrammetric processor and sub-pixel image correlator. The processor was designed to trace estimated dune migration, albeit slight, over the Martian surface by 1) the introduction of very high resolution ortho images and stereo analysis based on hierarchical geodetic control for better initial point settings; 2) positioning error removal throughout the sensor model refinement with a non-rigorous bundle block adjustment, which makes possible the co-alignment of all images in a time series; and 3) improved sub-pixel co-registration algorithms using optical flow with a refinement stage conducted on a pyramidal grid processor and a blunder classifier. Moreover, volumetric changes of Martian dunes were additionally traced by means of stereo analysis and photoclinometry. The established algorithms have been tested using high-resolution HIRISE images over a large number of Martian dune fields covering the whole Mars Global Dune Database. Migration over well-known crater dune fields appeared to be almost static over considerable temporal periods and was weakly correlated with wind directions estimated by the Mars Climate Database (Millour et al. 2015). Only over a few Martian dune fields, such as Kaiser crater, have meaningful migration speeds (>1 m/year) compared to the photogrammetric error residual been measured. Currently, a technically improved processor that compensates for error residuals using time-series observations is under development and is expected to produce long-term migration speeds over Martian dune fields where regular HIRISE image acquisitions are available. ACKNOWLEDGEMENTS: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement Nr. 607379.
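
    A simplified stand-in for the sub-pixel co-registration stage is sketched below, using phase correlation with quadratic interpolation at the correlation peak rather than the pyramidal optical-flow correlator described in the abstract; the synthetic test image and displacement are invented. It shows how displacements well below one pixel can be recovered from a pair of co-aligned patches.

      import numpy as np

      def subpixel_shift(a, b):
          """Estimate the (row, col) translation of image b relative to image a
          by phase correlation, refined with quadratic interpolation at the peak."""
          F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
          corr = np.real(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
          peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))

          shift = []
          for axis in range(2):
              n = corr.shape[axis]
              def neighbour(k):
                  q = peak.copy()
                  q[axis] = (q[axis] + k) % n
                  return corr[tuple(q)]
              cm, c0, cp = neighbour(-1), neighbour(0), neighbour(1)
              denom = cm - 2.0 * c0 + cp
              frac = 0.0 if denom == 0.0 else 0.5 * (cm - cp) / denom
              s = peak[axis] + frac
              shift.append(s - n if s > n / 2 else s)   # wrap to a signed displacement
          return tuple(shift)

      # synthetic check: a smooth blob displaced by a known sub-pixel amount
      y, x = np.mgrid[0:128, 0:128]
      blob = lambda dy, dx: np.exp(-((y - 64.0 - dy) ** 2 + (x - 64.0 - dx) ** 2) / 200.0)
      print(subpixel_shift(blob(0.0, 0.0), blob(0.6, -1.3)))   # expect roughly (0.6, -1.3)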

  14. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsoulakis, Markos

    2014-08-09

    Our two key accomplishments in the first three years were towards the development of (1) a mathematically rigorous and at the same time computationally flexible framework for parallelization of Kinetic Monte Carlo methods, and its implementation on GPUs, and (2) spatial multilevel coarse-graining methods for Monte Carlo sampling and molecular simulation. A common underlying theme in both these lines of our work is the development of numerical methods which are at the same time both computationally efficient and reliable, the latter in the sense that they provide controlled-error approximations for coarse observables of the simulated molecular systems. Finally, our key accomplishment in the last year of the grant is that we started developing (3) pathwise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics and in particular of nonequilibrium extended (high-dimensional) systems. We discuss these three research directions in some detail below, along with the related publications.

  15. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.

  16. Foundations for Measuring Volume Rendering Quality

    NASA Technical Reports Server (NTRS)

    Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters that need to be specified to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics to quantify and classify the difference between two volume rendered images that will support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. If one has a benchmark image, for example one created by a high accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.

  17. A Database of Woody Vegetation Responses to Elevated Atmospheric CO2 (NDP-072)

    DOE Data Explorer

    Curtis, Peter S [The Ohio State Univ., Columbus, OH (United States); Cushman, Robert M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brenkert, Antoinette L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    1999-01-01

    To perform a statistically rigorous meta-analysis of research results on the response by woody vegetation to increased atmospheric CO2 levels, a multiparameter database of responses was compiled. Eighty-four independent CO2-enrichment studies, covering 65 species and 35 response parameters, met the necessary criteria for inclusion in the database: reporting mean response, sample size, and variance of the response (either as standard deviation or standard error). Data were retrieved from the published literature and unpublished reports. This numeric data package contains a 29-field data set of CO2-exposure experiment responses by woody plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO2-exposure experiments and specific comments relevant to the data in the data set, and this documentation file (which includes SAS and Fortran codes to read the ASCII data file; SAS is a registered trademark of the SAS Institute, Inc., Cary, North Carolina 27511).

  18. Applying formal methods and object-oriented analysis to existing flight software

    NASA Technical Reports Server (NTRS)

    Cheng, Betty H. C.; Auernheimer, Brent

    1993-01-01

    Correctness is paramount for safety-critical software control systems. Critical software failures in medical radiation treatment, communications, and defense are familiar to the public. The significant quantity of software malfunctions regularly reported to the software engineering community, the laws concerning liability, and a recent NRC Aeronautics and Space Engineering Board report additionally motivate the use of error-reducing and defect detection software development techniques. The benefits of formal methods in requirements driven software development ('forward engineering') is well documented. One advantage of rigorously engineering software is that formal notations are precise, verifiable, and facilitate automated processing. This paper describes the application of formal methods to reverse engineering, where formal specifications are developed for a portion of the shuttle on-orbit digital autopilot (DAP). Three objectives of the project were to: demonstrate the use of formal methods on a shuttle application, facilitate the incorporation and validation of new requirements for the system, and verify the safety-critical properties to be exhibited by the software.

  19. Contour metrology using critical dimension atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Orji, Ndubuisi G.; Dixson, Ronald G.; Vladár, András E.; Ming, Bin; Postek, Michael T.

    2012-03-01

    The critical dimension atomic force microscope (CD-AFM), which is used as a reference instrument in lithography metrology, has been proposed as a complementary instrument for contour measurement and verification. Although data from CD-AFM are inherently three-dimensional, the planar two-dimensional data required for contour metrology are not easily extracted from the top-down CD-AFM data. This is largely due to the limitations of the CD-AFM method for controlling the tip position and scanning. We describe scanning techniques and profile extraction methods to obtain contours from CD-AFM data. We also describe how we validated our technique, and explain some of its limitations. Potential sources of error for this approach are described, and a rigorous uncertainty model is presented. Our objective is to show which data acquisition and analysis methods could yield optimum contour information while preserving some of the strengths of CD-AFM metrology. We present a comparison of contours extracted using our technique to those obtained from the scanning electron microscope (SEM) and the helium ion microscope (HIM).

  20. Robust output feedback stabilization for a flexible marine riser system.

    PubMed

    Zhao, Zhijia; Liu, Yu; Guo, Fang

    2017-12-06

    The aim of this paper is to develop a boundary control for vibration reduction of a flexible marine riser system in the presence of parametric uncertainties and inaccurately measured system states. To this end, an adaptive output feedback boundary control is proposed to suppress the riser's vibration by fusing observer-based backstepping, high-gain observers and robust adaptive control theory. In addition, parameter adaptive laws are designed to compensate for the system parametric uncertainties, and a disturbance observer is introduced to mitigate the effects of external environmental disturbance. The uniformly bounded stability of the closed-loop system is achieved through rigorous Lyapunov analysis without any discretisation or simplification of the dynamics in time and space, and the state observer error is shown to converge exponentially to zero as time tends to infinity. Finally, simulation and comparison studies are carried out to illustrate the performance of the proposed control under a proper choice of the design parameters. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Uncertainty Propagation in OMFIT

    NASA Astrophysics Data System (ADS)

    Smith, Sterling; Meneghini, Orso; Sung, Choongki

    2017-10-01

    A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
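
    The fit-then-propagate workflow described here can be sketched with the python uncertainties package roughly as follows; the modified-tanh form, the synthetic data, and the variable names are illustrative assumptions and do not reproduce the actual OMFIT/ONETWO/TGLF implementation.

        import numpy as np
        from scipy.optimize import curve_fit
        from uncertainties import correlated_values, umath

        def mtanh(x, a, b, x0, w):
            """Illustrative modified-tanh profile shape (not the OMFIT parameterization)."""
            return a * np.tanh((x0 - x) / w) + b

        # Synthetic "measured" profile with random errors (placeholder data).
        rng = np.random.default_rng(0)
        x = np.linspace(0.80, 1.00, 40)
        y = mtanh(x, 1.0, 1.2, 0.95, 0.03) + rng.normal(0.0, 0.02, x.size)

        # Fit, then turn the best-fit parameters and their covariance matrix into
        # correlated uncertain numbers.
        popt, pcov = curve_fit(mtanh, x, y, p0=[1.0, 1.0, 0.95, 0.05])
        a, b, x0, w = correlated_values(popt, pcov)

        # Propagate the covariant parameter uncertainties to the profile value and
        # its derivative at one location.
        xe = 0.96
        value = a * umath.tanh((x0 - xe) / w) + b
        gradient = -a / (w * umath.cosh((x0 - xe) / w) ** 2)
        print(value, gradient)   # each prints as nominal value +/- propagated standard deviation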

  2. Bayesian adaptive bandit-based designs using the Gittins index for multi-armed trials with normally distributed endpoints.

    PubMed

    Smith, Adam L; Villar, Sofía S

    2018-01-01

    Adaptive designs for multi-armed clinical trials have become increasingly popular recently because of their potential to shorten development times and to increase patient response. However, developing response-adaptive designs that offer patient-benefit while ensuring the resulting trial provides a statistically rigorous and unbiased comparison of the different treatments included is highly challenging. In this paper, the theory of Multi-Armed Bandit Problems is used to define near-optimal adaptive designs in the context of a clinical trial with a normally distributed endpoint with known variance. We report the operating characteristics (type I error, power, bias) and patient-benefit of these approaches and alternative designs using simulation studies based on an ongoing trial. These results are then compared to those recently published in the context of Bernoulli endpoints. Many limitations and advantages are similar in both cases but there are also important differences, especially with respect to type I error control. This paper proposes a simulation-based testing procedure to correct for the observed type I error inflation that bandit-based and adaptive rules can induce.
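
    A simulation-based check of type I error of the kind proposed here can be sketched as follows: simulate many trials under the null hypothesis of no treatment difference, apply an allocation-plus-test procedure, and report the fraction of trials that falsely reject. The allocation rule shown is a naive round-robin stand-in, not the Gittins-index design, and all numerical settings are placeholders.

        import numpy as np

        def simulate_null_rejection_rate(n_trials=2000, n_arms=3, n_patients=120,
                                         sigma=1.0, z_crit=1.96, seed=1):
            """Estimate type I error by Monte Carlo under the null (all arm means equal)."""
            rng = np.random.default_rng(seed)
            rejections = 0
            for _ in range(n_trials):
                # Null hypothesis: every arm has the same mean response (here 0).
                data = [[] for _ in range(n_arms)]
                for i in range(n_patients):
                    arm = i % n_arms                      # placeholder allocation rule
                    data[arm].append(rng.normal(0.0, sigma))
                # Compare each experimental arm with arm 0 via a z-test (known variance).
                control = np.array(data[0])
                for k in range(1, n_arms):
                    trt = np.array(data[k])
                    se = sigma * np.sqrt(1 / len(trt) + 1 / len(control))
                    if abs((trt.mean() - control.mean()) / se) > z_crit:
                        rejections += 1
                        break                             # family-wise: any false rejection counts
            return rejections / n_trials

        print(simulate_null_rejection_rate())   # compare against the nominal level, e.g. 0.05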

  3. Reflective properties of randomly rough surfaces under large incidence angles.

    PubMed

    Qiu, J; Zhang, W J; Liu, L H; Hsu, P-f; Liu, L J

    2014-06-01

    The reflective properties of randomly rough surfaces at large incidence angles have attracted attention because of their potential applications in several areas of radiative heat transfer research. The main purpose of this work is to investigate the formation mechanism of the specular reflection peak of rough surfaces at large incidence angles. The bidirectional reflectance distribution function (BRDF) of rough aluminum surfaces with different roughnesses at different incident angles is measured by a three-axis automated scatterometer. This study used a validated and accurate computational model, the rigorous coupled-wave analysis (RCWA) method, to compare against and analyze the measured BRDF results. The RCWA results show the same trend in the specular peak as the measurements. This paper mainly focuses on relative roughness in the range 0.16 < σ/λ < 5.35. As the relative roughness decreases, the specular peak enhancement dramatically increases and the scattering region significantly shrinks, especially at large incidence angles. The RCWA and Rayleigh criterion results have been compared, showing that the relative error of the total integrated scatter increases as the surface roughness increases at large incidence angles. In addition, the zero-order diffracted power calculated by RCWA and the reflectance calculated from the Fresnel equations are compared. The comparison shows that the relative error declines sharply when the incidence angle is large and the roughness is small.
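
    For context, the standard smooth-surface (Rayleigh/Davies) approximation against which rigorous RCWA results are commonly compared relates specular reflectance and total integrated scatter (TIS) to the RMS roughness σ and incidence angle θ_i as shown below; this is the textbook expression, not a result reproduced from the paper.

        % Specular reflectance reduction and TIS for a slightly rough surface:
        \[
          R_{\mathrm{spec}} = R_0 \exp\!\left[-\left(\frac{4\pi\sigma\cos\theta_i}{\lambda}\right)^{2}\right],
          \qquad
          \mathrm{TIS} \approx 1 - \frac{R_{\mathrm{spec}}}{R_0}
                       = 1 - \exp\!\left[-\left(\frac{4\pi\sigma\cos\theta_i}{\lambda}\right)^{2}\right].
        \]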

  4. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike in most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, the emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tends to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing the precision and accuracy of ecorestoration field data are the same as for laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.
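
    A minimal sketch of how revisit and expert data might be scored for precision (original crew vs. revisit crew) and accuracy (crew vs. expert "standard") for a continuous field variable is shown below; the variable names and the choice of bias and RMSE as the measures are assumptions for illustration, not the paper's procedure.

        import numpy as np

        def revisit_precision_accuracy(crew, revisit, expert):
            """Score plot-level field data quality from revisit and expert measurements.

            crew, revisit, expert : 1-D arrays of the same continuous variable measured
            on the same plots by the original crew, a revisit crew, and an expert.
            """
            crew, revisit, expert = map(np.asarray, (crew, revisit, expert))
            precision_rmse = np.sqrt(np.mean((crew - revisit) ** 2))  # repeatability of the process
            bias = np.mean(crew - expert)                             # systematic departure from "truth"
            accuracy_rmse = np.sqrt(np.mean((crew - expert) ** 2))    # overall closeness to expert values
            return {"precision_rmse": precision_rmse, "bias": bias, "accuracy_rmse": accuracy_rmse}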

  5. Towards rigorous analysis of the Levitov-Mirlin-Evers recursion

    NASA Astrophysics Data System (ADS)

    Fyodorov, Y. V.; Kupiainen, A.; Webb, C.

    2016-12-01

    This paper aims to develop a rigorous asymptotic analysis of an approximate renormalization group recursion for inverse participation ratios P_q of critical power-law random band matrices. The recursion goes back to the work by Mirlin and Evers (2000 Phys. Rev. B 62 7920) and earlier works by Levitov (1990 Phys. Rev. Lett. 64 547, 1999 Ann. Phys. 8 697-706) and aims to describe the ensuing multifractality of the eigenvectors of such matrices. We point out both similarities and dissimilarities between the LME recursion and those appearing in the theory of multiplicative cascades and branching random walks and show that the methods developed in those fields can be adapted to the present case. In particular the LME recursion is shown to exhibit a phase transition, which we expect is a freezing transition, where the role of temperature is played by the exponent q. However, the LME recursion has features that make its rigorous analysis considerably harder and we point out several open problems for further study.

  6. A broadband variable-temperature test system for complex permittivity measurements of solid and powder materials

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Li, En; Zhang, Jing; Yu, Chengyong; Zheng, Hu; Guo, Gaofeng

    2018-02-01

    A microwave test system to measure the complex permittivity of solid and powder materials as a function of temperature has been developed. The system is based on a TM0n0 multi-mode cylindrical cavity with a slotting structure, which provides purer test modes compared to a traditional cavity. To ensure safety, effectiveness, and longevity, heating and testing are carried out separately, and the sample moves between the two functional areas through an Alundum tube. Induction heating and a pneumatic platform are employed to, respectively, shorten the heating and cooling time of the sample. The single trigger function of the vector network analyzer is added to the test software to suppress the drift of the resonance peak during testing. Complex permittivity is calculated by the rigorous field theoretical solution considering multilayer media loading. The variation of the cavity equivalent radius caused by the sample insertion holes is discussed in detail, and its influence on the test result is analyzed. The calibration method for the complex permittivity of the Alundum tube and quartz vial (for loading powder samples), which vary with temperature, is given. The feasibility of the system has been verified by measuring different samples in a wide range of relative permittivity and loss tangent, and variable-temperature test results of fused quartz and SiO2 powder up to 1500 °C are compared with published data. The results indicate that the presented system is reliable and accurate. The stability of the system is verified by repeated and long-term tests, and error analysis is presented to estimate the error incurred due to uncertainties in the different error sources.

  7. Large-scale compensation of errors in pairwise-additive empirical force fields: comparison of AMBER intermolecular terms with rigorous DFT-SAPT calculations.

    PubMed

    Zgarbová, Marie; Otyepka, Michal; Sponer, Jirí; Hobza, Pavel; Jurecka, Petr

    2010-09-21

    The intermolecular interaction energy components for several molecular complexes were calculated using force fields available in the AMBER suite of programs and compared with Density Functional Theory-Symmetry Adapted Perturbation Theory (DFT-SAPT) values. The extent to which such a comparison is meaningful is discussed. The comparability is shown to depend strongly on the intermolecular distance, which means that comparisons made at one distance only are of limited value. At large distances the Coulombic and van der Waals 1/r^6 empirical terms correspond fairly well with the DFT-SAPT electrostatics and dispersion terms, respectively. At the onset of electronic overlap the empirical values deviate from the reference values considerably. However, the errors in the force fields tend to cancel out in a systematic manner at equilibrium distances. Thus, the overall performance of the force fields displays errors an order of magnitude smaller than those of the individual interaction energy components. The repulsive 1/r^12 component of the van der Waals expression seems to be responsible for a significant part of the deviation of the force field results from the reference values. We suggest that further improvement of the force fields for intermolecular interactions would require replacement of the nonphysical 1/r^12 term by an exponential function. Dispersion anisotropy and its effects are discussed. Our analysis is intended to show that although comparing the empirical and non-empirical interaction energy components is in general problematic, it might bring insights useful for the construction of new force fields. Our results are relevant to often performed force-field-based interaction energy decompositions.
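
    For reference, the pairwise van der Waals form discussed here and the exponential-repulsion alternative the authors suggest are, in their standard textbook forms (not taken from the paper):

        % Lennard-Jones 12-6 form used in pairwise-additive force fields:
        \[
          E_{\mathrm{LJ}}(r) = \frac{A_{ij}}{r^{12}} - \frac{B_{ij}}{r^{6}},
        \]
        % Buckingham (exp-6) form with an exponential repulsive wall:
        \[
          E_{\mathrm{exp\text{-}6}}(r) = A_{ij}\,e^{-b_{ij} r} - \frac{C_{ij}}{r^{6}}.
        \]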

  8. Internal consistency tests for evaluation of measurements of anthropogenic hydrocarbons in the troposphere

    NASA Astrophysics Data System (ADS)

    Parrish, D. D.; Trainer, M.; Young, V.; Goldan, P. D.; Kuster, W. C.; Jobson, B. T.; Fehsenfeld, F. C.; Lonneman, W. A.; Zika, R. D.; Farmer, C. T.; Riemer, D. D.; Rodgers, M. O.

    1998-09-01

    Measurements of tropospheric nonmethane hydrocarbons (NMHCs) made in continental North America should exhibit a common pattern determined by photochemical removal and dilution acting upon the typical North American urban emissions. We analyze 11 data sets collected in the United States in the context of this hypothesis, in most cases by examining the geometric means and standard deviations of ratios of selected NMHCs. In the analysis we attribute deviations from the common pattern to plausible systematic and random experimental errors. In some cases the errors have been independently verified and the specific causes identified. Thus this common pattern provides a check for internal consistency in NMHC data sets. Specific tests are presented which should provide useful diagnostics for all data sets of anthropogenic NMHC measurements collected in the United States. Similar tests, based upon the perhaps different emission patterns of other regions, presumably could be developed. The specific tests include (1) a lower limit for ethane concentrations, (2) specific NMHCs that should be detected if any are, (3) the relatively constant mean ratios of the longer-lived NMHCs with similar atmospheric lifetimes, (4) the constant relative patterns of families of NMHCs, and (5) limits on the ambient variability of the NMHC ratios. Many experimental problems are identified in the literature and in the Southern Oxidant Study data sets. The most important conclusion of this paper is that a rigorous field intercomparison of simultaneous measurements of ambient NMHCs by different techniques and researchers is of crucial importance to the field of atmospheric chemistry. The tests presented here are suggestive of errors but are not definitive; only a field intercomparison can resolve the uncertainties.
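
    The core consistency statistic used in several of these tests, the geometric mean and geometric standard deviation of the ratio of two NMHC concentrations, can be sketched as follows; the inputs and the handling of non-positive values are placeholder assumptions.

        import numpy as np

        def geometric_ratio_stats(conc_a, conc_b):
            """Geometric mean and geometric standard deviation of the ratio A/B.

            conc_a, conc_b : concentration arrays for two NMHCs measured in the same
            samples (e.g., two long-lived species); non-positive values are excluded.
            """
            a, b = np.asarray(conc_a, float), np.asarray(conc_b, float)
            ok = (a > 0) & (b > 0)
            log_ratio = np.log(a[ok] / b[ok])
            gm = np.exp(log_ratio.mean())          # geometric mean ratio
            gsd = np.exp(log_ratio.std(ddof=1))    # geometric standard deviation (multiplicative spread)
            return gm, gsd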

  9. Rigorous evaluation of chemical measurement uncertainty: liquid chromatographic analysis methods using detector response factor calibration

    NASA Astrophysics Data System (ADS)

    Toman, Blaza; Nelson, Michael A.; Bedner, Mary

    2017-06-01

    Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated, accounting for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random effects meta-analysis yields similar results to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography mass spectrometric detection using isotope dilution (LC-IDMS).
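
    A minimal sketch of the GUM Supplement 1 (Monte Carlo) style of propagation for a simple response-factor measurement equation is shown below; the equation, input values, and standard uncertainties are invented for illustration and do not correspond to the LC-UV or LC-IDMS analyses in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000                            # number of Monte Carlo draws

        # Hypothetical inputs: nominal value and standard uncertainty (normal distributions assumed).
        c_cal  = rng.normal(10.00, 0.05, N)    # calibrant concentration, ug/mL
        A_samp = rng.normal(1.52e5, 1.2e3, N)  # sample peak area
        A_cal  = rng.normal(1.48e5, 1.1e3, N)  # calibrant peak area
        d_fact = rng.normal(2.000, 0.004, N)   # dilution factor

        # Measurement equation: single-point response-factor calibration.
        c_samp = c_cal * (A_samp / A_cal) * d_fact

        print("value          :", np.mean(c_samp))
        print("std uncertainty:", np.std(c_samp, ddof=1))
        print("95% interval   :", np.percentile(c_samp, [2.5, 97.5]))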

  10. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    NASA Astrophysics Data System (ADS)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets, and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system - for a single-camera analysis - was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE), and 0.07 to 0.19 mm for the dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.

  11. Optical proximity correction for anamorphic extreme ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas

    2017-10-01

    The change from isomorphic to anamorphic optics in high numerical aperture extreme ultraviolet scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking. OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs that are more tolerant to mask errors.

  12. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
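
    The role of the Fisher Information Matrix in these criteria can be sketched as below for a scalar-output model with iid Gaussian noise: D-optimal designs maximize det(FIM), E-optimal designs maximize its smallest eigenvalue, and an SE-type criterion minimizes a sum of squared (normalized) asymptotic standard errors taken from the inverse FIM. The logistic example, the noise level, and the exact form of the SE criterion are assumptions made for illustration, not the paper's definitions.

        import numpy as np

        def fim(times, grad_f, sigma=1.0):
            """Fisher Information Matrix for a scalar-output model with iid Gaussian noise.

            grad_f(t) returns the gradient of the model output with respect to the
            parameters, evaluated at the nominal parameter values.
            """
            return sum(np.outer(grad_f(t), grad_f(t)) for t in times) / sigma**2

        def design_criteria(F, theta):
            cov = np.linalg.inv(F)                         # asymptotic parameter covariance
            se = np.sqrt(np.diag(cov))                     # asymptotic standard errors
            return {
                "D": np.linalg.det(F),                     # D-optimal: maximize
                "E": np.min(np.linalg.eigvalsh(F)),        # E-optimal: maximize
                "SE": np.sum((se / np.abs(theta)) ** 2),   # SE-type criterion: minimize
            }

        # Example: logistic model x(t) = K / (1 + (K/x0 - 1) e^{-r t}), parameters (K, r).
        K, r, x0 = 17.5, 0.7, 0.1

        def grad_logistic(t):
            e = np.exp(-r * t)
            denom = 1.0 + (K / x0 - 1.0) * e
            dK = 1.0 / denom - K * (e / x0) / denom**2
            dr = K * (K / x0 - 1.0) * t * e / denom**2
            return np.array([dK, dr])

        F = fim(np.linspace(0.0, 25.0, 15), grad_logistic, sigma=0.5)
        print(design_criteria(F, np.array([K, r])))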

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
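
    The "random" and "systematic" decomposition mentioned here leads to the familiar rule that, when summing assays of n nominally identical items, random errors partially cancel while a shared systematic error does not. In its standard textbook form (not specific to the GUM or ASTM guides cited):

        % Total mass M = sum of n item assays, each with random standard deviation
        % sigma_r and a common (fully correlated) systematic standard deviation sigma_s:
        \[
          \sigma_M^2 \;=\; n\,\sigma_r^2 \;+\; n^2\,\sigma_s^2 .
        \]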

  14. Performance Evaluation of 14 Neural Network Architectures Used for Predicting Heat Transfer Characteristics of Engine Oils

    NASA Astrophysics Data System (ADS)

    Al-Ajmi, R. M.; Abou-Ziyan, H. Z.; Mahmoud, M. A.

    2012-01-01

    This paper reports the results of a comprehensive study aimed at identifying the best neural network architecture and parameters to predict subcooled boiling characteristics of engine oils. A total of 57 different neural networks (NNs) derived from 14 different NN architectures were evaluated for four different prediction cases. The NNs were trained on experimental datasets obtained for five engine oils of different chemical compositions. The performance of each NN was evaluated using a rigorous statistical analysis as well as careful examination of the smoothness of the predicted boiling curves. One NN, out of the 57 evaluated, correctly predicted the boiling curves for all cases considered, either for individual oils or for all oils taken together. It was found that the pattern selection and weight update techniques strongly affect the performance of the NNs. It was also revealed that the use of descriptive statistical analyses such as R2, mean error, standard deviation, and T and slope tests is a necessary but not sufficient condition for evaluating NN performance. The performance criteria should also include inspection of the smoothness of the predicted curves, either visually or by plotting the slopes of these curves.

  15. Critical Analysis of Strategies for Determining Rigor in Qualitative Inquiry.

    PubMed

    Morse, Janice M

    2015-09-01

    Criteria for determining the trustworthiness of qualitative research were introduced by Guba and Lincoln in the 1980s when they replaced terminology for achieving rigor, reliability, validity, and generalizability with dependability, credibility, and transferability. Strategies for achieving trustworthiness were also introduced. This landmark contribution to qualitative research remains in use today, with only minor modifications in format. Despite the significance of this contribution over the past four decades, the strategies recommended to achieve trustworthiness have not been critically examined. Recommendations for where, why, and how to use these strategies have not been developed, and how well they achieve their intended goal has not been examined. We do not know, for example, what impact these strategies have on the completed research. In this article, I critique these strategies. I recommend that qualitative researchers return to the terminology of social sciences, using rigor, reliability, validity, and generalizability. I then make recommendations for the appropriate use of the strategies recommended to achieve rigor: prolonged engagement, persistent observation, and thick, rich description; inter-rater reliability, negative case analysis; peer review or debriefing; clarifying researcher bias; member checking; external audits; and triangulation. © The Author(s) 2015.

  16. System Related Interventions to Reduce Diagnostic Error: A Narrative Review

    PubMed Central

    Singh, Hardeep; Graber, Mark L.; Kissam, Stephanie M.; Sorensen, Asta V.; Lenfestey, Nancy F.; Tant, Elizabeth M.; Henriksen, Kerm; LaBresh, Kenneth A.

    2013-01-01

    Background Diagnostic errors (missed, delayed, or wrong diagnosis) have gained recent attention and are associated with significant preventable morbidity and mortality. We reviewed the recent literature to identify interventions that have been, or could be, implemented to address systems-related factors that contribute directly to diagnostic error. Methods We conducted a comprehensive search using multiple search strategies. We first identified candidate articles in English between 2000 and 2009 from a PubMed search focused exclusively on articles related to diagnostic error or delay. We then sought additional papers from references in the initial dataset, searches of additional databases, and subject matter experts. Articles were included if they formally evaluated an intervention to prevent or reduce diagnostic error; however, we also included papers if interventions were suggested and not tested, in order to inform the state of the science on the topic. We categorized interventions according to the step in the diagnostic process they targeted: patient-provider encounter; performance and interpretation of diagnostic tests; follow-up and tracking of diagnostic information; subspecialty- and referral-related; and patient-specific. Results We identified 43 articles for full review, of which 6 reported tested interventions and 37 contained suggestions for possible interventions. Empirical studies, though somewhat positive, were non-experimental or quasi-experimental and included a small number of clinicians or health care sites. Outcome measures in general were underdeveloped and varied markedly between studies, depending on the setting or step in the diagnostic process involved. Conclusions Despite a number of suggested interventions in the literature, few empirical studies have tested interventions to reduce diagnostic error in the last decade. Advancing the science of diagnostic error prevention will require more robust study designs and rigorous definitions of diagnostic processes and outcomes to measure intervention effects. PMID:22129930

  17. Cameras and settings for optimal image capture from UAVs

    NASA Astrophysics Data System (ADS)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer-grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can lead to experiments being difficult to reproduce accurately. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well-exposed and suitable imagery are derived. This then leads to discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and make these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.

  18. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in the Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

    A framework is presented within which we provide rigorous estimations of seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimations of models and their uncertainties based on data information. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in the Bayesian inversions. Hence, reliable estimation of model parameters and their uncertainties is possible while avoiding arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data of the North Korean nuclear explosion tests. By combining the new Bayesian techniques and the structural model, together with meaningful uncertainties related to each of the processes, more quantitative monitoring and discrimination of seismic events is possible.

  19. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: the verification, which is a mathematical issue targeted to assess that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed of the code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and the solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for the code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate the plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
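
    The Richardson-extrapolation step of solution verification can be summarized, in its standard form for a constant grid-refinement ratio r (generic, not code-specific), as:

        % Observed order of accuracy p from solutions f_1, f_2, f_3 on grids of
        % spacing h, r h, r^2 h (finest to coarsest), and the extrapolated estimate:
        \[
          p = \frac{\ln\!\big[(f_3 - f_2)/(f_2 - f_1)\big]}{\ln r},
          \qquad
          f_{\mathrm{exact}} \approx f_1 + \frac{f_1 - f_2}{r^{p} - 1}.
        \]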

  20. The ILRS Contribution to ITRF2013

    NASA Astrophysics Data System (ADS)

    Pavlis, Erricos C.; Luceri, Cinzia; Sciarretta, Cecilia; Evans, Keith

    2014-05-01

    Satellite Laser Ranging (SLR) data have contributed to the definition of the International Terrestrial Reference Frame (ITRF) over the past three decades. The development of ITRF2005 ushered in a new era with the use of weekly or session contributions, allowing greater flexibility in the editing, relative weighting and the combination of information from the four contributing techniques. The new approach allows each Service to generate a solution based on the rigorous combination of the individual Analysis Centers' contributions, which provides an opportunity to verify intra-technique consistency and to compare internal procedures and adopted models. The intra- and inter-technique comparisons that the time series approach facilitates are an extremely powerful diagnostic that highlights differences and inconsistencies at the single station level. Over the past year the ILRS Analysis Working Group (AWG) worked on designing an improved ILRS contribution for the development of ITRF2013. The ILRS approach is based on the current IERS Conventions 2010 and our internal ILRS standards, with a few deviations that are documented. Since the Global Geodetic Observing System (GGOS) identified the ITRF as its key project, the ILRS has taken a two-pronged approach in order to meet its stringent goals: modernizing the engineering components (ground and space segments), and revising the modeling standards taking advantage of recent improvements in system Earth modeling. The main concern in the case of SLR is monitoring systematic errors at individual stations, accounting for undocumented discontinuities, and improving the target signature models. The latter has been addressed with the adoption of mm-level models for all of our targets. As for the station systematics, the AWG had already embarked on a major effort to improve the handling of such errors prior to the development of ITRF2008. The results of that effort formed the foundation for the re-examination of the systematic errors at all sites. The new process benefited extensively from the results of the quality control process that the ILRS provides on a daily basis as feedback to the stations, and from the recovery of systematic error corrections from the data themselves through targeted investigations. The present re-analysis extends from 1983 to the end of 2013. The data quality for the early period 1983-1993 is significantly poorer than for the recent years. However, it contributes to the overall stability of the datum definition, especially in terms of its origin and scale, and, as the more recent and higher quality data accumulate, the significance of the early data will progressively diminish. As in the case of ITRF2008, station engineers and analysts have worked together to determine the magnitude and cause of systematic errors that were noticed during the analysis, rationalize them based on events at the stations, and develop appropriate corrections whenever possible. This presentation will give an overview of the process and examples from the various steps.

  1. Construction of the Second Quito Astrolabe Catalogue

    NASA Astrophysics Data System (ADS)

    Kolesnik, Y. B.

    1994-03-01

    A method for astrolabe catalogue construction is presented. It is based on classical concepts, but the model of conditional equations for the group reduction is modified, additional parameters being introduced in the stepwise regressions. The chain adjustment is neglected, and the advantages of this approach are discussed. The method has been applied to the data obtained with the astrolabe of the Quito Astronomical Observatory from 1964 to 1983. Various characteristics of the catalogue produced with this method are compared with those obtained with the rigorous classical method. Some improvement in both systematic and random errors is outlined.

  2. Ten Commandments of Formal Methods...Ten Years Later

    NASA Technical Reports Server (NTRS)

    Bowen, Jonathan P.; Hinchey, Michael G.

    2006-01-01

    More than a decade ago, in "Ten Commandments of Formal Methods," we offered practical guidelines for projects that sought to use formal methods. Over the years, the article, which was based on our knowledge of successful industrial projects, has been widely cited and has generated much positive feedback. However, despite this apparent enthusiasm, formal methods use has not greatly increased, and some of the same attitudes about the infeasibility of adopting them persist. Formal methodists believe that introducing greater rigor will improve the software development process and yield software with better structure, greater maintainability, and fewer errors.

  3. Quest for quality care and patient safety: the case of Singapore

    PubMed Central

    Lim, M

    2004-01-01

    

 Quality of care in Singapore has seen a paradigm shift from a traditional focus on structural approaches to a broader multidimensional concept which includes the monitoring of clinical indicators and medical errors. Strong political commitment and institutional capacities have been important factors for making the transition. What is still lacking, however, is a culture of rigorous programme evaluation, public involvement, and patient empowerment. Despite these imperfections, Singapore has made considerable strides and its experience may hold lessons for other small developing countries in the common quest for quality care and patient safety. PMID:14757804

  4. Stable long-time semiclassical description of zero-point energy in high-dimensional molecular systems.

    PubMed

    Garashchuk, Sophya; Rassolov, Vitaly A

    2008-07-14

    Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.

  5. Well-tempered metadynamics: a smoothly-converging and tunable free-energy method

    NASA Astrophysics Data System (ADS)

    Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele

    2008-03-01

    We present [1] a method for determining the free energy dependence on a selected number of order parameters using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of the alanine dipeptide free energy landscape. [1] A. Barducci, G. Bussi and M. Parrinello, Phys. Rev. Lett., accepted (2007).
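
    In the well-tempered scheme introduced in the cited paper, the height of each deposited Gaussian is scaled down by the bias already accumulated; in the form usually written (with bias "temperature" ΔT setting the tunable smoothing):

        % Gaussian height deposited at time t_k at the current value s_k of the
        % collective variable, with initial height w_0:
        \[
          w_k = w_0 \, \exp\!\left[-\frac{V(s_k, t_k)}{k_B\,\Delta T}\right],
        \]
        % Long-time limit of the bias in terms of the free energy F(s):
        \[
          V(s, t \to \infty) = -\frac{\Delta T}{T + \Delta T}\, F(s) + \text{const}.
        \]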

  6. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run-efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.

  7. Adaptive neural output-feedback control for nonstrict-feedback time-delay fractional-order systems with output constraints and actuator nonlinearities.

    PubMed

    Zouari, Farouk; Ibeas, Asier; Boulkroune, Abdesselem; Cao, Jinde; Mehdi Arefi, Mohammad

    2018-06-01

    This study addresses the issue of adaptive output tracking control for a category of uncertain nonstrict-feedback delayed incommensurate fractional-order systems in the presence of nonaffine structures, unmeasured pseudo-states, unknown control directions, unknown actuator nonlinearities and output constraints. Firstly, the mean value theorem and the Gaussian error function are introduced to eliminate the difficulties that arise from the nonaffine structures and the unknown actuator nonlinearities, respectively. Secondly, the immeasurable tracking error variables are suitably estimated by constructing a fractional-order linear observer. Thirdly, the neural network, the Razumikhin Lemma, the variable separation approach, and the smooth Nussbaum-type function are used to deal with the uncertain nonlinear dynamics, the unknown time-varying delays, the nonstrict feedback and the unknown control directions, respectively. Fourthly, asymmetric barrier Lyapunov functions are employed to overcome the violation of the output constraints and to tune online the parameters of the adaptive neural controller. Through rigorous analysis, it is proved that the boundedness of all variables in the closed-loop system and semi-global asymptotic tracking are ensured without transgression of the constraints. The principal contributions of this study can be summarized as follows: (1) based on Caputo's definitions and new lemmas, methods concerning the controllability, observability and stability analysis of integer-order systems are extended to fractional-order ones, (2) the output tracking objective for a relatively large class of uncertain systems is achieved with a simple controller and fewer tuning parameters. Finally, computer-simulation studies from the robotics field are given to demonstrate the effectiveness of the proposed controller. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Sunspot activity and influenza pandemics: a statistical assessment of the purported association.

    PubMed

    Towers, S

    2017-10-01

    Since 1978, a series of papers in the literature have claimed to find a significant association between sunspot activity and the timing of influenza pandemics. This paper examines these analyses, and attempts to recreate the three most recent statistical analyses by Ertel (1994), Tapping et al. (2001), and Yeung (2006), which all have purported to find a significant relationship between sunspot numbers and pandemic influenza. As will be discussed, each analysis had errors in the data. In addition, in each analysis arbitrary selections or assumptions were made, and the authors did not assess the robustness of their analyses to changes in those arbitrary assumptions. Varying the arbitrary assumptions to other, equally valid, assumptions negates the claims of significance. Indeed, an arbitrary selection made in one of the analyses appears to have resulted in almost maximal apparent significance; changing it only slightly yields a null result. This analysis applies statistically rigorous methodology to examine the purported sunspot/pandemic link, using more statistically powerful un-binned analysis methods, rather than relying on arbitrarily binned data. The analyses are repeated using both the Wolf and Group sunspot numbers. In all cases, no statistically significant evidence of any association was found. However, while the focus in this particular analysis was on the purported relationship of influenza pandemics to sunspot activity, the faults found in the past analyses are common pitfalls; inattention to analysis reproducibility and robustness assessment are common problems in the sciences that are unfortunately not noted often enough in review.

  9. Which Interventions Have the Greatest Effect on Student Learning in Sub-Saharan Africa? "A Meta-Analysis of Rigorous Impact Evaluations"

    ERIC Educational Resources Information Center

    Conn, Katharine

    2014-01-01

    In the last three decades, there has been a large increase in the number of rigorous experimental and quasi-experimental evaluations of education programs in developing countries. These impact evaluations have taken place all over the globe, including a large number in Sub-Saharan Africa (SSA). The fact that the developing world is socially and…

  10. Space radiator simulation manual for computer code

    NASA Technical Reports Server (NTRS)

    Black, W. Z.; Wulff, W.

    1972-01-01

    A computer program that simulates the performance of a space radiator is presented. The program basically consists of a rigorous analysis of a symmetrical fin panel and an approximate analysis that predicts system characteristics for cases of non-symmetrical operation. The rigorous analysis accounts for both transient and steady state performance, including aerodynamic and radiant heating of the radiator system. The approximate analysis considers only steady state operation with no aerodynamic heating. A description of the radiator system and instructions to the user for program operation are included. The input required for the execution of all program options is described. Several examples of program output are contained in this section. Sample output includes the radiator performance during ascent, reentry and orbit.

  11. Optimal design and evaluation of a color separation grating using rigorous coupled wave analysis

    NASA Astrophysics Data System (ADS)

    Nagayoshi, Mayumi; Oka, Keiko; Klaus, Werner; Komai, Yuki; Kodate, Kashiko

    2006-02-01

    In recent years, the development and spread of color visual equipment has required technology that separates white light into the three primary colors red (R), green (G) and blue (B), adjusts the intensity of each, and recombines R, G and B to display various colors. Various color separation devices have been proposed and put to practical use in color visual equipment. We have focused on a small, lightweight grating-type device that offers the potential for cost reduction and large-scale production and that generates only the three primary colors R, G and B, so that a high saturation level can be obtained. To perform a rigorous analysis and design of color separation gratings, our group has developed a program based on the Rigorous Coupled Wave Analysis (RCWA). We then calculated the parameters needed to obtain a diffraction efficiency higher than 70% and a color gamut of about 70%. We report on the design, fabrication and evaluation of color separation gratings that have been optimized for fabrication by laser drawing.

  12. A study on the theoretical and practical accuracy of conoscopic holography-based surface measurements: toward image registration in minimally invasive surgery†

    PubMed Central

    Burgner, J.; Simpson, A. L.; Fitzpatrick, J. M.; Lathrop, R. A.; Herrell, S. D.; Miga, M. I.; Webster, R. J.

    2013-01-01

    Background Registered medical images can assist with surgical navigation and enable image-guided therapy delivery. In soft tissues, surface-based registration is often used and can be facilitated by laser surface scanning. Tracked conoscopic holography (which provides distance measurements) has been recently proposed as a minimally invasive way to obtain surface scans. Moving this technique from concept to clinical use requires a rigorous accuracy evaluation, which is the purpose of our paper. Methods We adapt recent non-homogeneous and anisotropic point-based registration results to provide a theoretical framework for predicting the accuracy of tracked distance measurement systems. Experiments are conducted on complex objects of defined geometry, an anthropomorphic kidney phantom and a human cadaver kidney. Results Experiments agree with model predictions, producing point RMS errors consistently < 1 mm, surface-based registration with mean closest point error < 1 mm in the phantom and an RMS target registration error of 0.8 mm in the human cadaver kidney. Conclusions Tracked conoscopic holography is clinically viable; it enables minimally invasive surface scan accuracy comparable to current clinical methods that require open surgery. PMID:22761086

  13. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms because of computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. l1 norms on the data and regularization terms in EIT image reconstruction address both the problem of reconstructing sharp edges and that of dealing with measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.

  14. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
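
    The uncertainty bookkeeping described here rests on the budget identity and on combining error terms across sources; in the simplest generic form, assuming independent (uncorrelated) errors and therefore simplified relative to the paper's treatment of temporally correlated errors:

        % Net uptake by land and ocean inferred as a residual of the budget:
        \[
          S_{\mathrm{net}} = E_{\mathrm{FF}} + E_{\mathrm{LUC}} - G_{\mathrm{atm}},
        \]
        % With independent errors, the combined uncertainty adds in quadrature:
        \[
          \sigma_{S_{\mathrm{net}}} = \sqrt{\sigma_{E_{\mathrm{FF}}}^2
            + \sigma_{E_{\mathrm{LUC}}}^2 + \sigma_{G_{\mathrm{atm}}}^2}.
        \]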

  15. Rigorous Electromagnetic Analysis of the Focusing Action of Refractive Cylindrical Microlens

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Gu, Ben-Yuan; Dong, Bi-Zhen; Yang, Guo-Zhen

    The focusing action of refractive cylindrical microlenses is investigated based on the rigorous electromagnetic theory with the use of the boundary element method. The focusing behaviors of these refractive microlenses with continuous and multilevel surface envelopes are characterized in terms of total electric-field patterns, the electric-field intensity distributions on the focal plane, and their diffractive efficiencies at the focal spots. The results are also compared with those obtained from Kirchhoff's scalar diffraction theory. The present numerical and graphical results may provide useful information for the analysis and design of refractive elements in micro-optics.

  16. Z-scan theoretical and experimental studies for accurate measurements of the nonlinear refractive index and absorption of optical glasses near damage threshold

    NASA Astrophysics Data System (ADS)

    Olivier, Thomas; Billard, Franck; Akhouayri, Hassan

    2004-06-01

    Self-focusing is one of the dramatic phenomena that may occur during the propagation of a high-power laser beam in a nonlinear material. This phenomenon leads to a degradation of the wave front and may also lead to photoinduced damage of the material. Realistic simulations of the propagation of high-power laser beams require an accurate knowledge of the nonlinear refractive index γ. In the particular case of fused silica and in the nanosecond regime, it seems that electronic mechanisms as well as electrostriction and thermal effects can lead to a significant refractive index variation. Compared to the different methods used to measure this parameter, the Z-scan method is simple, offers good sensitivity and may give absolute measurements if the incident beam is accurately characterized. However, this method requires a very good knowledge of the incident beam and of its propagation inside a nonlinear sample. We used a split-step propagation algorithm to simulate Z-scan curves for arbitrary beam shape, sample thickness and nonlinear phase shift. According to our simulations and a rigorous analysis of the Z-scan measured signal, it appears that some unjustified approximations lead to significant errors. Thus, by reducing possible errors in the interpretation of Z-scan experimental studies, we performed accurate measurements of the nonlinear refractive index of fused silica that show the significant contribution of nanosecond mechanisms.

  17. Methods for determining time of death.

    PubMed

    Madea, Burkhard

    2016-12-01

    Medicolegal death time estimation must determine the time since death reliably. Reliability can only be established empirically by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as ¹H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
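
    A minimal sketch of how a two-exponential cooling model can be inverted for the time since death is given below. The constants follow the commonly cited Henssge-type parameterization for cool ambient temperatures, but they are quoted here only for illustration; this is not a forensic tool and not the nomogram method itself.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def cooling_Q(t_hours, body_mass_kg):
        """Standardized temperature ratio Q(t) from a two-exponential cooling
        model (Marshall-Hoare form with Henssge-type constants, assumed here
        for ambient temperatures below roughly 23 degC; illustrative only)."""
        B = -1.2815 * body_mass_kg**-0.625 + 0.0284
        A = 1.25
        return A * np.exp(B * t_hours) - (A - 1.0) * np.exp(A / (A - 1.0) * B * t_hours)

    def time_since_death(t_rectal, t_ambient, body_mass_kg, t0=37.2):
        """Invert Q(t) = (T_rectal - T_ambient) / (T0 - T_ambient) for t."""
        q_measured = (t_rectal - t_ambient) / (t0 - t_ambient)
        return brentq(lambda t: cooling_Q(t, body_mass_kg) - q_measured, 1e-3, 120.0)

    # Hypothetical case: 75 kg body, rectal 30.5 degC, ambient 18 degC
    print(f"estimated postmortem interval: {time_since_death(30.5, 18.0, 75.0):.1f} h")
    ```

    In practice the nomogram additionally applies corrective factors (clothing, wetting, air movement) and reports an interval with empirically derived error margins rather than a single value.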

  18. Rigorous diffraction analysis using geometrical theory of diffraction for future mask technology

    NASA Astrophysics Data System (ADS)

    Chua, Gek S.; Tay, Cho J.; Quan, Chenggen; Lin, Qunying

    2004-05-01

    Advanced lithographic techniques such as phase shift masks (PSM) and optical proximity correction (OPC) result in a more complex mask design and technology. In contrast to binary masks, which have only transparent and nontransparent regions, phase shift masks also include transparent features with a different optical thickness and hence a modified phase of the transmitted light. PSMs are well known to show prominent diffraction effects, which cannot be described by the assumption of an infinitely thin mask (the Kirchhoff approach) that is used in many commercial photolithography simulators. A correct prediction of sidelobe printability, process windows, and linearity of OPC masks requires the application of rigorous diffraction theory. The problem of aerial image intensity imbalance through focus with alternating phase shift masks (altPSMs) is analyzed, and the results are compared between a time-domain finite-difference (TDFD) algorithm (TEMPEST) and the geometrical theory of diffraction (GTD). Using GTD with the solutions to the canonical problems, we obtain a relationship between an edge on the mask and the disturbance in image space. The main interest is to develop useful formulations that can be readily applied to solve rigorous diffraction problems for future mask technology. Analysis of rigorous diffraction effects for altPSMs using the GTD approach is discussed.

  19. Evaluating Manufacturing and Assembly Errors in Rotating Machinery to Enhance Component Performance

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Huff, Edward M.; Swanson, Keith (Technical Monitor)

    2001-01-01

    Manufacturing and assembly phases play a crucial role in providing products that meet the strict functional specifications associated with rotating machinery components. The errors introduced during the manufacturing and assembly of such components are correlated with the vibration and noise emanating from the final system during its operational lifetime. Vibration and noise are especially unacceptable elements in high-risk systems such as helicopters, resulting in premature component degradation and an unsafe flying environment. In such applications, individual components are often subject to 100% inspection prior to assembly, as well as during operation through rigorous maintenance, resulting in increased product development cycles and high production and operating costs. In this work, we focus on providing designers and manufacturing engineers with a technique to evaluate vibration modes and levels for each component or subsystem prior to putting them into operation. This paper presents a preliminary investigation of the correlation between vibrations and manufacturing and assembly errors using an experimental test rig, which simulates a simple bearing and shaft arrangement. A factorial design is used to study the effects of: 1) different manufacturing instances; 2) different assembly instances; and 3) varying shaft speeds. The results indicate a correlation between manufacturing or assembly errors and the vibrations measured by accelerometers. Challenges in developing a tool for DFM are identified, followed by a discussion of future work, including a real-world application to helicopter transmission vibrations.
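
    A factorial analysis of this kind can be sketched in a few lines; the example below builds a synthetic full-factorial data set (manufacturing instance x assembly instance x shaft speed) and runs a three-way ANOVA with statsmodels. The factor names, levels, and effect sizes are invented for illustration and are not the experiment described above.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(1)

    # Synthetic full-factorial data: 2 manufacturing instances x 2 assembly
    # instances x 3 shaft speeds, 4 replicates each; response is a vibration RMS.
    levels = [(m, a, s) for m in ("M1", "M2") for a in ("A1", "A2") for s in (900, 1200, 1500)]
    rows = []
    for m, a, s in levels:
        base = 1.0 + 0.3 * (m == "M2") + 0.2 * (a == "A2") + 0.0005 * s
        rows += [{"mfg": m, "assy": a, "speed": s, "rms": base + rng.normal(0, 0.05)}
                 for _ in range(4)]
    df = pd.DataFrame(rows)

    # Three-factor ANOVA with interactions; C(...) treats speed as categorical.
    model = smf.ols("rms ~ C(mfg) * C(assy) * C(speed)", data=df).fit()
    print(anova_lm(model, typ=2))
    ```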

  20. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration

    PubMed Central

    Doss, Hani; Tan, Aixin

    2017-01-01

    In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl, and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl’s are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case. PMID:28706463
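
    The basic identity behind such ratio estimates, E over πs of νl(X)/νs(X) = ml/ms, can be illustrated with a short simulation. The sketch below uses a random-walk Metropolis chain targeting πs and a simple batch-means standard error as a stand-in for the regeneration-based standard errors developed in the paper; the densities and tuning constants are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Unnormalized densities: nu_s is an unnormalized N(0, 1), nu_l an
    # unnormalized N(1, 0.8^2), so the true ratio m_l / m_s equals 0.8.
    nu_s = lambda x: np.exp(-0.5 * x**2)
    nu_l = lambda x: np.exp(-0.5 * ((x - 1.0) / 0.8) ** 2)

    # Random-walk Metropolis chain with stationary density pi_s = nu_s / m_s.
    n, x = 100_000, 0.0
    chain = np.empty(n)
    for i in range(n):
        prop = x + rng.normal(0, 1.0)
        if rng.random() < nu_s(prop) / nu_s(x):
            x = prop
        chain[i] = x

    # E_{pi_s}[nu_l / nu_s] = m_l / m_s (consistent even though draws are correlated).
    ratios = nu_l(chain) / nu_s(chain)
    estimate = ratios.mean()

    # Batch-means standard error, a simple stand-in for the regeneration-based
    # standard errors developed in the paper.
    batches = ratios[: n - n % 100].reshape(100, -1).mean(axis=1)
    se = batches.std(ddof=1) / np.sqrt(len(batches))
    print(f"m_l/m_s ~ {estimate:.3f} +/- {se:.3f}  (true value 0.8)")
    ```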

  1. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration.

    PubMed

    Doss, Hani; Tan, Aixin

    2014-09-01

    In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl, and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.

  2. An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun

    2014-05-01

    Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above-mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
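
    The flavor of such a solver can be conveyed with a generic damped Newton-Raphson sketch: all unknowns are updated simultaneously, the residual norm serves as the convergence criterion, and a damping coefficient shortens steps that would increase the residual. The toy 2x2 system below merely stands in for the critical-point conditions; it is not the SRK/PR formulation.

    ```python
    import numpy as np

    def damped_newton(F, J, x0, tol=1e-10, max_iter=100, lam0=1.0):
        """Newton-Raphson with a simple backtracking damping coefficient:
        all unknowns are updated simultaneously in each iteration, and the
        step is shortened until the residual norm decreases."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            f = F(x)
            if np.linalg.norm(f) < tol:
                return x
            step = np.linalg.solve(J(x), -f)
            lam = lam0
            while lam > 1e-4 and np.linalg.norm(F(x + lam * step)) >= np.linalg.norm(f):
                lam *= 0.5                      # damp the step
            x = x + lam * step
        raise RuntimeError("Newton iteration did not converge")

    # Toy 2x2 system standing in for the critical-point conditions:
    #   x^2 + y^2 = 4,  exp(x) + y = 1
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, np.exp(v[0]) + v[1] - 1.0])
    J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [np.exp(v[0]), 1.0]])
    print(damped_newton(F, J, x0=[1.0, 1.0]))
    ```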

  3. Uncertainty quantification in application of the enrichment meter principle for nondestructive assay of special nuclear material

    DOE PAGES

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    2015-09-05

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
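
    The random/systematic split mentioned above has a simple consequence for multi-item totals: independent random errors average down across items, while a shared systematic (e.g. calibration) error does not. The sketch below, with purely illustrative numbers not taken from the paper, checks the textbook variance formula Var(total) = n·sigma_r² + n²·sigma_s² against a Monte Carlo draw.

    ```python
    import numpy as np

    def total_mass_uncertainty(n_items, sigma_random, sigma_systematic):
        """1-sigma uncertainty of the summed mass of n nominally identical items
        when each assay has an independent random error (sigma_random) and all
        assays share one systematic (e.g. calibration) error (sigma_systematic).
        Random components average down; the shared component does not."""
        var = n_items * sigma_random**2 + n_items**2 * sigma_systematic**2
        return np.sqrt(var)

    # Illustrative numbers: 50 items, 2 g random and 0.5 g systematic error per item.
    print(total_mass_uncertainty(50, sigma_random=2.0, sigma_systematic=0.5))

    # Monte Carlo check of the same decomposition.
    rng = np.random.default_rng(3)
    draws = rng.normal(0, 2.0, (100_000, 50)).sum(axis=1) + 50 * rng.normal(0, 0.5, 100_000)
    print(draws.std())
    ```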

  4. Fully implicit moving mesh adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Serazio, C.; Chacon, L.; Lapenta, G.

    2006-10-01

    In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former is best handled with fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid-adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and considerably difficult to treat numerically. Not surprisingly, fully coupled, implicit approaches in which the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse-grid-correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We will show that the moving mesh approach is competitive with uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries will be presented. L. Chacón, G. Lapenta, J. Comput. Phys., 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006)

  5. Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System

    NASA Astrophysics Data System (ADS)

    Goluskin, David

    2018-04-01

    We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) ↦ (-x, -y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
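
    The core mechanism, nonnegativity enforced as a sum of squares and posed as a semidefinite program, can be shown on a univariate toy problem (the Lorenz bounds additionally involve auxiliary polynomials tied to the dynamics, which are omitted here). The cvxpy sketch below certifies a lower bound on x^4 - 3x^2 + 1; the monomial basis and default solver are assumptions of this illustration.

    ```python
    import cvxpy as cp

    # Certify a lower bound gamma on p(x) = x^4 - 3x^2 + 1 by writing
    # p(x) - gamma = z(x)^T Q z(x) with z = (1, x, x^2) and Q positive
    # semidefinite, i.e. as a sum of squares; maximizing gamma is an SDP.
    Q = cp.Variable((3, 3), PSD=True)
    gamma = cp.Variable()

    constraints = [            # match coefficients of 1, x, x^2, x^3, x^4
        Q[0, 0] == 1 - gamma,
        2 * Q[0, 1] == 0,
        2 * Q[0, 2] + Q[1, 1] == -3,
        2 * Q[1, 2] == 0,
        Q[2, 2] == 1,
    ]
    cp.Problem(cp.Maximize(gamma), constraints).solve()
    print(f"certified lower bound: {gamma.value:.4f}  (true minimum is -1.25)")
    ```

    Turning such a floating-point SDP solution into a rigorous statement is exactly where the interval-arithmetic or exact-solution step described above comes in.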

  6. Improved key-rate bounds for practical decoy-state quantum-key-distribution systems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng

    2017-01-01

    The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
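
    The gap between heuristic Gaussian error bars and rigorous tail bounds can be illustrated with a toy binomial example. The sketch below compares a Gaussian-approximation upper confidence bound with one from Hoeffding's inequality, used here as a simpler stand-in for the Chernoff-type bounds discussed above; the counts and failure probability are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    def gaussian_upper(k, n, eps):
        """Upper confidence bound on a probability from a Gaussian approximation
        (analytically convenient, but not rigorous for tiny failure probabilities)."""
        p_hat = k / n
        return p_hat + norm.ppf(1.0 - eps) * np.sqrt(p_hat * (1.0 - p_hat) / n)

    def hoeffding_upper(k, n, eps):
        """Rigorous upper confidence bound from Hoeffding's inequality:
        P(p > p_hat + delta) <= exp(-2 n delta^2) = eps."""
        return k / n + np.sqrt(np.log(1.0 / eps) / (2.0 * n))

    n, k, eps = 10_000, 150, 1e-10   # observed count and target failure probability
    print(f"Gaussian bound : {gaussian_upper(k, n, eps):.5f}")
    print(f"Hoeffding bound: {hoeffding_upper(k, n, eps):.5f}")
    ```

    The rigorous bound is noticeably looser, which is precisely the performance gap that tighter, tailored fluctuation analyses aim to close.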

  7. On the formulation of gravitational potential difference between the GRACE satellites based on energy integral in Earth fixed frame

    NASA Astrophysics Data System (ADS)

    Zeng, Y. Y.; Guo, J. Y.; Shang, K.; Shum, C. K.; Yu, J. H.

    2015-09-01

    Two methods for computing the gravitational potential difference (GPD) between the GRACE satellites using orbit data have been formulated based on the energy integral: one in the geocentric inertial frame (GIF) and another in the Earth-fixed frame (EFF). Here we present a rigorous theoretical formulation in EFF with particular emphasis on necessary approximations, provide a computational approach to mitigate the approximations to a negligible level, and verify our approach using simulations. We conclude that a term neglected or ignored in all former work without verification should be retained. In our simulations, 2 cycle per revolution (CPR) errors are present in the GPD computed using our formulation, and empirical removal of the 2 CPR and lower frequency errors can improve the precisions of Stokes coefficients (SCs) of degree 3 and above by 1-2 orders of magnitude. This is despite the fact that the result without removing these errors is already accurate enough. Furthermore, the relation between data errors and their influences on GPD is analysed, and a formal examination is made of the possible precision that real GRACE data may attain. The result of removing 2 CPR errors may imply that, if not taken care of properly, the values of SCs computed by means of the energy integral method using real GRACE data may be seriously corrupted by aliasing errors from possibly very large 2 CPR errors, based on two facts: (1) errors of \bar{C}_{2,0} manifest as 2 CPR errors in GPD and (2) errors of \bar{C}_{2,0} in GRACE data (the differences between the CSR monthly values of \bar{C}_{2,0} independently determined using GRACE and SLR are a reasonable measure of their magnitude) are very large. Our simulations show that, if 2 CPR errors in GPD vary from day to day as much as those corresponding to errors of \bar{C}_{2,0} from month to month, the aliasing errors of degree 15 and above SCs computed using a month's GPD data may attain a level comparable to the magnitude of the gravitational potential variation signal that GRACE was designed to recover. Consequently, we conclude that aliasing errors from 2 CPR errors in real GRACE data may be very large if not properly handled; and therefore, we propose an approach to reduce aliasing errors from 2 CPR and lower frequency errors for computing SCs above degree 2.
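
    Empirical removal of low-frequency and 2 CPR errors amounts to fitting and subtracting once- and twice-per-revolution sinusoids. The sketch below does this by least squares on a synthetic along-track series; it is only an illustration of the idea, not the processing actually applied to GRACE data, and all amplitudes are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic along-track residual series over 10 revolutions, contaminated by
    # 1 CPR and 2 CPR harmonics plus white noise; a higher-frequency "signal" is kept.
    n_rev, n_per_rev = 10, 200
    u = np.linspace(0, 2 * np.pi * n_rev, n_rev * n_per_rev)  # argument of latitude
    signal = 0.05 * np.sin(3 * u + 0.4)
    contam = 0.8 * np.cos(u) + 1.5 * np.sin(2 * u + 0.3)
    series = signal + contam + rng.normal(0, 0.02, u.size)

    # Least-squares fit of a bias plus 1 CPR and 2 CPR sinusoids, then removal.
    A = np.column_stack([np.ones_like(u),
                         np.cos(u), np.sin(u),
                         np.cos(2 * u), np.sin(2 * u)])
    coef, *_ = np.linalg.lstsq(A, series, rcond=None)
    cleaned = series - A @ coef

    print(f"rms before / after removal: {series.std():.3f} / {cleaned.std():.3f}")
    ```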

  8. Attitude output feedback control for rigid spacecraft with finite-time convergence.

    PubMed

    Hu, Qinglei; Niu, Guanglin

    2017-09-01

    The main problem addressed is the quaternion-based attitude stabilization control of rigid spacecraft without angular velocity measurements in the presence of external disturbances and reaction wheel friction as well. As a stepping stone, an angular velocity observer is proposed for the attitude control of a rigid body in the absence of angular velocity measurements. The observer design ensures finite-time convergence of angular velocity state estimation errors irrespective of the control torque or the initial attitude state of the spacecraft. Then, a novel finite-time control law is employed as the controller in which the estimate of the angular velocity is used directly. It is then shown that the observer and the controlled system form a cascaded structure, which allows the application of the finite-time stability theory of cascaded systems to prove the finite-time stability of the closed-loop system. A rigorous analysis of the proposed formulation is provided and numerical simulation studies are presented to help illustrate the effectiveness of the angular-velocity observer for rigid spacecraft attitude control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Multi-species Identification of Polymorphic Peptide Variants via Propagation in Spectral Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Seungjin; Payne, Samuel H.; Bandeira, Nuno

    The spectral networks approach enables the detection of pairs of spectra from related peptides and thus allows for the propagation of annotations from identified peptides to unidentified spectra. Beyond allowing for unbiased discovery of unexpected post-translational modifications, spectral networks are also applicable to multi-species comparative proteomics or metaproteomics to identify numerous orthologous versions of a protein. We present algorithmic and statistical advances in spectral networks that have made it possible to rigorously assess the statistical significance of spectral pairs and accurately estimate the error rate of identifications via propagation. In the analysis of three related Cyanothece species, a model organism for biohydrogen production, spectral networks identified peptides with highly divergent sequences with up to dozens of variants per peptide, including many novel peptides in species that lack a sequenced genome. Furthermore, spectral networks strongly suggested the presence of novel peptides even in genomically characterized species (i.e. missing from databases) in that a significant portion of unidentified multi-species networks included at least two polymorphic peptide variants.

  10. Antineutrino analysis for continuous monitoring of nuclear reactors: Sensitivity study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Christopher; Erickson, Anna

    This paper explores the various contributors to uncertainty on predictions of the antineutrino source term, which is used for reactor antineutrino experiments and is proposed as a safeguard mechanism for future reactor installations. The errors introduced during simulation of the reactor burnup cycle from variation in nuclear reaction cross sections, operating power, and other factors are combined with those from experimental and predicted antineutrino yields resulting from fissions, and are evaluated and compared. The most significant contributor to uncertainty on the reactor antineutrino source term, when the reactor was modeled in 3D fidelity with assembly-level heterogeneity, was found to be the uncertainty on the antineutrino yields. Using the reactor simulation uncertainty data, the dedicated observation of a rigorously modeled small, fast reactor by a few-ton near-field detector was estimated to offer a reduction of the uncertainty on antineutrino yields in the 3.0–6.5 MeV range to a few percent for the primary power-producing fuel isotopes, even with zero prior knowledge of the yields.

  11. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).

  12. On the derivation of approximations to cellular automata models and the assumption of independence.

    PubMed

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model, and the assumption of independence between the states of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is completely based on the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
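
    The gap opened by the independence assumption can be seen in a few lines: the sketch below runs a 1D exclusion-process cellular automaton with proliferation and compares its mean density with the mean-field logistic update that follows from assuming independent site occupancies. The rates, lattice size, and update rule are illustrative and are not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # 1D proliferation CA: each occupied site attempts, with probability p, to
    # place a daughter on a randomly chosen nearest neighbour; the attempt
    # succeeds only if that site is empty (exclusion).
    L, p, steps, c0 = 2000, 0.2, 60, 0.05
    lattice = rng.random(L) < c0
    ca_density = [lattice.mean()]

    for _ in range(steps):
        for i in rng.permutation(np.flatnonzero(lattice)):   # random sequential update
            if rng.random() < p:
                j = (i + rng.choice((-1, 1))) % L
                if not lattice[j]:
                    lattice[j] = True
        ca_density.append(lattice.mean())

    # Mean-field (independence) approximation: C_{t+1} = C_t + p C_t (1 - C_t).
    mf = [c0]
    for _ in range(steps):
        mf.append(mf[-1] + p * mf[-1] * (1.0 - mf[-1]))

    for t in (0, 20, 40, 60):
        print(f"t = {t:2d}: CA density = {ca_density[t]:.3f}, mean-field = {mf[t]:.3f}")
    ```

    As occupied sites cluster, the CA density lags the mean-field prediction, which is the kind of discrepancy attributed above to the independence assumption.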

  13. Opening up the black box: an introduction to qualitative research methods in anaesthesia.

    PubMed

    Shelton, C L; Smith, A F; Mort, M

    2014-03-01

    Qualitative research methods are a group of techniques designed to allow the researcher to understand phenomena in their natural setting. A wide range is used, including focus groups, interviews, observation, and discourse analysis techniques, which may be used within research approaches such as grounded theory or ethnography. Qualitative studies in the anaesthetic setting have been used to define excellence in anaesthesia, explore the reasons behind drug errors, investigate the acquisition of expertise and examine incentives for hand-hygiene in the operating theatre. Understanding how and why people act the way they do is essential for the advancement of anaesthetic practice, and rigorous, well-designed qualitative research can generate useful data and important insights. Meticulous social scientific methods, transparency, reproducibility and reflexivity are markers of quality in qualitative research. Tools such as the consolidated criteria for reporting qualitative research checklist and the critical appraisal skills programme are available to help authors, reviewers and readers unfamiliar with qualitative research assess its merits. © 2013 The Association of Anaesthetists of Great Britain and Ireland.

  14. Hierarchic Extensions in the Static and Dynamic Analysis of Elastic Beams. Ph.D. Thesis, 1990 Final Report, May 1990

    NASA Technical Reports Server (NTRS)

    Watson, Robert A.

    1991-01-01

    Approximate solutions of static and dynamic beam problems by the p-version of the finite element method are investigated. Within a hierarchy of engineering beam idealizations, rigorous formulations of the strain and kinetic energies for straight and circular beam elements are presented. These formulations include rotating coordinate system effects and geometric nonlinearities to allow for the evaluation of vertical axis wind turbines, the motivating problem for this research. Hierarchic finite element spaces, based on extensions of the polynomial orders used to approximate the displacement variables, are constructed. The developed models are implemented into a general purpose computer program for evaluation. Quality control procedures are examined for a diverse set of sample problems. These procedures include estimating discretization errors in energy norm and natural frequencies, performing static and dynamic equilibrium checks, observing convergence of quantities of interest, and comparison with more exacting theories and experimental data. It is demonstrated that p-extensions produce exponential rates of convergence in the approximation of strain energy and natural frequencies for the class of problems investigated.

  15. A 2D multi-term time and space fractional Bloch-Torrey model based on bilinear rectangular finite elements

    NASA Astrophysics Data System (ADS)

    Qin, Shanlin; Liu, Fawang; Turner, Ian W.

    2018-03-01

    The consideration of diffusion processes in magnetic resonance imaging (MRI) signal attenuation is classically described by the Bloch-Torrey equation. However, many recent works highlight the distinct deviation in MRI signal decay due to anomalous diffusion, which motivates the fractional order generalization of the Bloch-Torrey equation. In this work, we study the two-dimensional multi-term time and space fractional diffusion equation generalized from the time and space fractional Bloch-Torrey equation. By using the Galerkin finite element method with a structured mesh consisting of rectangular elements to discretize in space and the L1 approximation of the Caputo fractional derivative in time, a fully discrete numerical scheme is derived. A rigorous analysis of stability and error estimation is provided. Numerical experiments in the square and L-shaped domains are performed to give an insight into the efficiency and reliability of our method. Then the scheme is applied to solve the multi-term time and space fractional Bloch-Torrey equation, which shows that the extra time derivative terms impact the relaxation process.
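
    The time discretization referred to above, the L1 approximation of the Caputo derivative, is easy to illustrate in isolation. The sketch below applies an implicit L1 scheme to the scalar fractional relaxation problem D_t^alpha u = -u, u(0) = 1 (whose exact solution is a Mittag-Leffler function), omitting the spatial discretization and the full Bloch-Torrey model; the step count and alpha are illustrative.

    ```python
    import numpy as np
    from math import gamma

    def fractional_relaxation_L1(alpha, T=2.0, n_steps=400):
        """Implicit L1 scheme for the Caputo-fractional relaxation problem
        D_t^alpha u = -u, u(0) = 1."""
        tau = T / n_steps
        c = tau**-alpha / gamma(2.0 - alpha)
        # L1 weights b_j = (j+1)^{1-alpha} - j^{1-alpha}
        b = (np.arange(1, n_steps + 1) ** (1.0 - alpha)
             - np.arange(0, n_steps) ** (1.0 - alpha))
        u = np.empty(n_steps + 1)
        u[0] = 1.0
        for n in range(1, n_steps + 1):
            # history sum S = sum_{j=1}^{n-1} b_j (u^{n-j} - u^{n-j-1})
            diffs = u[1:n][::-1] - u[0:n - 1][::-1]
            S = np.dot(b[1:n], diffs)
            u[n] = c * (u[n - 1] - S) / (c + 1.0)
        return np.linspace(0.0, T, n_steps + 1), u

    t, u = fractional_relaxation_L1(alpha=0.8)
    # As alpha -> 1 the scheme reduces to backward Euler for u' = -u.
    print(f"u(T) for alpha = 0.8: {u[-1]:.4f}")
    ```

    The full scheme couples these temporal weights with the bilinear rectangular finite element discretization in space; the memory term S is what makes time-fractional problems more expensive than classical ones.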

  16. Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.

    PubMed

    Chen, Mou; Tao, Gang

    2016-08-01

    In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimations. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis and the satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.

  17. The causes of and factors associated with prescribing errors in hospital inpatients: a systematic review.

    PubMed

    Tully, Mary P; Ashcroft, Darren M; Dornan, Tim; Lewis, Penny J; Taylor, David; Wass, Val

    2009-01-01

    Prescribing errors are common; they result in adverse events and harm to patients, and it is unclear how best to prevent them because recommendations are more often based on surmised rather than empirically collected data. The aim of this systematic review was to identify all informative published evidence concerning the causes of and factors associated with prescribing errors in specialist and non-specialist hospitals, collate it, analyse it qualitatively and synthesize conclusions from it. Seven electronic databases were searched for articles published between 1985 and July 2008. The reference lists of all informative studies were searched for additional citations. To be included, a study had to be of handwritten prescriptions for adult or child inpatients that reported empirically collected data on the causes of or factors associated with errors. Publications in languages other than English and studies that evaluated errors for only one disease, one route of administration or one type of prescribing error were excluded. Seventeen papers reporting 16 studies, selected from 1268 papers identified by the search, were included in the review. Studies from the US and the UK in university-affiliated hospitals predominated (10/16 [62%]). The definition of a prescribing error varied widely and the included studies were highly heterogeneous. Causes were grouped according to Reason's model of accident causation into active failures, error-provoking conditions and latent conditions. The active failure most frequently cited was a mistake due to inadequate knowledge of the drug or the patient. Skill-based slips and memory lapses were also common. Where error-provoking conditions were reported, there was at least one per error. These included lack of training or experience, fatigue, stress, high workload for the prescriber and inadequate communication between healthcare professionals. Latent conditions included reluctance to question senior colleagues and inadequate provision of training. Prescribing errors are often multifactorial, with several active failures and error-provoking conditions often acting together to cause them. In the face of such complexity, solutions addressing a single cause, such as lack of knowledge, are likely to have only limited benefit. Further rigorous study, seeking potential ways of reducing error, needs to be conducted. Multifactorial interventions across many parts of the system are likely to be required.

  18. Failure-Modes-And-Effects Analysis Of Software Logic

    NASA Technical Reports Server (NTRS)

    Garcia, Danny; Hartline, Thomas; Minor, Terry; Statum, David; Vice, David

    1996-01-01

    Rigorous analysis applied early in the design effort. A method of identifying potential inadequacies and the modes and effects of failures caused by those inadequacies (failure-modes-and-effects analysis, or "FMEA" for short) has been devised for application to software logic.

  19. Chain representations of Open Quantum Systems and Lieb-Robinson like bounds for the dynamics

    NASA Astrophysics Data System (ADS)

    Woods, Mischa

    2013-03-01

    This talk is concerned with the mapping of the Hamiltonian of open quantum systems onto chain representations, which forms the basis for a rigorous theory of the interaction of a system with its environment. This mapping proceeds iteratively, giving rise to a sequence of residual spectral densities of the system. The rigorous mathematical properties of this mapping have been unknown so far. Here we develop the theory of secondary measures to derive an analytic expression for the sequence solely in terms of the initial measure and its associated orthogonal polynomials of the first and second kind. These mappings can be thought of as taking a highly nonlocal Hamiltonian to a local Hamiltonian. In the latter, a Lieb-Robinson-like bound for the dynamics of the open quantum system makes sense. We develop analytical bounds on the error in observables of the system as a function of time when the semi-infinite chain is truncated at some finite length. The fact that this is possible shows that there is a finite "speed of sound" in these chain representations. This has many implications for the simulatability of open quantum systems of this type and demonstrates that a truncated chain can faithfully reproduce the dynamics at shorter times. These results make a significant and mathematically rigorous contribution to the understanding of the theory of open quantum systems, and pave the way towards the efficient simulation of these systems, which, within standard methods, is often an intractable problem. EPSRC CDT in Controlled Quantum Dynamics, EU STREP project and Alexander von Humboldt Foundation.

  20. The DOZZ formula from the path integral

    NASA Astrophysics Data System (ADS)

    Kupiainen, Antti; Rhodes, Rémi; Vargas, Vincent

    2018-05-01

    We present a rigorous proof of the Dorn, Otto, Zamolodchikov, Zamolodchikov formula (the DOZZ formula) for the 3 point structure constants of Liouville Conformal Field Theory (LCFT) starting from a rigorous probabilistic construction of the functional integral defining LCFT given earlier by the authors and David. A crucial ingredient in our argument is a probabilistic derivation of the reflection relation in LCFT based on a refined tail analysis of Gaussian multiplicative chaos measures.

  1. Robust approximation-free prescribed performance control for nonlinear systems and its application

    NASA Astrophysics Data System (ADS)

    Sun, Ruisheng; Na, Jing; Zhu, Bin

    2018-02-01

    This paper presents a robust prescribed performance control approach and its application to nonlinear tail-controlled missile systems with unknown dynamics and uncertainties. The idea of prescribed performance function (PPF) is incorporated into the control design, such that both the steady-state and transient control performance can be strictly guaranteed. Unlike conventional PPF-based control methods, we further tailor a recently proposed systematic control design procedure (i.e. approximation-free control) using the transformed tracking error dynamics, which provides a proportional-like control action. Hence, the function approximators (e.g. neural networks, fuzzy systems) that are widely used to address the unknown nonlinearities in the nonlinear control designs are not needed. The proposed control design leads to a robust yet simplified function approximation-free control for nonlinear systems. The closed-loop system stability and the control error convergence are all rigorously proved. Finally, comparative simulations are conducted based on nonlinear missile systems to validate the improved response and the robustness of the proposed control method.
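
    The error transformation at the heart of such PPF-based, approximation-free designs can be illustrated on a scalar plant. In the sketch below the plant nonlinearity is treated as unknown by the controller, the performance funnel rho(t) and gain k are illustrative, and no claim is made that this reproduces the missile application or the authors' exact control law.

    ```python
    import numpy as np

    # Scalar illustration of prescribed-performance, approximation-free control:
    # plant x' = sin(x) + u with f(x) = sin(x) treated as unknown, tracking
    # x_d(t) = sin(0.5 t). All gains and funnel parameters are illustrative.
    dt, T = 1e-3, 15.0
    t = np.arange(0.0, T, dt)
    x_d = np.sin(0.5 * t)

    rho0, rho_inf, decay, k = 1.0, 0.05, 1.0, 2.0
    rho = (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf   # prescribed performance funnel

    x = 0.5
    err_hist = np.empty(t.size)
    for i in range(t.size):
        e = x - x_d[i]
        xi = np.clip(e / rho[i], -0.999, 0.999)   # normalized error (numerical safeguard)
        eps = np.log((1.0 + xi) / (1.0 - xi))     # transformed error, blows up at the funnel edge
        u = -k * eps                              # proportional-like, approximation-free law
        x = x + dt * (np.sin(x) + u)              # Euler step of the "unknown" plant
        err_hist[i] = e

    tail = np.abs(err_hist[t > T - 5.0]).max()
    print(f"max |e| over the last 5 s: {tail:.4f} (funnel bound rho_inf = {rho_inf})")
    ```

    The transformed error diverges as the tracking error approaches the funnel boundary, so the proportional-like action automatically grows as hard as needed without any function approximator.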

  2. Optical proximity correction for anamorphic extreme ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas

    2017-10-01

    The change from isomorphic to anamorphic optics in high numerical aperture (NA) extreme ultraviolet (EUV) scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated, and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking (MRC). OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs which are more tolerant to mask errors.

  3. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint.

    PubMed

    Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong

    2015-12-02

    For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, allowing the search to be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some of the vector candidates are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector errors and can perform robustly when the angular error is not large.

  4. Towards tests of quark-hadron duality with functional analysis and spectral function data

    NASA Astrophysics Data System (ADS)

    Boito, Diogo; Caprini, Irinel

    2017-04-01

    The presence of terms that violate quark-hadron duality in the expansion of QCD Green's functions is a generally accepted fact. Recently, a new approach was proposed for the study of duality violations (DVs), which exploits the existence of a rigorous lower bound on the functional distance, measured in a certain norm, between a "true" correlator and its approximant calculated theoretically along a contour in the complex energy plane. In the present paper, we pursue the investigation of functional-analysis-based tests towards their application to real spectral function data. We derive a closed analytic expression for the minimal functional distance based on the general weighted L2 norm and discuss its relation with the distance measured in the L∞ norm. Using fake data sets obtained from a realistic toy model in which we allow for covariances inspired from the publicly available ALEPH spectral functions, we obtain, by Monte Carlo simulations, the statistical distribution of the strength parameter that measures the magnitude of the DV term added to the usual operator product expansion. The results show that, if the region with large errors near the end point of the spectrum in τ decays is excluded, the functional-analysis-based tests using either L2 or L∞ norms are able to detect, in a statistically significant way, the presence of DVs in realistic spectral function pseudodata.

  5. Intrinsic measurement errors for the speed of light in vacuum

    NASA Astrophysics Data System (ADS)

    Braun, Daniel; Schneiter, Fabienne; Fischer, Uwe R.

    2017-09-01

    The speed of light in vacuum, one of the most important and precisely measured natural constants, is fixed by convention to c=299 792 458 m s-1 . Advanced theories predict possible deviations from this universal value, or even quantum fluctuations of c. Combining arguments from quantum parameter estimation theory and classical general relativity, we here establish rigorously the existence of lower bounds on the uncertainty to which the speed of light in vacuum can be determined in a given region of space-time, subject to several reasonable restrictions. They provide a novel perspective on the experimental falsifiability of predictions for the quantum fluctuations of space-time.

  6. Implicit filtered P_N for high-energy density thermal radiation transport using discontinuous Galerkin finite elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laboure, Vincent M., E-mail: vincent.laboure@tamu.edu; McClarren, Ryan G., E-mail: rgm@tamu.edu; Hauck, Cory D., E-mail: hauckc@ornl.gov

    2016-09-15

    In this work, we provide a fully-implicit implementation of the time-dependent, filtered spherical harmonics (FP_N) equations for non-linear, thermal radiative transfer. We investigate local filtering strategies and analyze the effect of the filter on the conditioning of the system, showing in particular that the filter improves the convergence properties of the iterative solver. We also investigate numerically the rigorous error estimates derived in the linear setting, to determine whether they hold also for the non-linear case. Finally, we simulate a standard test problem on an unstructured mesh and make comparisons with implicit Monte Carlo (IMC) calculations.

  7. Educational Testing and Validity of Conclusions in the Scholarship of Teaching and Learning

    PubMed Central

    Beltyukova, Svetlana A.; Martin, Beth A.

    2013-01-01

    Validity and its integral evidence of reliability are fundamentals for educational and psychological measurement, and standards of educational testing. Herein, we describe these standards of educational testing, along with their subtypes including internal consistency, inter-rater reliability, and inter-rater agreement. Next, related issues of measurement error and effect size are discussed. This article concludes with a call for future authors to improve reporting of psychometrics and practical significance with educational testing in the pharmacy education literature. By increasing the scientific rigor of educational research and reporting, the overall quality and meaningfulness of SoTL will be improved. PMID:24249848

  8. Scientific approaches to science policy.

    PubMed

    Berg, Jeremy M

    2013-11-01

    The development of robust science policy depends on use of the best available data, rigorous analysis, and inclusion of a wide range of input. While director of the National Institute of General Medical Sciences (NIGMS), I took advantage of available data and emerging tools to analyze training time distribution by new NIGMS grantees, the distribution of the number of publications as a function of total annual National Institutes of Health support per investigator, and the predictive value of peer-review scores on subsequent scientific productivity. Rigorous data analysis should be used to develop new reforms and initiatives that will help build a more sustainable American biomedical research enterprise.

  9. Resolution, uncertainty and data predictability of tomographic Lg attenuation models—application to Southeastern China

    NASA Astrophysics Data System (ADS)

    Chen, Youlin; Xie, Jiakang

    2017-07-01

    We address two fundamental issues that pertain to Q tomography using high-frequency regional waves, particularly the Lg wave. The first issue is that Q tomography uses complex 'reduced amplitude data' as input. These data are generated by taking the logarithm of the product of (1) the observed amplitudes and (2) the simplified 1D geometrical spreading correction. They are thereby subject to 'modeling errors' that are dominated by uncompensated 3D structural effects; however, no knowledge of the statistical behaviour of these errors exists to justify the widely used least-squares methods for solving Q tomography. The second issue is that Q tomography has been solved using various iterative methods such as LSQR (Least-Squares QR, where QR refers to a QR factorization of a matrix into the product of an orthogonal matrix Q and an upper triangular matrix R) and SIRT (Simultaneous Iterative Reconstruction Technique) that do not allow for the quantitative estimation of model resolution and error. In this study, we conduct the first rigorous analysis of the statistics of the reduced amplitude data and find that the data error distribution is predominantly normal, but with long-tailed outliers. This distribution is similar to that of teleseismic traveltime residuals. We develop a screening procedure to remove outliers so that data closely follow a normal distribution. Next, we develop an efficient tomographic method based on the PROPACK software package to perform singular value decomposition on a data kernel matrix, which enables us to solve for the inverse, model resolution and covariance matrices along with the optimal Q model. These matrices permit various quantitative model appraisals, including the evaluation of the formal resolution and error. Further, they allow formal uncertainty estimates of predicted data (Q) along future paths to be made at any specified confidence level. This new capability significantly benefits the practical missions of source identification and source size estimation, for which reliable uncertainty estimates are especially important. We apply the new methodologies to data from southeastern China to obtain a 1 Hz Lg Q model, which exhibits patterns consistent with what is known about the geology and tectonics of the region. We also solve for the site response model.
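
    On a small synthetic linear problem, the SVD-based machinery (model estimate, resolution matrix, covariance matrix, and formal uncertainty of a predicted datum along a future path) looks as follows. This is only a schematic stand-in for the PROPACK-based large-scale implementation, and the kernel, truncation level, and noise level are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Small synthetic linear tomography problem d = G m + noise.
    n_rays, n_cells, sigma_d = 60, 20, 0.05
    G = rng.random((n_rays, n_cells))          # synthetic path-length kernel
    m_true = rng.normal(0, 1, n_cells)
    d = G @ m_true + rng.normal(0, sigma_d, n_rays)

    # Truncated SVD inverse: keep the k largest singular values.
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    k = 15
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T

    m_hat = Vk @ ((Uk.T @ d) / sk)                    # estimated model
    R = Vk @ Vk.T                                     # model resolution matrix
    C_m = sigma_d**2 * Vk @ np.diag(sk**-2) @ Vk.T    # model covariance matrix

    print("diag(R) range:", R.diagonal().min().round(2), R.diagonal().max().round(2))
    print("largest model 1-sigma:", np.sqrt(C_m.diagonal()).max().round(3))

    # Formal 95% uncertainty of a predicted datum along a hypothetical future ray g_new.
    g_new = rng.random(n_cells)
    pred_sigma = np.sqrt(g_new @ C_m @ g_new)
    print("predicted datum:", (g_new @ m_hat).round(3), "+/-", (1.96 * pred_sigma).round(3))
    ```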

  10. Evaluation of Classifier Performance for Multiclass Phenotype Discrimination in Untargeted Metabolomics.

    PubMed

    Trainor, Patrick J; DeFilippis, Andrew P; Rai, Shesh N

    2017-06-21

    Statistical classification is a critical component of utilizing metabolomics data for examining the molecular determinants of phenotypes. Despite this, a comprehensive and rigorous evaluation of the accuracy of classification techniques for phenotype discrimination given metabolomics data has not been conducted. We conducted such an evaluation using both simulated and real metabolomics datasets, comparing Partial Least Squares-Discriminant Analysis (PLS-DA), Sparse PLS-DA, Random Forests, Support Vector Machines (SVM), Artificial Neural Network, k-Nearest Neighbors (k-NN), and Naïve Bayes classification techniques for discrimination. We evaluated the techniques on simulated data generated to mimic global untargeted metabolomics data by incorporating realistic block-wise correlation and partial correlation structures for mimicking the correlations and metabolite clustering generated by biological processes. Over the simulation studies, covariance structures, means, and effect sizes were stochastically varied to provide consistent estimates of classifier performance over a wide range of possible scenarios. The effects of the presence of non-normal error distributions, the introduction of biological and technical outliers, unbalanced phenotype allocation, missing values due to abundances below a limit of detection, and the effect of prior-significance filtering (dimension reduction) were evaluated via simulation. In each simulation, classifier parameters, such as the number of hidden nodes in a Neural Network, were optimized by cross-validation to minimize the probability of detecting spurious results due to poorly tuned classifiers. Classifier performance was then evaluated using real metabolomics datasets of varying sample medium, sample size, and experimental design. We report that in the most realistic simulation studies that incorporated non-normal error distributions, unbalanced phenotype allocation, outliers, missing values, and dimension reduction, classifier performance (least to greatest error) was ranked as follows: SVM, Random Forest, Naïve Bayes, sPLS-DA, Neural Networks, PLS-DA and k-NN classifiers. When non-normal error distributions were introduced, the performance of PLS-DA and k-NN classifiers deteriorated further relative to the remaining techniques. Over the real datasets, a trend of better performance of the SVM and Random Forest classifiers was observed.
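
    A minimal cross-validated comparison of several of the classifiers listed above can be set up with scikit-learn as sketched below. The synthetic data only mimic the flavor of the study's simulations (many correlated features, mild class imbalance), PLS-DA variants are omitted because they are not part of scikit-learn, and all hyperparameters are illustrative.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic "metabolomics-like" data: many correlated features, 3 phenotypes,
    # mild class imbalance.
    X, y = make_classification(n_samples=150, n_features=300, n_informative=25,
                               n_redundant=75, n_classes=3, n_clusters_per_class=2,
                               weights=[0.5, 0.3, 0.2], class_sep=1.0, random_state=0)

    classifiers = {
        "SVM (RBF)": SVC(C=1.0, gamma="scale"),
        "Random Forest": RandomForestClassifier(n_estimators=500, random_state=0),
        "Naive Bayes": GaussianNB(),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
        "Neural Net": MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0),
    }

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for name, clf in classifiers.items():
        pipe = make_pipeline(StandardScaler(), clf)      # scale inside each fold
        scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
        print(f"{name:14s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
    ```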

  11. Analysis of fast boundary-integral approximations for modeling electrostatic contributions of molecular binding

    PubMed Central

    Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.

    2013-01-01

    We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins—a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein–protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly less well in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models’ differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561

  12. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    PubMed

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. The Evolution of a More Rigorous Approach to Benefit Transfer: Benefit Function Transfer

    NASA Astrophysics Data System (ADS)

    Loomis, John B.

    1992-03-01

    The desire for economic values of recreation for unstudied recreation resources dates back to the water resource development benefit-cost analyses of the early 1960s. Rather than simply applying existing estimates of benefits per trip to the study site, a fairly rigorous approach was developed by a number of economists. This approach involves application of travel cost demand equations and contingent valuation benefit functions from existing sites to the new site. In this way the spatial market of the new site (i.e., its differing own price, substitute prices and population distribution) is accounted for in the new estimate of total recreation benefits. The assumptions of benefit transfer from recreation sites in one state to another state for the same recreation activity are empirically tested. The equality of demand coefficients for ocean sport salmon fishing in Oregon versus Washington and for freshwater steelhead fishing in Oregon versus Idaho is rejected. Thus transfer of either demand equations or average benefits per trip is likely to be in error. Using the Oregon steelhead equation, benefit transfers to rivers within the state are shown to be accurate to within 5-15%.

  14. Relationship between Recent Flight Experience and Pilot Error General Aviation Accidents

    NASA Astrophysics Data System (ADS)

    Nilsson, Sarah J.

    Aviation insurance agents and fixed-base operation (FBO) owners use recent flight experience, as implied by the 90-day rule, to measure pilot proficiency in physical airplane skills, and to assess the likelihood of a pilot error accident. The generally accepted premise is that more experience in a recent timeframe predicts less of a propensity for an accident, all other factors excluded. Some of these aviation industry stakeholders measure pilot proficiency solely by using time flown within the past 90, 60, or even 30 days, not accounting for extensive research showing aeronautical decision-making and situational awareness training decrease the likelihood of a pilot error accident. In an effort to reduce the pilot error accident rate, the Federal Aviation Administration (FAA) has seen the need to shift pilot training emphasis from proficiency in physical airplane skills to aeronautical decision-making and situational awareness skills. However, current pilot training standards still focus more on the former than on the latter. The relationship between pilot error accidents and recent flight experience implied by the FAA's 90-day rule has not been rigorously assessed using empirical data. The intent of this research was to relate recent flight experience, in terms of time flown in the past 90 days, to pilot error accidents. A quantitative ex post facto approach, focusing on private pilots of single-engine general aviation (GA) fixed-wing aircraft, was used to analyze National Transportation Safety Board (NTSB) accident investigation archival data. The data were analyzed using t-tests and binary logistic regression. T-tests between the mean number of hours of recent flight experience of tricycle gear pilots involved in pilot error accidents (TPE) and non-pilot error accidents (TNPE), t(202) = -.200, p = .842, and conventional gear pilots involved in pilot error accidents (CPE) and non-pilot error accidents (CNPE), t(111) = -.271, p = .787, indicate that there is no statistically significant difference between the groups. Binary logistic regression indicates that recent flight experience does not reliably distinguish between pilot error and non-pilot error accidents for TPE/TNPE, χ2 = 0.040 (df = 1, p = .841), and CPE/CNPE, χ2 = 0.074 (df = 1, p = .786). Future research could focus on different pilot populations, and to broaden the scope, analyze several years of data.

  15. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.

    2014-10-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s. At the same time, land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use; thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
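
    To illustrate the role of temporally correlated random error described above, the following hedged sketch computes the 2σ uncertainty of a decadal mean of annual estimates whose errors follow an AR(1) process; the numbers are illustrative, not the study's values:

```python
# Sketch under an AR(1) assumption: correlated annual errors inflate the
# uncertainty of a decadal-mean quantity relative to the uncorrelated case.
import numpy as np

def decadal_mean_2sigma(sigma_annual, rho, n_years=10):
    """2-sigma uncertainty of an N-year mean with AR(1)-correlated annual errors."""
    k = np.arange(1, n_years)
    var_mean = (sigma_annual**2 / n_years) * (
        1.0 + (2.0 / n_years) * np.sum((n_years - k) * rho**k)
    )
    return 2.0 * np.sqrt(var_mean)

# Uncorrelated vs. strongly correlated annual errors of 0.5 Pg C yr^-1 (hypothetical)
print(decadal_mean_2sigma(0.5, rho=0.0))   # ~0.32 Pg C yr^-1
print(decadal_mean_2sigma(0.5, rho=0.8))   # correlation substantially inflates the decadal error
```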

  16. Mechanical properties of frog skeletal muscles in iodoacetic acid rigor.

    PubMed Central

    Mulvany, M J

    1975-01-01

    1. Methods have been developed for describing the length:tension characteristics of frog skeletal muscles which go into rigor at 4 degrees C following iodoacetic acid poisoning either in the presence of Ca2+ (Ca-rigor) or its absence (Ca-free-rigor). 2. Such rigor muscles showed less resistance to slow stretch (slow rigor resistance) than to fast stretch (fast rigor resistance). The slow and fast rigor resistances of Ca-free-rigor muscles were much lower than those of Ca-rigor muscles. 3. The slow rigor resistance of Ca-rigor muscles was proportional to the amount of overlap between the contractile filaments present when the muscles were put into rigor. 4. Withdrawing Ca2+ from Ca-rigor muscles (induced-Ca-free rigor) reduced their slow and fast rigor resistances. Readdition of Ca2+ (but not Mg2+, Mn2+ or Sr2+) reversed the effect. 5. The slow and fast rigor resistances of Ca-rigor muscles (but not of Ca-free-rigor muscles) decreased with time. 6. The sarcomere structure of Ca-rigor and induced-Ca-free rigor muscles stretched by 0.2 l0 was destroyed in proportion to the amount of stretch, but the lengths of the remaining intact sarcomeres were essentially unchanged. This suggests that there had been a successive yielding of the weakest sarcomeres. 7. The difference between the slow and fast rigor resistance and the effect of calcium on these resistances are discussed in relation to possible variations in the strength of crossbridges between the thick and thin filaments. PMID:1082023

  17. From virtual clustering analysis to self-consistent clustering analysis: a mathematical study

    NASA Astrophysics Data System (ADS)

    Tang, Shaoqiang; Zhang, Lei; Liu, Wing Kam

    2018-03-01

    In this paper, we propose a new homogenization algorithm, virtual clustering analysis (VCA), as well as provide a mathematical framework for the recently proposed self-consistent clustering analysis (SCA) (Liu et al. in Comput Methods Appl Mech Eng 306:319-341, 2016). In the mathematical theory, we clarify the key assumptions and ideas of VCA and SCA, and derive the continuous and discrete Lippmann-Schwinger equations. Based on a key postulation of "once response similarly, always response similarly", clustering is performed in an offline stage by machine learning techniques (k-means and SOM), and facilitates substantial reduction of computational complexity in an online predictive stage. The clear mathematical setup allows for the first time a convergence study of clustering refinement in one space dimension. Convergence is proved rigorously, and found to be of second order from numerical investigations. Furthermore, we propose to suitably enlarge the domain in VCA, such that the boundary terms may be neglected in the Lippmann-Schwinger equation, by virtue of Saint-Venant's principle. In contrast, these boundary terms were not obtained in the original SCA paper, and we find that they may well be responsible for the numerical dependency on the choice of reference material property. Since VCA enhances accuracy by overcoming this modeling error and reduces the numerical cost by avoiding the outer-loop iteration used in SCA to attain material property consistency, its efficiency is expected to be even higher than that of the recently proposed SCA algorithm.
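
    A hedged sketch of the offline clustering stage common to SCA/VCA-type methods: material points are grouped by the similarity of a precomputed response field so that the online stage solves a reduced system with one unknown per cluster. The response values and cluster count below are synthetic placeholders:

```python
# Illustrative only: clusters a hypothetical strain-concentration field with k-means,
# then forms cluster-averaged responses (the reduced degrees of freedom).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
response = rng.normal(loc=1.0, scale=0.2, size=(10_000, 1))  # hypothetical response at material points

k = 16  # number of clusters; controls the reduced model's resolution
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(response)

# One unknown per cluster instead of one per material point
cluster_means = np.array([response[labels == c].mean() for c in range(k)])
print(cluster_means.round(3))
```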

  18. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only generally available function-fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
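
    The core idea can be sketched for a linear model: fit with ordinary diagonal-weight least squares, but propagate the full data covariance into the parameter errors via a sandwich formula. This is a hedged illustration of the general principle, not the authors' WLS-ICE code; the model and covariance are synthetic:

```python
# Linear fit y = D*t + c to correlated data; parameter errors use the full covariance C.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(1.0, 10.0, 20)
A = np.column_stack([t, np.ones_like(t)])          # design matrix

# Illustrative full covariance with exponentially decaying temporal correlation
sigma = 0.1 * t
C = np.outer(sigma, sigma) * np.exp(-np.abs(t[:, None] - t[None, :]) / 3.0)
y = A @ np.array([2.0, 0.5]) + rng.multivariate_normal(np.zeros(t.size), C)

W = np.diag(1.0 / np.diag(C))                      # weights ignore correlations
H = np.linalg.inv(A.T @ W @ A)
theta = H @ A.T @ W @ y                            # WLS point estimate

# Error estimate including temporal correlations (sandwich covariance)
cov_theta = H @ A.T @ W @ C @ W @ A @ H
print("fit:", theta, "1-sigma errors:", np.sqrt(np.diag(cov_theta)))
```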

  19. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
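
    As a hedged, greatly simplified illustration of the estimator choices mentioned above, the following scalar Kalman filter refines an a priori pointing bias frame by frame from noisy registration measurements; the states, noise levels, and data are hypothetical, and the paper's actual filter operates on full trajectory and attitude states:

```python
# Toy scalar Kalman filter: sequential correction of a nearly constant bias.
import numpy as np

rng = np.random.default_rng(3)
true_bias = 0.8                          # hypothetical constant pointing bias (e.g., mrad)
z = true_bias + rng.normal(0.0, 0.2, 50) # per-frame registration measurements

x, P = 0.0, 1.0                          # a priori state and variance
R, Q = 0.2**2, 1e-6                      # measurement and process noise variances
for zk in z:
    P += Q                               # predict (bias assumed nearly constant)
    K = P / (P + R)                      # Kalman gain
    x += K * (zk - x)                    # update with the measurement innovation
    P *= (1.0 - K)                       # a posteriori variance

print(f"estimated bias = {x:.3f} +/- {np.sqrt(P):.3f}")
```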

  20. Colorimetric Characterization of Mobile Devices for Vision Applications.

    PubMed

    de Fez, Dolores; Luque, Maria José; García-Domene, Maria Carmen; Camps, Vicente; Piñero, David

    2016-01-01

    Available applications for vision testing in mobile devices usually do not include detailed setup instructions, sacrificing rigor to obtain portability and ease of use. In particular, colorimetric characterization processes are generally obviated. We show that different mobile devices also differ in colorimetric profile and that those differences limit the range of applications for which they are most adequate. The color reproduction characteristics of four mobile devices, two smartphones (Samsung Galaxy S4, iPhone 4s) and two tablets (Samsung Galaxy Tab 3, iPad 4), have been evaluated using two procedures: 3D LUT (Look Up Table) and a linear model assuming primary constancy and independence of the channels. The color reproduction errors have been computed with the CIEDE2000 color difference formula. There is good constancy of primaries but large deviations of additivity. The 3D LUT characterization yields smaller reproduction errors and dispersions for the Tab 3 and iPhone 4 devices, but for the iPad 4 and S4, both models are equally good. The smallest reproduction errors occur with both Apple devices, although the iPad 4 has the highest number of outliers of all devices with both colorimetric characterizations. Even though there is good constancy of primaries, the large deviations of additivity exhibited by the devices and the larger reproduction errors make any characterization based on channel independence inadvisable. The smartphone screens show, on average, the best color reproduction performance, particularly the iPhone 4, and therefore, they are more adequate for applications requiring precise color reproduction.
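
    A hedged sketch of the "linear model" characterization described above: assuming channel independence and primary constancy, predicted XYZ is a weighted sum of the measured XYZ of the full-drive primaries after per-channel tone curves. The primary matrix and gamma value below are hypothetical, not measurements from the studied devices:

```python
import numpy as np

# Columns: assumed XYZ of full-drive red, green, blue primaries (illustrative values)
M = np.array([[41.2, 35.8, 18.0],
              [21.3, 71.5,  7.2],
              [ 1.9, 11.9, 95.0]])
gamma = 2.2  # assumed per-channel tone-response exponent

def rgb_to_xyz(rgb_digital, bits=8):
    """Predict XYZ from device RGB under the channel-independence assumption."""
    linear = (np.asarray(rgb_digital) / (2**bits - 1)) ** gamma
    return M @ linear

print(rgb_to_xyz([255, 128, 0]))   # predicted XYZ for an orange stimulus
```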

  1. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    DOE PAGES

    Ballantyne, A. P.; Andres, R.; Houghton, R.; ...

    2015-04-30

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.

  2. On analyticity of linear waves scattered by a layered medium

    NASA Astrophysics Data System (ADS)

    Nicholls, David P.

    2017-10-01

    The scattering of linear waves by periodic structures is a crucial phenomenon in many branches of applied physics and engineering. In this paper we establish rigorous analytic results necessary for the proper numerical analysis of a class of High-Order Perturbation of Surfaces methods for simulating such waves. More specifically, we prove a theorem on existence and uniqueness of solutions to a system of partial differential equations which model the interaction of linear waves with a multiply layered periodic structure in three dimensions. This result provides hypotheses under which a rigorous numerical analysis could be conducted for recent generalizations to the methods of Operator Expansions, Field Expansions, and Transformed Field Expansions.

  3. An ArcGIS decision support tool for artificial reefs site selection (ArcGIS ARSS)

    NASA Astrophysics Data System (ADS)

    Stylianou, Stavros; Zodiatis, George

    2017-04-01

    Although the use and benefits of artificial reefs, both socio-economic and environmental, have been recognized by research and national development programmes worldwide, their development is rarely subjected to a rigorous site selection process, and the majority of projects use the traditional (non-GIS) approach based on trial and error. Recent studies have shown that the use of Geographic Information Systems for identifying suitable areas for artificial reef siting, unlike traditional methods, offers a number of distinct advantages, minimizing possible errors, time and cost. A decision support tool (DSS) has been developed based on existing knowledge, multi-criteria decision analysis techniques and the GIS approach used in previous studies, in order to help stakeholders identify the optimal locations for artificial reef deployment on the basis of the physical, biological, oceanographic and socio-economic features of the sites. The tool provides users with the ability to produce a final report with the results and suitability maps. The ArcGIS ARSS tool runs within the existing ArcMap 10.2.x environment and was developed with the VB.NET high-level programming language along with ArcObjects 10.2.x. Two local-scale case studies were conducted to test the application of the tool, focusing on artificial reef siting. The results obtained from the case studies show that the tool can be successfully integrated within the site selection process in order to select objectively the optimal site for artificial reef deployment.

  4. Modeling and Numerical Challenges in Eulerian-Lagrangian Computations of Shock-driven Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Diggs, Angela; Balachandar, Sivaramakrishnan

    2015-06-01

    The present work addresses the numerical methods required for particle-gas and particle-particle interactions in Eulerian-Lagrangian simulations of multiphase flow. Local volume fraction as seen by each particle is the quantity of foremost importance in modeling and evaluating such interactions. We consider a general multiphase flow with a distribution of particles inside a fluid flow discretized on an Eulerian grid. Particle volume fraction is needed both as a Lagrangian quantity associated with each particle and also as an Eulerian quantity associated with the flow. In Eulerian Projection (EP) methods, the volume fraction is first obtained within each cell as an Eulerian quantity and then interpolated to each particle. In Lagrangian Projection (LP) methods, the particle volume fraction is obtained at each particle and then projected onto the Eulerian grid. Traditionally, EP methods are used in multiphase flow, but sub-grid resolution can be obtained through use of LP methods. By evaluating the total error and its components we compare the performance of EP and LP methods. The standard von Neumann error analysis technique has been adapted for rigorous evaluation of rate of convergence. The methods presented can be extended to obtain accurate field representations of other Lagrangian quantities. Most importantly, we will show that such careful attention to numerical methodologies is needed in order to capture complex shock interaction with a bed of particles. Supported by U.S. Department of Defense SMART Program and the U.S. Department of Energy PSAAP-II program under Contract No. DE-NA0002378.
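
    A hedged, one-dimensional sketch of the Lagrangian Projection (LP) idea described above: the volume fraction is evaluated at each particle and then projected onto an Eulerian grid with a simple linear (hat-function) kernel. The particle data and kernel choice are illustrative, not the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(4)
L, n_cells = 1.0, 32
dx = L / n_cells
x_p = rng.uniform(0.0, L, 500)           # hypothetical particle positions
v_p = np.full(x_p.size, 4.0e-5)          # hypothetical particle "volumes" (1-D)

phi = np.zeros(n_cells)                  # Eulerian volume-fraction field
for xp, vp in zip(x_p, v_p):
    i = int(xp / dx)                     # cell containing the particle
    w = (xp - i * dx) / dx               # linear weight toward the neighboring cell
    phi[i] += (1.0 - w) * vp / dx
    phi[min(i + 1, n_cells - 1)] += w * vp / dx   # boundary-safe deposit

print("mean volume fraction:", phi.mean())
```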

  5. Validation of a 30 m resolution flood hazard model of the conterminous United States

    NASA Astrophysics Data System (ADS)

    Wing, Oliver E. J.; Bates, Paul D.; Sampson, Christopher C.; Smith, Andrew M.; Johnson, Kris A.; Erickson, Tyler A.

    2017-09-01

    This paper reports the development of a ˜30 m resolution two-dimensional hydrodynamic model of the conterminous U.S. using only publicly available data. The model employs a highly efficient numerical solution of the local inertial form of the shallow water equations which simulates fluvial flooding in catchments down to 50 km2 and pluvial flooding in all catchments. Importantly, we use the U.S. Geological Survey (USGS) National Elevation Dataset to determine topography; the U.S. Army Corps of Engineers National Levee Dataset to explicitly represent known flood defenses; and global regionalized flood frequency analysis to characterize return period flows and rainfalls. We validate these simulations against the complete catalogue of Federal Emergency Management Agency (FEMA) Special Flood Hazard Area (SFHA) maps and detailed local hydraulic models developed by the USGS. Where the FEMA SFHAs are based on high-quality local models, the continental-scale model attains a hit rate of 86%. This correspondence improves in temperate areas and for basins above 400 km2. Against the higher quality USGS data, the average hit rate reaches 92% for the 1 in 100 year flood, and 90% for all flood return periods. Given typical hydraulic modeling uncertainties in the FEMA maps and USGS model outputs (e.g., errors in estimating return period flows), it is probable that the continental-scale model can replicate both to within error. The results show that continental-scale models may now offer sufficient rigor to inform some decision-making needs with dramatically lower cost and greater coverage than approaches based on a patchwork of local studies.
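
    A hedged sketch of the hit-rate skill score used in the validation described above: the fraction of benchmark wet cells that the continental-scale model also marks wet. The two boolean rasters below are hypothetical stand-ins for the FEMA/USGS and model flood extents:

```python
import numpy as np

rng = np.random.default_rng(5)
benchmark = rng.random((200, 200)) < 0.3               # hypothetical "truth" flood extent
model = benchmark ^ (rng.random((200, 200)) < 0.05)    # model extent with some disagreement

hits = np.logical_and(model, benchmark).sum()          # wet in both
misses = np.logical_and(~model, benchmark).sum()       # wet in benchmark only
hit_rate = hits / (hits + misses)
print(f"hit rate = {hit_rate:.2%}")
```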

  6. Resemblance profiles as clustering decision criteria: Estimating statistical power, error, and correspondence for a hypothesis test for multivariate structure.

    PubMed

    Kilborn, Joshua P; Jones, David L; Peebles, Ernst B; Naar, David F

    2017-04-01

    Clustering data continues to be a highly active area of data analysis, and resemblance profiles are being incorporated into ecological methodologies as a hypothesis testing-based approach to clustering multivariate data. However, these new clustering techniques have not been rigorously tested to determine the performance variability based on the algorithm's assumptions or any underlying data structures. Here, we use simulation studies to estimate the statistical error rates for the hypothesis test for multivariate structure based on dissimilarity profiles (DISPROF). We concurrently tested a widely used algorithm that employs the unweighted pair group method with arithmetic mean (UPGMA) to estimate the proficiency of clustering with DISPROF as a decision criterion. We simulated unstructured multivariate data from different probability distributions with increasing numbers of objects and descriptors, and grouped data with increasing overlap, overdispersion for ecological data, and correlation among descriptors within groups. Using simulated data, we measured the resolution and correspondence of clustering solutions achieved by DISPROF with UPGMA against the reference grouping partitions used to simulate the structured test datasets. Our results highlight the dynamic interactions between dataset dimensionality, group overlap, and the properties of the descriptors within a group (i.e., overdispersion or correlation structure) that are relevant to resemblance profiles as a clustering criterion for multivariate data. These methods are particularly useful for multivariate ecological datasets that benefit from distance-based statistical analyses. We propose guidelines for using DISPROF as a clustering decision tool that will help future users avoid potential pitfalls during the application of methods and the interpretation of results.
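
    A hedged sketch of the clustering machinery evaluated above: an ecological-style Bray-Curtis dissimilarity matrix clustered with UPGMA (average linkage). In the study, DISPROF supplies the decision criterion; here the dendrogram is simply cut at a fixed number of groups, and the data matrix is synthetic:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(9)
# 60 samples x 12 descriptors (e.g., abundances), three noisy groups (hypothetical)
centers = rng.uniform(0, 10, (3, 12))
X = np.vstack([c + rng.gamma(2.0, 1.0, (20, 12)) for c in centers])

d = pdist(X, metric="braycurtis")       # resemblance (dissimilarity) profile
Z = linkage(d, method="average")        # UPGMA
labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(labels)[1:])          # group sizes recovered by the clustering
```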

  7. Vector scattering analysis of TPF coronagraph pupil masks

    NASA Astrophysics Data System (ADS)

    Ceperley, Daniel P.; Neureuther, Andrew R.; Lieber, Michael D.; Kasdin, N. Jeremy; Shih, Ta-Ming

    2004-10-01

    Rigorous finite-difference time-domain electromagnetic simulation is used to simulate the scattering from prototypical pupil mask cross-section geometries and to quantify the differences from the normally assumed ideal on-off behavior. Shaped pupil plane masks are a promising technology for the TPF coronagraph mission. However, the stringent requirements placed on the optics require that the detailed behavior of the edge-effects of these masks be examined carefully. End-to-end optical system simulation is essential, and an important aspect is the polarization- and cross-section-dependent edge effects, which are the subject of this paper. Pupil plane masks are similar in many respects to photomasks used in the integrated circuit industry. Simulation capabilities such as the FDTD simulator, TEMPEST, developed for analyzing polarization and intensity imbalance effects in nonplanar phase-shifting photomasks, offer a leg-up in analyzing coronagraph masks. However, the accuracy in magnitude and phase required for modeling a coronagraph system is extremely demanding, and previously inconsequential errors may be of the same order of magnitude as the physical phenomena under study. In this paper, effects of thick masks, finite conductivity metals, and various cross-section geometries on the transmission of pupil-plane masks are illustrated. Undercutting the edge shape of Cr masks improves the effective opening width to within λ/5 of the actual opening, but TE and TM polarizations require opposite compensations. The deviation from ideal is examined at the reference plane of the mask opening. Numerical errors in TEMPEST, such as numerical dispersion, perfectly matched layer reflections, and source haze are also discussed along with techniques for mitigating their impacts.

  8. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
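
    A hedged sketch of ANOVA-based variance-component estimation under a balanced one-factor random-effects model: rows are patients, columns are fractions; random error comes from the within-patient mean square and systematic error from the excess of the between-patient mean square. The data are synthetic, not the note's clinical data:

```python
import numpy as np

rng = np.random.default_rng(6)
n_patients, n_fractions = 20, 5
sys_true, rand_true = 2.0, 3.0                        # hypothetical mm values
patient_means = rng.normal(0.0, sys_true, (n_patients, 1))
errors = patient_means + rng.normal(0.0, rand_true, (n_patients, n_fractions))

grand = errors.mean()
row_means = errors.mean(axis=1)
msb = n_fractions * np.sum((row_means - grand) ** 2) / (n_patients - 1)          # between-patient mean square
msw = np.sum((errors - row_means[:, None]) ** 2) / (n_patients * (n_fractions - 1))  # within-patient mean square

sigma_random = np.sqrt(msw)
sigma_systematic = np.sqrt(max((msb - msw) / n_fractions, 0.0))
print(f"random = {sigma_random:.2f} mm, systematic = {sigma_systematic:.2f} mm")
```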

  9. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.

  10. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
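
    A hedged sketch of the testing logic described above: a likelihood ratio test of a restricted model (e.g., a standard-Gumbel MNL) nested inside a more flexible semi-nonparametric alternative. The log-likelihood values and the number of extra parameters are hypothetical placeholders:

```python
from scipy.stats import chi2

llf_restricted = -2451.7    # hypothetical log-likelihood of the MNL with Gumbel errors
llf_flexible = -2438.2      # hypothetical log-likelihood of the nesting semi-nonparametric model
extra_params = 4            # additional distributional parameters in the flexible model

lr_stat = 2.0 * (llf_flexible - llf_restricted)
p_value = chi2.sf(lr_stat, df=extra_params)
print(f"LR = {lr_stat:.2f}, p = {p_value:.4f}")  # a small p-value rejects the Gumbel assumption
```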

  11. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  12. Evaluation of Two Computational Techniques of Calculating Multipath Using Global Positioning System Carrier Phase Measurements

    NASA Technical Reports Server (NTRS)

    Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.

    1996-01-01

    Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier wave phase measurements due to certain multipath-producing objects. The first is a rigorous computational electromagnetics technique called the Geometric Theory of Diffraction (GTD); the other is a simple ray tracing method. The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures. The ray tracing technique only solves for reflected signals. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code compared to the measured results better than the ray tracing technique. The agreement was good, demonstrating that the phase errors due to multipath can be modeled and characterized using the GTD technique and characterized to a lesser fidelity using the ray tracing (DECAT) technique. However, some discrepancies were observed. Most of the discrepancies occurred at lower elevations and were due either to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers, located near the GPS antennas, can produce phase shifts of greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.

  13. Digital morphogenesis via Schelling segregation

    NASA Astrophysics Data System (ADS)

    Barmpalias, George; Elwes, Richard; Lewis-Pye, Andrew

    2018-04-01

    Schelling’s model of segregation looks to explain the way in which particles or agents of two types may come to arrange themselves spatially into configurations consisting of large homogeneous clusters, i.e. connected regions consisting of only one type. As one of the earliest agent based models studied by economists and perhaps the most famous model of self-organising behaviour, it also has direct links to areas at the interface between computer science and statistical mechanics, such as the Ising model and the study of contagion and cascading phenomena in networks. While the model has been extensively studied, it has largely resisted rigorous analysis, with prior results from the literature generally pertaining to variants of the model which are tweaked so as to be amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory. Brandt et al (2012 Proc. 44th Annual ACM Symp. on Theory of Computing) provided the first rigorous analysis of the unperturbed model, for a specific set of input parameters. Here we provide a rigorous analysis of the model’s behaviour much more generally and establish some surprising forms of threshold behaviour, notably the existence of situations where an increased level of intolerance for neighbouring agents of opposite type leads almost certainly to decreased segregation.
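
    A hedged toy sketch of an unperturbed one-dimensional Schelling-type dynamics of the kind analysed above: two agent types on a ring, where a uniformly chosen pair of opposite types swaps whenever both agents are unhappy in their current neighbourhood. The neighbourhood size, tolerance, and swap rule are illustrative choices, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(7)
n, w, tau = 500, 10, 0.5               # agents, half-neighbourhood width, tolerance threshold
state = rng.integers(0, 2, n)          # two types, 0 and 1, on a ring

def happy(i):
    idx = np.arange(i - w, i + w + 1) % n
    same = np.sum(state[idx] == state[i]) - 1          # same-type neighbours, excluding self
    return same / (2 * w) >= tau

for _ in range(200_000):
    i, j = rng.integers(0, n, 2)
    if state[i] != state[j] and not happy(i) and not happy(j):
        state[i], state[j] = state[j], state[i]        # swap two unhappy agents of opposite type

# Segregation proxy: fraction of adjacent pairs of equal type
print(np.mean(state == np.roll(state, 1)))
```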

  14. Rigorous Free-Fermion Entanglement Renormalization from Wavelet Theory

    NASA Astrophysics Data System (ADS)

    Haegeman, Jutho; Swingle, Brian; Walter, Michael; Cotler, Jordan; Evenbly, Glen; Scholz, Volkher B.

    2018-01-01

    We construct entanglement renormalization schemes that provably approximate the ground states of noninteracting-fermion nearest-neighbor hopping Hamiltonians on the one-dimensional discrete line and the two-dimensional square lattice. These schemes give hierarchical quantum circuits that build up the states from unentangled degrees of freedom. The circuits are based on pairs of discrete wavelet transforms, which are approximately related by a "half-shift": translation by half a unit cell. The presence of the Fermi surface in the two-dimensional model requires a special kind of circuit architecture to properly capture the entanglement in the ground state. We show how the error in the approximation can be controlled without ever performing a variational optimization.

  15. Translating the short version of the Perinatal Grief Scale: process and challenges.

    PubMed

    Capitulo, K L; Cornelio, M A; Lenz, E R

    2001-08-01

    Non-English-speaking populations may be excluded from rigorous clinical research because of the lack of reliable and valid instrumentation to measure psychosocial variables. The purpose of this article is to describe the process of, and challenges in, translating a research instrument. The process is illustrated by the project of translating the Short Version of the Perinatal Grief Scale, which has been extensively studied in English-speaking, primarily Caucasian populations, into Spanish. Translation methods, errors, and tips are included. Tools cannot be used in transcultural research and practice without careful and accurate translation and subsequent psychometric evaluation, which are essential to generate credible and valid findings. Copyright 2001 by W.B. Saunders Company

  16. A variational regularization of Abel transform for GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. A noteworthy finding is that in the heights and areas that the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity deserting the first guess. In the lowest few kilometers that AI produces large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded based on the results presented in this study that VR offers a definite advantage over AI in the quality of refractivity.
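
    A hedged toy sketch of the variational idea described above: instead of directly inverting a forward operator F (here a simple cumulative-smoothing matrix standing in for the forward Abel transform), a cost is minimized that balances misfit to the measurement, weighted by its error covariance R, against departure from a background xb, weighted by B. The operators, covariances, and profiles are illustrative, not those of the study:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 60
F = np.tril(np.ones((n, n))) / n                  # stand-in forward (integral-like) operator
x_true = np.exp(-np.linspace(0, 3, n))            # hypothetical refractivity-like profile
R = np.diag(np.full(n, 0.02**2))                  # measurement error covariance (assumed)
B = np.diag(np.full(n, 0.3**2))                   # background error covariance (assumed)
y = F @ x_true + rng.normal(0.0, 0.02, n)         # noisy measurement
xb = x_true * 1.1                                 # biased background (first guess)

Ri, Bi = np.linalg.inv(R), np.linalg.inv(B)
def cost(x):
    dy, dx = F @ x - y, x - xb
    return 0.5 * (dy @ Ri @ dy + dx @ Bi @ dx)    # misfit term + background term

x_hat = minimize(cost, xb, method="L-BFGS-B").x
print("max abs error of regularized solution:", np.abs(x_hat - x_true).max())
```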

  17. Why Open-Ended Survey Questions Are Unlikely to Support Rigorous Qualitative Insights.

    PubMed

    LaDonna, Kori A; Taylor, Taryn; Lingard, Lorelei

    2018-03-01

    Health professions education researchers are increasingly relying on a combination of quantitative and qualitative research methods to explore complex questions in the field. This important and necessary development, however, creates new methodological challenges that can affect both the rigor of the research process and the quality of the findings. One example is "qualitatively" analyzing free-text responses to survey or assessment instrument questions. In this Invited Commentary, the authors explain why analysis of such responses rarely meets the bar for rigorous qualitative research. While the authors do not discount the potential for free-text responses to enhance quantitative findings or to inspire new research questions, they caution that these responses rarely produce data rich enough to generate robust, stand-alone insights. The authors consider exemplars from health professions education research and propose strategies for treating free-text responses appropriately.

  18. IMPROVING ALTERNATIVES FOR ENVIRONMENTAL IMPACT ASSESSMENT. (R825758)

    EPA Science Inventory

    Environmental impact assessment (EIA), in the US, requires an objective and rigorous analysis of alternatives. Yet the choice of alternatives for that analysis can be subjective and arbitrary. Alternatives often reflect narrow project objectives, agency agendas, and predilecti...

  19. FORMAL SCENARIO DEVELOPMENT FOR ENVIRONMENTAL IMPACT ASSESSMENT STUDIES

    EPA Science Inventory

    Scenario analysis is a process of evaluating possible future events through the consideration of alternative plausible (though not equally likely) outcomes (scenarios). The analysis is designed to enable improved decision-making and assessment through a more rigorous evaluation o...

  20. Haplotypic Analysis of Wellcome Trust Case Control Consortium Data

    PubMed Central

    Browning, Brian L.; Browning, Sharon R.

    2008-01-01

    We applied a recently developed multilocus association testing method (localized haplotype clustering) to Wellcome Trust Case Control Consortium data (14,000 cases of seven common diseases and 3,000 shared controls genotyped on the Affymetrix 500K array). After rigorous data quality filtering, we identified three disease-associated loci with strong statistical support from localized haplotype cluster tests but with only marginal significance in single marker tests. These loci are chromosomes 10p15.1 with type 1 diabetes (p = 5.1 × 10⁻⁹), 12q15 with type 2 diabetes (p = 1.9 × 10⁻⁷) and 15q26.2 with hypertension (p = 2.8 × 10⁻⁸). We also detected the association of chromosome 9p21.3 with type 2 diabetes (p = 2.8 × 10⁻⁸), although this locus did not pass our stringent genotype quality filters. The associations of 10p15.1 with type 1 diabetes and 9p21.3 with type 2 diabetes have both been replicated in other studies using independent data sets. Overall, localized haplotype cluster analysis had better success detecting disease associated variants than a previous single-marker analysis of imputed HapMap SNPs. We found that stringent application of quality score thresholds to genotype data substantially reduced false-positive results arising from genotype error. In addition, we demonstrate that it is possible to simultaneously phase 16,000 individuals genotyped on genome-wide data (450K markers) using the Beagle software package. PMID:18224336

  1. Statistical testing and power analysis for brain-wide association study.

    PubMed

    Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng

    2018-04-05

    The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, the multiple correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on the Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis testings using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need of non-parametric permutation to correct for multiple comparison, thus, it can efficiently tackle large datasets with high resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. An operational retrieval algorithm for determining aerosol optical properties in the ultraviolet

    NASA Astrophysics Data System (ADS)

    Taylor, Thomas E.; L'Ecuyer, Tristan S.; Slusser, James R.; Stephens, Graeme L.; Goering, Christian D.

    2008-02-01

    This paper describes a number of practical considerations concerning the optimization and operational implementation of an algorithm used to characterize the optical properties of aerosols across part of the ultraviolet (UV) spectrum. The algorithm estimates values of aerosol optical depth (AOD) and aerosol single scattering albedo (SSA) at seven wavelengths in the UV, as well as total column ozone (TOC) and wavelength-independent asymmetry factor (g) using direct and diffuse irradiances measured with a UV multifilter rotating shadowband radiometer (UV-MFRSR). A novel method for cloud screening the irradiance data set is introduced, as well as several improvements and optimizations to the retrieval scheme which yield a more realistic physical model for the inversion and increase the efficiency of the algorithm. Introduction of a wavelength-dependent retrieval error budget generated from rigorous forward model analysis as well as broadened covariances on the a priori values of AOD, SSA and g and tightened covariances of TOC allows sufficient retrieval sensitivity and resolution to obtain unique solutions of aerosol optical properties as demonstrated by synthetic retrievals. Analysis of a cloud screened data set (May 2003) from Panther Junction, Texas, demonstrates that the algorithm produces realistic values of the optical properties that compare favorably with pseudo-independent methods for AOD, TOC and calculated Ångstrom exponents. Retrieval errors of all parameters (except TOC) are shown to be negatively correlated to AOD, while the Shannon information content is positively correlated, indicating that retrieval skill improves with increasing atmospheric turbidity. When implemented operationally on more than thirty instruments in the Ultraviolet Monitoring and Research Program's (UVMRP) network, this retrieval algorithm will provide a comprehensive and internally consistent climatology of ground-based aerosol properties in the UV spectral range that can be used for both validation of satellite measurements as well as regional aerosol and ultraviolet transmission studies.

  3. Controlled source electromagnetic data analysis with seismic constraints and rigorous uncertainty estimation in the Black Sea

    NASA Astrophysics Data System (ADS)

    Gehrmann, R. A. S.; Schwalenberg, K.; Hölz, S.; Zander, T.; Dettmer, J.; Bialas, J.

    2016-12-01

    In 2014 an interdisciplinary survey was conducted as part of the German SUGAR project in the Western Black Sea targeting gas hydrate occurrences in the Danube Delta. Marine controlled source electromagnetic (CSEM) data were acquired with an inline seafloor-towed array (BGR), and a two-polarization horizontal ocean-bottom source and receiver configuration (GEOMAR). The CSEM data are co-located with high-resolution 2-D and 3-D seismic reflection data (GEOMAR). We present results from 2-D regularized inversion (MARE2DEM by Kerry Key), which provides a smooth model of the electrical resistivity distribution beneath the source and multiple receivers. The 2-D approach includes seafloor topography and structural constraints from seismic data. We estimate uncertainties from the regularized inversion and compare them to 1-D Bayesian inversion results. The probabilistic inversion for a layered subsurface treats the parameter values and the number of layers as unknown by applying reversible-jump Markov-chain Monte Carlo sampling. A non-diagonal data covariance matrix obtained from residual error analysis accounts for correlated errors. The resulting resistivity models show generally high resistivity values between 3 and 10 Ωm on average which can be partly attributed to depleted pore water salinities due to sea-level low stands in the past, and locally up to 30 Ωm which is likely caused by gas hydrates. At the base of the gas hydrate stability zone resistivities rise up to more than 100 Ωm which could be due to gas hydrate as well as a layer of free gas underneath. However, the deeper parts also show the largest model parameter uncertainties. Archie's Law is used to derive estimates of the gas hydrate saturation, which vary between 30 and 80% within the anomalous layers considering salinity and porosity profiles from a distant DSDP bore hole.
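
    A hedged sketch of the final saturation step mentioned above: Archie's law gives the water saturation consistent with an observed bulk resistivity, and gas hydrate saturation follows as its complement. The parameter values (a, m, n, porosity, pore-water resistivity) are illustrative placeholders, not the survey's calibration:

```python
import numpy as np

def hydrate_saturation(rho_bulk, rho_water=0.25, phi=0.55, a=1.0, m=2.0, n=2.0):
    """Gas hydrate saturation from Archie's law: S_w = (a*rho_w / (phi^m * rho_t))^(1/n)."""
    s_w = (a * rho_water / (phi**m * rho_bulk)) ** (1.0 / n)
    return 1.0 - np.clip(s_w, 0.0, 1.0)

for rho in (3.0, 10.0, 30.0):           # example bulk resistivities in ohm-m
    print(f"{rho} ohm-m -> {100 * hydrate_saturation(rho):.0f}% hydrate")
```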

  4. Development of rigor mortis is not affected by muscle volume.

    PubMed

    Kobayashi, M; Ikegaya, H; Takase, I; Hatanaka, K; Sakurada, K; Iwase, H

    2001-04-01

    There is a hypothesis suggesting that rigor mortis progresses more rapidly in small muscles than in large muscles. We measured rigor mortis as tension determined isometrically in rat musculus erector spinae that had been cut into muscle bundles of various volumes. The muscle volume did not influence either the progress or the resolution of rigor mortis, which contradicts the hypothesis. Differences in pre-rigor load on the muscles influenced the onset and resolution of rigor mortis in a few pairs of samples, but did not influence the time taken for rigor mortis to reach its full extent after death. Moreover, the progress of rigor mortis in this muscle was biphasic; this may reflect the early rigor of red muscle fibres and the late rigor of white muscle fibres.

  5. CSF analysis

    MedlinePlus

    ... A, Sancesario GM, Esposito Z, et al. Plasmin system of Alzheimer's disease: CSF analysis. J Neural Transm (Vienna). ...

  6. The quality of systematic reviews about interventions for refractive error can be improved: a review of systematic reviews.

    PubMed

    Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing

    2017-09-05

    Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), and (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 as reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for refractive error are of low methodological quality. Following widely accepted guidance, such as Cochrane or Institute of Medicine standards for conducting systematic reviews, would contribute to improved patient care and inform future research.

  7. Calculating the sensitivity and robustness of binding free energy calculations to force field parameters

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.

    2013-01-01

    Binding free energy calculations offer a thermodynamically rigorous method to compute protein-ligand binding, and they depend on empirical force fields with hundreds of parameters. We examined the sensitivity of computed binding free energies to the ligand’s electrostatic and van der Waals parameters. Dielectric screening and cancellation of effects between ligand-protein and ligand-solvent interactions reduce the parameter sensitivity of binding affinity by 65%, compared with interaction strengths computed in the gas-phase. However, multiple changes to parameters combine additively on average, which can lead to large changes in overall affinity from many small changes to parameters. Using these results, we estimate that random, uncorrelated errors in force field nonbonded parameters must be smaller than 0.02 e per charge, 0.06 Å per radius, and 0.01 kcal/mol per well depth in order to obtain 68% (one standard deviation) confidence that a computed affinity for a moderately-sized lead compound will fall within 1 kcal/mol of the true affinity, if these are the only sources of error considered. PMID:24015114

  8. Deriving quantitative dynamics information for proteins and RNAs using ROTDIF with a graphical user interface.

    PubMed

    Berlin, Konstantin; Longhini, Andrew; Dayie, T Kwaku; Fushman, David

    2013-12-01

    To facilitate rigorous analysis of molecular motions in proteins, DNA, and RNA, we present a new version of ROTDIF, a program for determining the overall rotational diffusion tensor from single- or multiple-field nuclear magnetic resonance relaxation data. We introduce four major features that expand the program's versatility and usability. The first feature is the ability to analyze, separately or together, (13)C and/or (15)N relaxation data collected at a single or multiple fields. A significant improvement in the accuracy compared to direct analysis of R2/R1 ratios, especially critical for analysis of (13)C relaxation data, is achieved by subtracting high-frequency contributions to relaxation rates. The second new feature is an improved method for computing the rotational diffusion tensor in the presence of biased errors, such as large conformational exchange contributions, that significantly enhances the accuracy of the computation. The third new feature is the integration of the domain alignment and docking module for relaxation-based structure determination of multi-domain systems. Finally, to improve accessibility to all the program features, we introduced a graphical user interface that simplifies and speeds up the analysis of the data. Written in Java, the new ROTDIF can run on virtually any computer platform. In addition, the new ROTDIF achieves an order of magnitude speedup over the previous version by implementing a more efficient deterministic minimization algorithm. We not only demonstrate the improvement in accuracy and speed of the new algorithm for synthetic and experimental (13)C and (15)N relaxation data for several proteins and nucleic acids, but also show that careful analysis required especially for characterizing RNA dynamics allowed us to uncover subtle conformational changes in RNA as a function of temperature that were opaque to previous analysis.

  9. Rate Coefficient for the (4)Heμ + CH4 Reaction at 500 K: Comparison between Theory and Experiment.

    PubMed

    Arseneau, Donald J; Fleming, Donald G; Li, Yongle; Li, Jun; Suleimanov, Yury V; Guo, Hua

    2016-03-03

    The rate constant for the H atom abstraction reaction from methane by the muonic helium atom, Heμ + CH4 → HeμH + CH3, is reported at 500 K and compared with theory, providing an important test of both the potential energy surface (PES) and reaction rate theory for the prototypical polyatomic CH5 reaction system. The theory used to characterize this reaction includes both variational transition-state (CVT/μOMT) theory (VTST) and ring polymer molecular dynamics (RPMD) calculations on a recently developed PES, which are compared as well with earlier calculations on different PESs for the H, D, and Mu + CH4 reactions, the latter, in particular, providing for a variation in atomic mass by a factor of 36. Though rigorous quantum calculations have been carried out for the H + CH4 reaction, these have not yet been extended to the isotopologues of this reaction (in contrast to H3), so it is important to provide tests of less rigorous theories in comparison with kinetic isotope effects measured by experiment. In this regard, the agreement between the VTST and RPMD calculations and experiment for the rate constant of the Heμ + CH4 reaction at 500 K is excellent, within 10% in both cases, which overlaps with experimental error.

  10. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students’ competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This may be caused by the various types of errors students make. Hence, this study aimed to identify students’ errors in solving TIMSS mathematical problems on the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students who made the most errors on the test indicators, selected from 34 eighth-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving the Applying-level problem, the type of error students made was operational errors. In addition, for the Reasoning-level problem, three types of errors were made: conceptual errors, operational errors, and principal errors. Meanwhile, analysis of the causes of students’ errors showed that students did not comprehend the mathematical problems given.

  11. Enabling quaternion derivatives: the generalized HR calculus

    PubMed Central

    Xu, Dongpo; Jahanchahi, Cyrus; Took, Clive C.; Mandic, Danilo P.

    2015-01-01

    Quaternion derivatives exist only for a very restricted class of analytic (regular) functions; however, in many applications, functions of interest are real-valued and hence not analytic, a typical case being the standard real mean square error objective function. The recent HR calculus is a step forward and provides a way to calculate derivatives and gradients of both analytic and non-analytic functions of quaternion variables; however, the HR calculus can become cumbersome in complex optimization problems due to the lack of rigorous product and chain rules, a consequence of the non-commutativity of quaternion algebra. To address this issue, we introduce the generalized HR (GHR) derivatives which employ quaternion rotations in a general orthogonal system and provide the left- and right-hand versions of the quaternion derivative of general functions. The GHR calculus also solves the long-standing problems of product and chain rules, mean-value theorem and Taylor's theorem in the quaternion field. At the core of the proposed GHR calculus is quaternion rotation, which makes it possible to extend the principle to other functional calculi in non-commutative settings. Examples in statistical learning theory and adaptive signal processing support the analysis. PMID:26361555
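
    The obstacle to ordinary product and chain rules is the non-commutativity of quaternion multiplication. The following minimal sketch (a hand-rolled Hamilton product, no quaternion library assumed) simply demonstrates that pq ≠ qp, which is the algebraic fact the GHR calculus is designed to work around.

    ```python
    import numpy as np

    def qmul(p, q):
        """Hamilton product of quaternions given as (w, x, y, z) arrays."""
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return np.array([
            pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw,
        ])

    p = np.array([0.0, 1.0, 0.0, 0.0])   # the unit quaternion i
    q = np.array([0.0, 0.0, 1.0, 0.0])   # the unit quaternion j
    print(qmul(p, q))   # i*j =  k -> [0, 0, 0,  1]
    print(qmul(q, p))   # j*i = -k -> [0, 0, 0, -1]
    ```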

  12. Enabling quaternion derivatives: the generalized HR calculus.

    PubMed

    Xu, Dongpo; Jahanchahi, Cyrus; Took, Clive C; Mandic, Danilo P

    2015-08-01

    Quaternion derivatives exist only for a very restricted class of analytic (regular) functions; however, in many applications, functions of interest are real-valued and hence not analytic, a typical case being the standard real mean square error objective function. The recent HR calculus is a step forward and provides a way to calculate derivatives and gradients of both analytic and non-analytic functions of quaternion variables; however, the HR calculus can become cumbersome in complex optimization problems due to the lack of rigorous product and chain rules, a consequence of the non-commutativity of quaternion algebra. To address this issue, we introduce the generalized HR (GHR) derivatives which employ quaternion rotations in a general orthogonal system and provide the left- and right-hand versions of the quaternion derivative of general functions. The GHR calculus also solves the long-standing problems of product and chain rules, mean-value theorem and Taylor's theorem in the quaternion field. At the core of the proposed GHR calculus is quaternion rotation, which makes it possible to extend the principle to other functional calculi in non-commutative settings. Examples in statistical learning theory and adaptive signal processing support the analysis.

  13. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
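
    As a small illustration of the kind of prediction such models produce, here is a keystroke-level model (KLM) estimate, the simplest member of the GOMS family. The operator times are commonly cited approximate values and the task breakdown is invented; a real analysis would calibrate both to the interface and user population under study.

    ```python
    # Commonly cited approximate KLM operator times (seconds); actual values
    # vary by user population and study.
    KLM = {
        "K": 0.28,   # press a key (average typist)
        "P": 1.10,   # point with a mouse
        "B": 0.10,   # press or release a mouse button
        "H": 0.40,   # move hands between keyboard and mouse
        "M": 1.35,   # mental preparation
    }

    def klm_time(sequence):
        """Predicted error-free execution time for a sequence of KLM operators."""
        return sum(KLM[op] for op in sequence)

    # Hypothetical task: point to a field, click it, type a 5-character code
    task = ["M", "H", "P", "B", "H"] + ["K"] * 5
    print(f"predicted task time: {klm_time(task):.2f} s")
    ```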

  14. Automated Design Space Exploration with Aspen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spafford, Kyle L.; Vetter, Jeffrey S.

    Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.
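
    The underlying idea can be sketched independently of the Aspen toolchain: express a performance model as an analytic cost function of design parameters and hand it to a nonlinear solver. The toy model below is invented purely for illustration; an Aspen-generated program would assemble terms like these from resource and parameter declarations.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def runtime(x):
        """Toy performance model, x = (tile_size, threads): compute time,
        memory traffic that shrinks with data reuse, and a synchronization
        overhead that grows with thread count."""
        tile, threads = x
        compute  = 1e12 / (threads * 1e9)      # seconds of floating-point work
        traffic  = (1e11 / tile) / 1e10        # seconds of memory movement
        overhead = 1e-4 * threads              # synchronization cost
        return compute + traffic + overhead

    res = minimize(runtime, x0=[8.0, 4.0],
                   bounds=[(1, 128), (1, 64)], method="L-BFGS-B")
    print("best design point (tile, threads):", res.x, "runtime (s):", res.fun)
    ```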

  15. Automated Design Space Exploration with Aspen

    DOE PAGES

    Spafford, Kyle L.; Vetter, Jeffrey S.

    2015-01-01

    Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.

  16. Digital coherent receiver based transmitter penalty characterization.

    PubMed

    Geisler, David J; Kaufmann, John E

    2016-12-26

    For optical communications links where receivers are signal-power-starved, such as through free space, it is important to design transmitters and receivers that can operate as close as practically possible to theoretical limits. A total system penalty is typically assessed in terms of how far the end-to-end bit-error rate (BER) is from these limits. It is desirable, but usually difficult, to determine the division of this penalty between the transmitter and receiver. This paper describes a new rigorous and computationally based method that isolates which portion of the penalty can be assessed against the transmitter. There are two basic parts to this approach: (1) use of a coherent optical receiver to perform frequency down-conversion of a transmitter's optical signal waveform to the electrical domain, preserving both optical field amplitude and phase information, and (2) software-based analysis of the digitized electrical waveform. The result is a single numerical metric that quantifies how close a transmitter's signal waveform is to the ideal, based on its BER performance with a perfect software-defined matched-filter receiver demodulator. The application of the proposed methodology to the waveform characterization of an optical burst-mode differential phase-shift keying (DPSK) transmitter is demonstrated experimentally.
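
    The "penalty" notion can be illustrated with a minimal sketch: compare the signal-to-noise ratio required by a measured waveform at a target BER against a theoretical reference curve. The reference used here is the standard differentially detected binary DPSK bit-error rate over an additive white Gaussian noise channel, BER = 0.5·exp(−Eb/N0); the 1.3 dB offset of the "measured" curve is an invented placeholder, not a result from the paper.

    ```python
    import numpy as np

    def dbpsk_ber(ebn0_db):
        """Theoretical BER for differentially detected binary DPSK over AWGN."""
        ebn0 = 10.0 ** (ebn0_db / 10.0)
        return 0.5 * np.exp(-ebn0)

    def required_ebn0_db(target_ber, ber_curve, grid=np.linspace(0, 20, 2001)):
        """Smallest Eb/N0 (dB) on the grid whose BER meets the target."""
        bers = np.array([ber_curve(x) for x in grid])
        return grid[np.argmax(bers <= target_ber)]

    target = 1e-9
    ideal = required_ebn0_db(target, dbpsk_ber)
    # Placeholder: a waveform that needs 1.3 dB more signal than the ideal curve
    measured = required_ebn0_db(target, lambda x: dbpsk_ber(x - 1.3))
    print(f"penalty at BER={target:g}: {measured - ideal:.2f} dB")
    ```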

  17. Full-dimensional quantum calculations of the dissociation energy, zero-point, and 10 K properties of H7+/D7+ clusters using an ab initio potential energy surface.

    PubMed

    Barragán, Patricia; Pérez de Tudela, Ricardo; Qu, Chen; Prosmiti, Rita; Bowman, Joel M

    2013-07-14

    Diffusion Monte Carlo (DMC) and path-integral Monte Carlo computations of the vibrational ground state and 10 K equilibrium state properties of the H7(+)/D7(+) cations are presented, using an ab initio full-dimensional potential energy surface. The DMC zero-point energies of the dissociated fragments H5(+)(D5(+)) + H2(D2) are also calculated, and from these results and the electronic dissociation energy, dissociation energies, D0, of 752 ± 15 and 980 ± 14 cm(-1) are reported for H7(+) and D7(+), respectively. Due to the known error in the electronic dissociation energy of the potential surface, these quantities are underestimated by roughly 65 cm(-1). These values are rigorously determined for the first time, and compared with previous theoretical estimates from electronic structure calculations using standard harmonic analysis, and available experimental measurements. Probability density distributions are also computed for the ground vibrational and 10 K state of H7(+) and D7(+). These are qualitatively described as a central H3(+)/D3(+) core surrounded by "solvent" H2/D2 molecules that nearly freely rotate.

  18. Filtering Meteoroid Flights Using Multiple Unscented Kalman Filters

    NASA Astrophysics Data System (ADS)

    Sansom, E. K.; Bland, P. A.; Rutten, M. G.; Paxman, J.; Towner, M. C.

    2016-11-01

    Estimator algorithms are immensely versatile and powerful tools that can be applied to any problem where a dynamic system can be modeled by a set of equations and where observations are available. A well designed estimator enables system states to be optimally predicted and errors to be rigorously quantified. Unscented Kalman filters (UKFs) and interactive multiple models can be found in methods from satellite tracking to self-driving cars. The luminous trajectory of the Bunburra Rockhole fireball was observed by the Desert Fireball Network in mid-2007. The recorded data set is used in this paper to examine the application of these two techniques as a viable approach to characterizing fireball dynamics. The nonlinear, single-body system of equations, used to model meteoroid entry through the atmosphere, is challenged by gross fragmentation events that may occur. The incorporation of the UKF within an interactive multiple model smoother provides a likely solution for when fragmentation events may occur as well as providing a statistical analysis of the state uncertainties. In addition to these benefits, another advantage of this approach is its automatability for use within an image processing pipeline to facilitate large fireball data analyses and meteorite recoveries.
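
    For readers unfamiliar with the machinery, the sketch below shows the scaled sigma-point construction and unscented transform at the core of a UKF, in plain numpy. The two-state system, tuning constants, and nonlinearity are arbitrary toy choices, not the meteoroid entry model used in the paper.

    ```python
    import numpy as np

    def merwe_sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=1.0):
        """Scaled sigma points and weights for the unscented transform.
        Tuning constants here are chosen for numerical clarity."""
        n = mean.size
        lam = alpha ** 2 * (n + kappa) - n
        sqrt_cov = np.linalg.cholesky((n + lam) * cov)
        points = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
        return points, wm, wc

    # Toy 2-state example (position, velocity); not meteoroid dynamics
    mean = np.array([0.0, 1.0])
    cov = np.diag([0.5, 0.1])
    pts, wm, wc = merwe_sigma_points(mean, cov)

    # Propagate the sigma points through a nonlinear model and recover
    # the predicted mean and covariance as weighted sums.
    f = lambda x: np.array([x[0] + 0.1 * x[1], x[1] - 0.01 * x[1] ** 2])
    fx = np.array([f(p) for p in pts])
    pred_mean = wm @ fx
    pred_cov = (fx - pred_mean).T @ np.diag(wc) @ (fx - pred_mean)
    print(pred_mean, "\n", pred_cov)
    ```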

  19. Rate dependent direct inverse hysteresis compensation of piezoelectric micro-actuator used in dual-stage hard disk drive head positioning system.

    PubMed

    Rahman, Md Arifur; Al Mamun, Abdullah; Yao, Kui

    2015-08-01

    The head positioning servo system in hard disk drive is implemented nowadays using a dual-stage actuator—the primary stage consisting of a voice coil motor actuator providing long range motion and the secondary stage controlling the position of the read/write head with fine resolution. Piezoelectric micro-actuator made of lead zirconate titanate (PZT) has been a popular choice for the secondary stage. However, PZT micro-actuator exhibits hysteresis—an inherent nonlinear characteristic of piezoelectric material. The advantage expected from using the secondary micro-actuator is somewhat lost by the hysteresis of the micro-actuator that contributes to tracking error. Hysteresis nonlinearity adversely affects the performance and, if not compensated, may cause inaccuracy and oscillation in the response. Compensation of hysteresis is therefore an important aspect for designing head-positioning servo system. This paper presents a new rate dependent model of hysteresis along with rigorous analysis and identification of the model. Parameters of the model are found using particle swarm optimization. Direct inverse of the proposed rate-dependent generalized Prandtl-Ishlinskii model is used as the hysteresis compensator. Effectiveness of the overall solution is underscored through experimental results.
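
    As background to the model used in the paper, here is a minimal sketch of the classical (rate-independent) Prandtl-Ishlinskii hysteresis model built from play operators. The rate-dependent generalized model in the paper extends this idea and identifies its parameters with particle swarm optimization; the thresholds, weights, and input signal below are placeholders.

    ```python
    import numpy as np

    def play_operator(x, r, y0=0.0):
        """Backlash (play) operator with threshold r applied to input sequence x."""
        y = np.empty_like(x)
        prev = y0
        for k, xk in enumerate(x):
            prev = max(xk - r, min(xk + r, prev))
            y[k] = prev
        return y

    def prandtl_ishlinskii(x, thresholds, weights):
        """Classical PI model: weighted superposition of play operators."""
        return sum(w * play_operator(x, r) for r, w in zip(thresholds, weights))

    # Placeholder model parameters (not the identified micro-actuator values)
    thresholds = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
    weights    = np.array([1.0, 0.5, 0.3, 0.2, 0.1])

    t = np.linspace(0, 2 * np.pi, 500)
    u = np.sin(t)                                     # normalized driving input
    y = prandtl_ishlinskii(u, thresholds, weights)    # hysteretic response
    print(y[:5])
    ```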

  20. Ten years on: a follow-up review of ERP research in attention-deficit/hyperactivity disorder.

    PubMed

    Johnstone, Stuart J; Barry, Robert J; Clarke, Adam R

    2013-04-01

    This article reviews the event-related potential (ERP) literature in relation to attention-deficit/hyperactivity disorder (AD/HD) over the years 2002-2012. ERP studies exploring various aspects of brain functioning in children and adolescents with AD/HD are reviewed, with a focus on group effects and interpretations in the domains of attention, inhibitory control, performance monitoring, non-pharmacological treatments, and ERP/energetics interactions. There has been a distinct shift in research intensity over the past 10 years, with a large increase in ERP studies conducted in the areas of inhibitory control and performance monitoring. Overall, the research has identified a substantial number of ERP correlates of AD/HD. Robust differences from healthy controls have been reported in early orienting, inhibitory control, and error-processing components. These data offer potential to improve our understanding of the specific brain dysfunction(s) which contribute to the disorder. The literature would benefit from a more rigorous approach to clinical group composition and consideration of age effects, as well as increased emphasis on replication and extension studies using exacting participant, task, and analysis parameters. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  1. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  2. Exploring the business case for ambulatory electronic health record system adoption.

    PubMed

    Song, Paula H; McAlearney, Ann Scheck; Robbins, Julie; McCullough, Jeffrey S

    2011-01-01

    Widespread implementation and use of electronic health record (EHR) systems has been recognized by healthcare leaders as a cornerstone strategy for systematically reducing medical errors and improving clinical quality. However, EHR adoption requires a significant capital investment for healthcare providers, and cost is often cited as a barrier. Despite the capital requirements, a true business case for EHR system adoption and implementation has not been made. This is of concern, as the lack of a business case can influence decision making about EHR investments. The purpose of this study was to examine the role of business case analysis in healthcare organizations' decisions to invest in ambulatory EHR systems, and to identify what factors organizations considered when justifying an ambulatory EHR. Using a qualitative case study approach, we explored how five organizations that are considered to have best practices in ambulatory EHR system implementation had evaluated the business case for EHR adoption. We found that although the rigor of formal business case analysis was highly variable, informants across these organizations consistently reported perceiving that a positive business case for EHR system adoption existed, especially when they considered both financial and non-financial benefits. While many consider EHR system adoption inevitable in healthcare, this viewpoint should not deter managers from conducting a business case analysis. Results of such an analysis can inform healthcare organizations' understanding about resource allocation needs, help clarify expectations about financial and clinical performance metrics to be monitored through EHR systems, and form the basis for ongoing organizational support to ensure successful system implementation.

  3. Tree crown mapping in managed woodlands (parklands) of semi-arid West Africa using WorldView-2 imagery and geographic object based image analysis.

    PubMed

    Karlson, Martin; Reese, Heather; Ostwald, Madelene

    2014-11-28

    Detailed information on tree cover structure is critical for research and monitoring programs targeting African woodlands, including agroforestry parklands. High spatial resolution satellite imagery represents a potentially effective alternative to field-based surveys, but requires the development of accurate methods to automate information extraction. This study presents a method for tree crown mapping based on Geographic Object Based Image Analysis (GEOBIA) that uses spectral and geometric information to detect and delineate individual tree crowns and crown clusters. The method was implemented on a WorldView-2 image acquired over the parklands of Saponé, Burkina Faso, and rigorously evaluated against field reference data. The overall detection rate was 85.4% for individual tree crowns and crown clusters, with lower accuracies in areas with high tree density and dense understory vegetation. The overall delineation error (expressed as the difference between area of delineated object and crown area measured in the field) was 45.6% for individual tree crowns and 61.5% for crown clusters. Delineation accuracies were higher for medium (35-100 m(2)) and large (≥100 m(2)) trees compared to small (<35 m(2)) trees. The results indicate the potential of GEOBIA and WorldView-2 imagery for tree crown mapping in parkland landscapes and similar woodland areas.
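
    The two accuracy metrics reported here are simple to compute once delineated objects are matched to field-measured crowns. A minimal sketch follows; the counts and areas are invented, not the study's data.

    ```python
    import numpy as np

    def detection_rate(n_detected, n_reference):
        """Share (%) of reference crowns matched by a delineated object."""
        return 100.0 * n_detected / n_reference

    def delineation_error(object_area, field_area):
        """Relative area difference (%) between delineated object and field crown."""
        return 100.0 * np.abs(object_area - field_area) / field_area

    # Invented example values (areas in m^2)
    object_area = np.array([42.0, 120.0, 18.0, 95.0])
    field_area  = np.array([35.0,  98.0, 30.0, 88.0])
    print(f"detection rate: {detection_rate(97, 114):.1f}%")
    print(f"mean delineation error: {delineation_error(object_area, field_area).mean():.1f}%")
    ```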

  4. Tree Crown Mapping in Managed Woodlands (Parklands) of Semi-Arid West Africa Using WorldView-2 Imagery and Geographic Object Based Image Analysis

    PubMed Central

    Karlson, Martin; Reese, Heather; Ostwald, Madelene

    2014-01-01

    Detailed information on tree cover structure is critical for research and monitoring programs targeting African woodlands, including agroforestry parklands. High spatial resolution satellite imagery represents a potentially effective alternative to field-based surveys, but requires the development of accurate methods to automate information extraction. This study presents a method for tree crown mapping based on Geographic Object Based Image Analysis (GEOBIA) that uses spectral and geometric information to detect and delineate individual tree crowns and crown clusters. The method was implemented on a WorldView-2 image acquired over the parklands of Saponé, Burkina Faso, and rigorously evaluated against field reference data. The overall detection rate was 85.4% for individual tree crowns and crown clusters, with lower accuracies in areas with high tree density and dense understory vegetation. The overall delineation error (expressed as the difference between area of delineated object and crown area measured in the field) was 45.6% for individual tree crowns and 61.5% for crown clusters. Delineation accuracies were higher for medium (35–100 m2) and large (≥100 m2) trees compared to small (<35 m2) trees. The results indicate the potential of GEOBIA and WorldView-2 imagery for tree crown mapping in parkland landscapes and similar woodland areas. PMID:25460815

  5. A Behavioral Model of Landscape Change in the Amazon Basin: The Colonist Case

    NASA Technical Reports Server (NTRS)

    Walker, R. A.; Drzyzga, S. A.; Li, Y. L.; Wi, J. G.; Caldas, M.; Arima, E.; Vergara, D.

    2004-01-01

    This paper presents the prototype of a predictive model capable of describing both magnitudes of deforestation and its spatial articulation into patterns of forest fragmentation. In a departure from other landscape models, it establishes an explicit behavioral foundation for algorithm development, predicated on notions of the peasant economy and on household production theory. It takes a 'bottom-up' approach, generating the process of land-cover change occurring at lot level together with the geography of a transportation system to describe regional landscape change. In other words, it translates the decentralized decisions of individual households into a collective, spatial impact. In so doing, the model unites the richness of survey research on farm households with the analytical rigor of spatial analysis enabled by geographic information systems (GIS). The paper describes earlier efforts at spatial modeling, provides a critique of the so-called spatially explicit model, and elaborates a behavioral foundation by considering farm practices of colonists in the Amazon basin. It then uses insight from the behavioral statement to motivate a GIS-based model architecture. The model is implemented for a long-standing colonization frontier in the eastern sector of the basin, along the Trans-Amazon Highway in the State of Para, Brazil. Results are subjected to both sensitivity analysis and error assessment, and suggestions are made about how the model could be improved.

  6. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    PubMed

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
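
    The emulator-based propagation step can be sketched in a few lines: fit a Gaussian process to model outputs evaluated at sampled parameter sets, then push posterior parameter samples through the cheap emulator to obtain a predictive distribution for an observable. The sketch uses scikit-learn with a synthetic stand-in model and a toy Gaussian "posterior"; it is not the Skyrme functional or the actual Bayesian machinery of the paper.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(1)

    # Synthetic stand-in for an expensive model: observable = f(theta1, theta2)
    def expensive_model(theta):
        return np.sin(theta[:, 0]) + 0.5 * theta[:, 1] ** 2

    # Design points in parameter space (DFT runs in the real application)
    theta_train = rng.uniform(-2, 2, size=(40, 2))
    y_train = expensive_model(theta_train)

    # Fit the Gaussian process emulator to the design runs
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=1.0),
                                  normalize_y=True)
    gp.fit(theta_train, y_train)

    # Propagate posterior parameter samples (here: a toy Gaussian posterior)
    theta_post = rng.multivariate_normal([0.3, -0.2], 0.05 * np.eye(2), size=5000)
    y_pred, y_std = gp.predict(theta_post, return_std=True)
    print(f"observable: {y_pred.mean():.3f} +/- {y_pred.std():.3f} (parameter uncertainty)")
    print(f"mean emulator uncertainty: {y_std.mean():.3f}")
    ```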

  7. Intranasal Pharmacokinetic Data for Triptans Such as Sumatriptan and Zolmitriptan Can Render Area Under the Curve (AUC) Predictions for the Oral Route: Strategy Development and Application.

    PubMed

    Srinivas, Nuggehally R; Syed, Muzeeb

    2016-01-01

    A limited pharmacokinetic sampling strategy may be useful for predicting the area under the curve (AUC) for triptans and may have clinical utility as a prospective tool for prediction. Using appropriate intranasal pharmacokinetic data, a Cmax vs. AUC relationship was established by linear regression models for sumatriptan and zolmitriptan. The predictions of the AUC values were performed using published mean/median Cmax data and appropriate regression lines. The quotient of the observed and predicted values provided the fold-difference calculation. The mean absolute error (MAE), mean positive error (MPE), mean negative error (MNE), root mean square error (RMSE), correlation coefficient (r), and the goodness of the AUC fold prediction were used to evaluate the two triptans. Also, data from the mean concentration profiles at time points of 1 hour (sumatriptan) and 3 hours (zolmitriptan) were used for the AUC prediction. The Cmax vs. AUC models displayed excellent correlation for both sumatriptan (r = .9997; P < .001) and zolmitriptan (r = .9999; P < .001). Irrespective of the two triptans, the majority of the predicted AUCs (83%-85%) were within 0.76-1.25-fold difference using the regression model. Predictions of AUC values for sumatriptan or zolmitriptan using the concentration data that reflected the Tmax occurrence were close to the reported values. In summary, the Cmax vs. AUC models exhibited strong correlations for sumatriptan and zolmitriptan. The usefulness of the prediction of the AUC values was established by a rigorous statistical approach.
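
    The strategy reduces to a simple workflow: regress AUC on Cmax, predict AUC from new Cmax values, and score the predictions with fold-difference and error metrics. The sketch below uses synthetic (Cmax, AUC) pairs purely for illustration, not the published sumatriptan/zolmitriptan data.

    ```python
    import numpy as np

    # Synthetic (Cmax, AUC) pairs standing in for published intranasal data
    cmax = np.array([10.0, 15.0, 21.0, 26.0, 33.0])      # ng/mL
    auc  = np.array([58.0, 90.0, 122.0, 150.0, 195.0])   # ng*h/mL

    slope, intercept = np.polyfit(cmax, auc, 1)
    predict_auc = lambda c: slope * c + intercept

    # Hypothetical new Cmax values and the AUCs later observed for them
    new_cmax     = np.array([12.0, 24.0, 31.0])
    observed_auc = np.array([70.0, 140.0, 180.0])
    predicted = predict_auc(new_cmax)

    fold_difference = observed_auc / predicted
    mae  = np.mean(np.abs(observed_auc - predicted))
    rmse = np.sqrt(np.mean((observed_auc - predicted) ** 2))
    r    = np.corrcoef(cmax, auc)[0, 1]

    print("fold differences:", np.round(fold_difference, 2))
    print(f"MAE={mae:.1f}, RMSE={rmse:.1f}, r(Cmax, AUC)={r:.4f}")
    ```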

  8. Quality and rigor of the concept mapping methodology: a pooled study analysis.

    PubMed

    Rosas, Scott R; Kane, Mary

    2012-05-01

    The use of concept mapping in research and evaluation has expanded dramatically over the past 20 years. Researchers in academic, organizational, and community-based settings have applied concept mapping successfully without the benefit of systematic analyses across studies to identify the features of a methodologically sound study. Quantitative characteristics and estimates of quality and rigor that may guide for future studies are lacking. To address this gap, we conducted a pooled analysis of 69 concept mapping studies to describe characteristics across study phases, generate specific indicators of validity and reliability, and examine the relationship between select study characteristics and quality indicators. Individual study characteristics and estimates were pooled and quantitatively summarized, describing the distribution, variation and parameters for each. In addition, variation in the concept mapping data collection in relation to characteristics and estimates was examined. Overall, results suggest concept mapping yields strong internal representational validity and very strong sorting and rating reliability estimates. Validity and reliability were consistently high despite variation in participation and task completion percentages across data collection modes. The implications of these findings as a practical reference to assess the quality and rigor for future concept mapping studies are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Using qualitative mixed methods to study small health care organizations while maximising trustworthiness and authenticity.

    PubMed

    Phillips, Christine B; Dwan, Kathryn; Hepworth, Julie; Pearce, Christopher; Hall, Sally

    2014-11-19

    The primary health care sector delivers the majority of health care in western countries through small, community-based organizations. However, research into these healthcare organizations is limited by the time constraints and pressure facing them, and the concern by staff that research is peripheral to their work. We developed Q-RARA (Qualitative Rapid Appraisal, Rigorous Analysis) to study small primary health care organizations in a way that is efficient, acceptable to participants and methodologically rigorous. Q-RARA comprises a site visit, semi-structured interviews, structured and unstructured observations, photographs, floor plans, and social scanning data. Data were collected over the course of one day per site and the qualitative analysis was integrated and iterative. We found Q-RARA to be acceptable to participants and effective in collecting data on organizational function in multiple sites without disrupting the practice, while maintaining a balance between speed and trustworthiness. The Q-RARA approach is capable of providing a richly textured, rigorous understanding of the processes of the primary care practice while also allowing researchers to develop an organizational perspective. For these reasons the approach is recommended for use in small-scale organizations both within and outside the primary health care sector.

  10. Preserving pre-rigor meat functionality for beef patty production.

    PubMed

    Claus, J R; Sørheim, O

    2006-06-01

    Three methods were examined for preserving pre-rigor meat functionality in beef patties. Hot-boned semimembranosus muscles were processed as follows: (1) pre-rigor ground, salted, patties immediately cooked; (2) pre-rigor ground, salted and stored overnight; (3) pre-rigor injected with brine; and (4) post-rigor ground and salted. Raw patties contained 60% lean beef, 19.7% beef fat trim, 1.7% NaCl, 3.6% starch, and 15% water. Pre-rigor processing occurred at 3-3.5h postmortem. Patties made from pre-rigor ground meat had higher pH values; greater protein solubility; firmer, more cohesive, and chewier texture; and substantially lower cooking losses than the other treatments. Addition of salt was sufficient to reduce the rate and extent of glycolysis. Brine injection of intact pre-rigor muscles resulted in some preservation of the functional properties but not as pronounced as with salt addition to pre-rigor ground meat.

  11. Rigorous Performance Evaluation of Smartphone GNSS/IMU Sensors for ITS Applications

    PubMed Central

    Gikas, Vassilis; Perakis, Harris

    2016-01-01

    With the rapid growth in smartphone technologies and improvement in their navigation sensors, an increasing amount of location information is now available, opening the road to the provision of new Intelligent Transportation System (ITS) services. Current smartphone devices embody miniaturized Global Navigation Satellite System (GNSS), Inertial Measurement Unit (IMU) and other sensors capable of providing user position, velocity and attitude. However, it is hard to characterize their actual positioning and navigation performance capabilities due to the disparate sensor and software technologies adopted among manufacturers and the high influence of environmental conditions, and therefore, a unified certification process is missing. This paper presents the analysis results obtained from the assessment of two modern smartphones regarding their positioning accuracy (i.e., precision and trueness) capabilities (i.e., potential and limitations) based on a practical but rigorous methodological approach. Our investigation relies on the results of several vehicle tracking (i.e., cruising and maneuvering) tests realized through comparing smartphone-obtained trajectories and kinematic parameters to those derived using a high-end GNSS/IMU system and advanced filtering techniques. Performance testing is undertaken for the HTC One S (Android) and iPhone 5s (iOS). Our findings indicate that the deviation of the smartphone locations from ground truth (trueness) deteriorates by a factor of two in obscured environments compared to those derived in open sky conditions. Moreover, it appears that iPhone 5s produces relatively smaller and less dispersed error values compared to those computed for HTC One S. Also, the navigation solution of the HTC One S appears to adapt faster to changes in environmental conditions, suggesting a somewhat different data filtering approach for the iPhone 5s. Testing the accuracy of the accelerometer and gyroscope sensors for a number of maneuvering (speeding, turning, etc.) events reveals high consistency between smartphones, whereas the small deviations from ground truth verify their high potential even for critical ITS safety applications. PMID:27527187
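
    The trueness/precision distinction used here maps onto a simple computation once smartphone and reference positions are time-matched: trueness is the systematic deviation from the reference trajectory and precision is its dispersion. A minimal sketch with placeholder trajectories follows; it is not the study's processing chain.

    ```python
    import numpy as np

    def accuracy_metrics(phone_xy, reference_xy):
        """Horizontal trueness (mean deviation) and precision (std of deviation)
        for time-matched smartphone and reference (GNSS/IMU) positions."""
        err = np.linalg.norm(phone_xy - reference_xy, axis=1)   # per-epoch 2D error (m)
        return err.mean(), err.std()

    # Placeholder trajectories (metres in a local frame), not the study's data
    rng = np.random.default_rng(3)
    reference = np.cumsum(rng.normal(0, 1, size=(100, 2)), axis=0)
    phone_open    = reference + rng.normal(1.5, 1.0, size=(100, 2))   # open sky
    phone_obscure = reference + rng.normal(3.0, 2.0, size=(100, 2))   # obscured

    for label, traj in [("open sky", phone_open), ("obscured", phone_obscure)]:
        trueness, precision = accuracy_metrics(traj, reference)
        print(f"{label:9s}: trueness={trueness:.1f} m, precision={precision:.1f} m")
    ```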

  12. Rigorous Performance Evaluation of Smartphone GNSS/IMU Sensors for ITS Applications.

    PubMed

    Gikas, Vassilis; Perakis, Harris

    2016-08-05

    With the rapid growth in smartphone technologies and improvement in their navigation sensors, an increasing amount of location information is now available, opening the road to the provision of new Intelligent Transportation System (ITS) services. Current smartphone devices embody miniaturized Global Navigation Satellite System (GNSS), Inertial Measurement Unit (IMU) and other sensors capable of providing user position, velocity and attitude. However, it is hard to characterize their actual positioning and navigation performance capabilities due to the disparate sensor and software technologies adopted among manufacturers and the high influence of environmental conditions, and therefore, a unified certification process is missing. This paper presents the analysis results obtained from the assessment of two modern smartphones regarding their positioning accuracy (i.e., precision and trueness) capabilities (i.e., potential and limitations) based on a practical but rigorous methodological approach. Our investigation relies on the results of several vehicle tracking (i.e., cruising and maneuvering) tests realized through comparing smartphone obtained trajectories and kinematic parameters to those derived using a high-end GNSS/IMU system and advanced filtering techniques. Performance testing is undertaken for the HTC One S (Android) and iPhone 5s (iOS). Our findings indicate that the deviation of the smartphone locations from ground truth (trueness) deteriorates by a factor of two in obscured environments compared to those derived in open sky conditions. Moreover, it appears that iPhone 5s produces relatively smaller and less dispersed error values compared to those computed for HTC One S. Also, the navigation solution of the HTC One S appears to adapt faster to changes in environmental conditions, suggesting a somewhat different data filtering approach for the iPhone 5s. Testing the accuracy of the accelerometer and gyroscope sensors for a number of maneuvering (speeding, turning, etc.,) events reveals high consistency between smartphones, whereas the small deviations from ground truth verify their high potential even for critical ITS safety applications.

  13. Arnold diffusion in the planar elliptic restricted three-body problem: mechanism and numerical verification

    NASA Astrophysics Data System (ADS)

    Capiński, Maciej J.; Gidea, Marian; de la Llave, Rafael

    2017-01-01

    We present a diffusion mechanism for time-dependent perturbations of autonomous Hamiltonian systems introduced in Gidea (2014 arXiv:1405.0866). This mechanism is based on shadowing of pseudo-orbits generated by two dynamics: an ‘outer dynamics’, given by homoclinic trajectories to a normally hyperbolic invariant manifold, and an ‘inner dynamics’, given by the restriction to that manifold. On the inner dynamics the only assumption is that it preserves area. Unlike other approaches, Gidea (2014 arXiv:1405.0866) does not rely on the KAM theory and/or Aubry-Mather theory to establish the existence of diffusion. Moreover, it does not require checking twist conditions or non-degeneracy conditions near resonances. The conditions are explicit and can be checked by finite precision calculations in concrete systems (roughly, they amount to checking that Melnikov-type integrals do not vanish and that some manifolds are transversal). As an application, we study the planar elliptic restricted three-body problem. We present a rigorous theorem that shows that if some concrete calculations yield a nonzero value, then for any sufficiently small, positive value of the eccentricity of the orbits of the main bodies, there are orbits of the infinitesimal body that exhibit a change of energy that is bigger than some fixed number, which is independent of the eccentricity. We verify these calculations numerically for values of the masses close to that of the Jupiter/Sun system. The numerical calculations are not completely rigorous, because we ignore issues of round-off error and do not estimate the truncations, but they are not delicate at all by the standards of numerical analysis. (Standard tests indicate that we get 7 or 8 figures of accuracy where 1 would be enough.) The code of these verifications is available. We hope that some full computer-assisted proofs will be obtained in the near future since there are packages (CAPD) designed for problems of this type.

  14. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

    Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. In our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
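
    The basic iteration is easy to sketch in one dimension: drive the inverse-consistency residual r = v + u(x + v) to zero with a feedback gain on the update. The real method operates on 3D DVFs and chooses the control adaptively from spectral measures of the displacement Jacobian; the constant gain and smooth placeholder field below are only a skeleton of that idea.

    ```python
    import numpy as np

    # 1D grid and a smooth forward displacement field u(x) (placeholder data)
    x = np.linspace(0, 1, 256)
    u = 0.05 * np.sin(2 * np.pi * x)            # forward DVF: y = x + u(x)

    def invert_dvf(u, x, mu=0.7, n_iter=50):
        """Fixed-point inversion with a constant feedback gain mu applied to
        the inverse-consistency residual r = v + u(x + v)."""
        v = np.zeros_like(u)
        for _ in range(n_iter):
            residual = v + np.interp(x + v, x, u)   # IC residual of current iterate
            v = v - mu * residual                   # feedback-controlled update
        return v, np.abs(residual).max()

    v, ic_residual = invert_dvf(u, x)
    print(f"max inverse-consistency residual: {ic_residual:.2e}")

    # Check: composing forward and inverse maps should return (close to) identity
    roundtrip = x + u + np.interp(x + u, x, v)
    print(f"max round-trip error: {np.abs(roundtrip - x).max():.2e}")
    ```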

  15. Multimorbidity and Patient Safety Incidents in Primary Care: A Systematic Review and Meta-Analysis

    PubMed Central

    Panagioti, Maria; Stokes, Jonathan; Esmail, Aneez; Coventry, Peter; Cheraghi-Sohi, Sudeh; Alam, Rahul; Bower, Peter

    2015-01-01

    Background Multimorbidity is increasingly prevalent and represents a major challenge in primary care. Patients with multimorbidity are potentially more likely to experience safety incidents due to the complexity of their needs and frequency of their interactions with health services. However, rigorous syntheses of the link between patient safety incidents and multimorbidity are not available. This review examined the relationship between multimorbidity and patient safety incidents in primary care. Methods We followed our published protocol (PROSPERO registration number: CRD42014007434). Medline, Embase and CINAHL were searched up to May 2015. Study design and quality were assessed. Odds ratios (OR) and 95% confidence intervals (95% CIs) were calculated for the associations between multimorbidity and two categories of patient safety outcomes: ‘active patient safety incidents’ (such as adverse drug events and medical complications) and ‘precursors of safety incidents’ (such as prescription errors, medication non-adherence, poor quality of care and diagnostic errors). Meta-analyses using random effects models were undertaken. Results Eighty six relevant comparisons from 75 studies were included in the analysis. Meta-analysis demonstrated that physical-mental multimorbidity was associated with an increased risk for ‘active patient safety incidents’ (OR = 2.39, 95% CI = 1.40 to 3.38) and ‘precursors of safety incidents’ (OR = 1.69, 95% CI = 1.36 to 2.03). Physical multimorbidity was associated with an increased risk for active safety incidents (OR = 1.63, 95% CI = 1.45 to 1.80) but was not associated with precursors of safety incidents (OR = 1.02, 95% CI = 0.90 to 1.13). Statistical heterogeneity was high and the methodological quality of the studies was generally low. Conclusions The association between multimorbidity and patient safety is complex, and varies by type of multimorbidity and type of safety incident. Our analyses suggest that multimorbidity involving mental health may be a key driver of safety incidents, which has important implication for the design and targeting of interventions to improve safety. High quality studies examining the mechanisms of patient safety incidents in patients with multimorbidity are needed, with the goal of promoting effective service delivery and ameliorating threats to safety in this group of patients. PMID:26317435
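
    For readers unfamiliar with the pooling step behind the reported odds ratios, here is a minimal sketch of a DerSimonian-Laird random-effects meta-analysis of log odds ratios. The study-level estimates are invented placeholders, not data from this review.

    ```python
    import numpy as np

    def dersimonian_laird(log_or, var):
        """Random-effects pooled log-OR via the DerSimonian-Laird estimator."""
        w = 1.0 / var
        fixed = np.sum(w * log_or) / np.sum(w)
        q = np.sum(w * (log_or - fixed) ** 2)                # heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(log_or) - 1)) / c)         # between-study variance
        w_star = 1.0 / (var + tau2)
        pooled = np.sum(w_star * log_or) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, se

    # Invented study-level odds ratios and standard errors of log-OR
    or_i = np.array([1.8, 2.6, 1.4, 3.1, 2.0])
    se_i = np.array([0.30, 0.45, 0.25, 0.50, 0.35])
    pooled, se = dersimonian_laird(np.log(or_i), se_i ** 2)
    lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
    print(f"pooled OR = {np.exp(pooled):.2f} (95% CI {lo:.2f} to {hi:.2f})")
    ```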

  16. Evaluation of NMME temperature and precipitation bias and forecast skill for South Asia

    NASA Astrophysics Data System (ADS)

    Cash, Benjamin A.; Manganello, Julia V.; Kinter, James L.

    2017-08-01

    Systematic error and forecast skill for temperature and precipitation in two regions of Southern Asia are investigated using hindcasts initialized May 1 from the North American Multi-Model Ensemble. We focus on two contiguous but geographically and dynamically diverse regions: the Extended Indian Monsoon Rainfall (70-100E, 10-30 N) and the nearby mountainous area of Pakistan and Afghanistan (60-75E, 23-39 N). Forecast skill is assessed using the Sign test framework, a rigorous statistical method that can be applied to non-Gaussian variables such as precipitation and to different ensemble sizes without introducing bias. We find that models show significant systematic error in both precipitation and temperature for both regions. The multi-model ensemble mean (MMEM) consistently yields the lowest systematic error and the highest forecast skill for both regions and variables. However, we also find that the MMEM consistently provides a statistically significant increase in skill over climatology only in the first month of the forecast. While the MMEM tends to provide higher overall skill than climatology later in the forecast, the differences are not significant at the 95% level. We also find that MMEMs constructed with a relatively small number of ensemble members per model can equal or outperform MMEMs constructed with more members in skill. This suggests some ensemble members either provide no contribution to overall skill or even detract from it.
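
    The core of the sign-test idea can be sketched briefly: count the verification cases in which the model forecast beats the climatological forecast in absolute error, and test that count against a fair coin. The full framework handles issues such as serial correlation more carefully; the data below are synthetic.

    ```python
    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(7)

    # Synthetic verification data: observations, a model forecast, climatology
    obs   = rng.normal(0, 1, size=200)
    model = obs + rng.normal(0, 0.8, size=200)   # forecast with some skill
    clim  = np.zeros_like(obs)                   # climatological forecast

    wins = np.sum(np.abs(model - obs) < np.abs(clim - obs))
    n = obs.size
    # One-sided sign test: probability of at least this many wins if p = 0.5
    p_value = binom.sf(wins - 1, n, 0.5)
    print(f"model beats climatology in {wins}/{n} cases, sign-test p = {p_value:.3g}")
    ```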

  17. Reduced-cost second-order algebraic-diagrammatic construction method for excitation energies and transition moments

    NASA Astrophysics Data System (ADS)

    Mester, Dávid; Nagy, Péter R.; Kállay, Mihály

    2018-03-01

    A reduced-cost implementation of the second-order algebraic-diagrammatic construction [ADC(2)] method is presented. We introduce approximations by restricting virtual natural orbitals and natural auxiliary functions, which results, on average, in more than an order of magnitude speedup compared to conventional, density-fitting ADC(2) algorithms. The present scheme is the successor of our previous approach [D. Mester, P. R. Nagy, and M. Kállay, J. Chem. Phys. 146, 194102 (2017)], which has been successfully applied to obtain singlet excitation energies with the linear-response second-order coupled-cluster singles and doubles model. Here we report further methodological improvements and the extension of the method to compute singlet and triplet ADC(2) excitation energies and transition moments. The various approximations are carefully benchmarked, and conservative truncation thresholds are selected which guarantee errors much smaller than the intrinsic error of the ADC(2) method. Using the canonical values as reference, we find that the mean absolute error for both singlet and triplet ADC(2) excitation energies is 0.02 eV, while that for oscillator strengths is 0.001 a.u. The rigorous cutoff parameters together with the significantly reduced operation count and storage requirements allow us to obtain accurate ADC(2) excitation energies and transition properties using triple-ζ basis sets for systems of up to one hundred atoms.
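
    The virtual natural orbital truncation at the heart of such cost reductions can be sketched as plain linear algebra: diagonalize the virtual-virtual block of a correlated one-particle density, discard natural orbitals whose occupation falls below a threshold, and transform subsequent steps into the reduced virtual space. The density matrix below is a random symmetric placeholder, not an actual MP2/ADC(2) density, and the threshold is arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_virt = 60

    # Placeholder for the virtual-virtual block of a correlated 1-particle density
    a = rng.normal(0, 1, size=(n_virt, n_virt))
    d_vv = 1e-3 * (a @ a.T)                       # symmetric positive semidefinite

    occ, no_coeff = np.linalg.eigh(d_vv)          # NO occupations and coefficients
    occ, no_coeff = occ[::-1], no_coeff[:, ::-1]  # sort by decreasing occupation

    threshold = 1e-2                              # truncation threshold (tunable)
    keep = occ > threshold
    c_keep = no_coeff[:, keep]                    # (n_virt, n_kept) transformation
    print(f"kept {keep.sum()} of {n_virt} virtual natural orbitals")
    # Subsequent correlated steps would be carried out in the span of c_keep.
    ```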

  18. The Role of Data Analysis Software in Graduate Programs in Education and Post-Graduate Research

    ERIC Educational Resources Information Center

    Harwell, Michael

    2018-01-01

    The importance of data analysis software in graduate programs in education and post-graduate educational research is self-evident. However, the role of this software in facilitating supererogatory statistical practice versus "cookbookery" is unclear. The need to rigorously document the role of data analysis software in students' graduate…

  19. Using Content Analysis to Examine the Verbal or Written Communication of Stakeholders within Early Intervention.

    ERIC Educational Resources Information Center

    Johnson, Lawrence J.; LaMontagne, M. J.

    1993-01-01

    This paper describes content analysis as a data analysis technique useful for examining written or verbal communication within early intervention. The article outlines the use of referential or thematic recording units derived from interview data, identifies procedural guidelines, and addresses issues of rigor and validity. (Author/JDD)

  20. Integrated Sensitivity Analysis Workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.

    2014-08-01

    Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.

  1. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This enables ROMs to be rigorously incorporated in uncertainty-quantification settings, as the error model can be treated as a source of epistemic uncertainty. This work was completed as part of a Truman Fellowship appointment. We note that much additional work was performed as part of the Fellowship. One salient project is the development of the Trilinos-based model-reduction software module Razor, which is currently bundled with the Albany PDE code and currently allows nonlinear reduced-order models to be constructed for any application supported in Albany. Other important projects include the following: 1. ROMES-equipped ROMs for Bayesian inference: K. Carlberg, M. Drohmann, F. Lu (Lawrence Berkeley National Laboratory), M. Morzfeld (Lawrence Berkeley National Laboratory). 2. ROM-enabled Krylov-subspace recycling: K. Carlberg, V. Forstall (University of Maryland), P. Tsuji, R. Tuminaro. 3. A pseudo balanced POD method using only dual snapshots: K. Carlberg, M. Sarovar. 4. An analysis of discrete v. continuous optimality in nonlinear model reduction: K. Carlberg, M. Barone, H. Antil (George Mason University). Journal articles for these projects are in progress at the time of this writing.
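
    As background to projection-based model reduction (and not the GNAT or ROMES machinery themselves), here is a minimal POD-Galerkin sketch: collect solution snapshots, take an SVD, keep the leading modes, and project a linear operator onto that basis. The full-order system and dimensions are toy stand-ins.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_snap, k = 500, 60, 8

    # Toy full-order model: snapshots from explicit Euler steps of x' = A x
    A = -np.diag(np.linspace(0.1, 5.0, n)) + 0.01 * rng.normal(size=(n, n))
    M = np.eye(n) + 0.01 * A                     # one explicit Euler step
    snaps = [rng.normal(size=n)]
    for _ in range(n_snap - 1):
        snaps.append(M @ snaps[-1])
    snapshots = np.column_stack(snaps)

    # POD basis: leading left singular vectors of the snapshot matrix
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = U[:, :k]                                 # reduced basis (n x k)
    energy = (s[:k] ** 2).sum() / (s ** 2).sum()

    # Galerkin projection of the full operator and initial condition
    A_r = V.T @ A @ V                            # k x k reduced operator
    x0_r = V.T @ snapshots[:, 0]
    print(f"reduced {n} -> {k} states, capturing {100 * energy:.1f}% of snapshot energy")
    ```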

  2. On the assessment of the added value of new predictive biomarkers.

    PubMed

    Chen, Weijie; Samuelson, Frank W; Gallas, Brandon D; Kang, Le; Sahiner, Berkman; Petrick, Nicholas

    2013-07-29

    The surge in biomarker development calls for research on statistical evaluation methodology to rigorously assess emerging biomarkers and classification models. Recently, several authors reported the puzzling observation that, in assessing the added value of new biomarkers to existing ones in a logistic regression model, statistical significance of new predictor variables does not necessarily translate into a statistically significant increase in the area under the ROC curve (AUC). Vickers et al. concluded that this inconsistency is because AUC "has vastly inferior statistical properties," i.e., it is extremely conservative. This statement is based on simulations that misuse the DeLong et al. method. Our purpose is to provide a fair comparison of the likelihood ratio (LR) test and the Wald test versus diagnostic accuracy (AUC) tests. We present a test to compare ideal AUCs of nested linear discriminant functions via an F test. We compare it with the LR test and the Wald test for the logistic regression model. The null hypotheses of these three tests are equivalent; however, the F test is an exact test whereas the LR test and the Wald test are asymptotic tests. Our simulation shows that the F test has the nominal type I error even with a small sample size. Our results also indicate that the LR test and the Wald test have inflated type I errors when the sample size is small, while the type I error converges to the nominal value asymptotically with increasing sample size as expected. We further show that the DeLong et al. method tests a different hypothesis and has the nominal type I error when it is used within its designed scope. Finally, we summarize the pros and cons of all four methods we consider in this paper. We show that there is nothing inherently less powerful or disagreeable about ROC analysis for showing the usefulness of new biomarkers or characterizing the performance of classification models. Each statistical method for assessing biomarkers and classification models has its own strengths and weaknesses. Investigators need to choose methods based on the assessment purpose, the biomarker development phase at which the assessment is being performed, the available patient data, and the validity of assumptions behind the methodologies.
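
    To make the comparison tangible, the sketch below contrasts the likelihood-ratio test for an added biomarker in nested logistic models with the change in apparent AUC, on synthetic data. It shows only the LR-test-versus-AUC part of the discussion, not the exact F test for nested linear discriminant functions proposed in the paper; data and effect sizes are invented.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    n = 2000
    x1 = rng.normal(size=n)                      # existing biomarker
    x2 = rng.normal(size=n)                      # candidate new biomarker
    logit = -0.5 + 1.0 * x1 + 0.3 * x2
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    base = sm.Logit(y, sm.add_constant(np.column_stack([x1]))).fit(disp=0)
    full = sm.Logit(y, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=0)

    # Likelihood-ratio test for the added biomarker (1 degree of freedom)
    lr_stat = 2 * (full.llf - base.llf)
    p_lr = chi2.sf(lr_stat, df=1)

    # Change in apparent AUC between the nested models
    auc_base = roc_auc_score(y, base.predict())
    auc_full = roc_auc_score(y, full.predict())
    print(f"LR test: stat={lr_stat:.1f}, p={p_lr:.2e}")
    print(f"AUC: {auc_base:.3f} -> {auc_full:.3f} (delta={auc_full - auc_base:.3f})")
    ```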

  3. Optical scatterometry of quarter-micron patterns using neural regression

    NASA Astrophysics Data System (ADS)

    Bischoff, Joerg; Bauer, Joachim J.; Haak, Ulrich; Hutschenreuther, Lutz; Truckenbrodt, Horst

    1998-06-01

    With shrinking dimensions and increasing chip areas, a rapid and non-destructive full wafer characterization after every patterning cycle is an inevitable necessity. In former publications it was shown that Optical Scatterometry (OS) has the potential to push the attainable feature limits of optical techniques from 0.8...0.5 microns for imaging methods down to 0.1 micron and below. Thus the demands of future metrology can be met. Basically being a nonimaging method, OS combines light scatter (or diffraction) measurements with modern data analysis schemes to solve the inverse scatter issue. For very fine patterns with lambda-to-pitch ratios greater than one, the specular reflected light versus the incidence angle is recorded. Usually, the data analysis comprises two steps -- a training cycle connected to a rigorous forward modeling and the prediction itself. Until now, two data analysis schemes have usually been applied -- the multivariate regression based Partial Least Squares method (PLS) and a look-up-table technique which is also referred to as the Minimum Mean Square Error approach (MMSE). Both methods are afflicted with serious drawbacks. On the one hand, the prediction accuracy of multivariate regression schemes degrades with larger parameter ranges due to the linearization properties of the method. On the other hand, look-up-table methods are rather time consuming during prediction, thus prolonging the processing time and reducing the throughput. An alternate method is an Artificial Neural Network (ANN) based regression which combines the advantages of multivariate regression and MMSE. Due to the versatility of a neural network, not only can its structure be adapted more properly to the scatter problem, but the nonlinearity of the neuronal transfer functions also mimics the nonlinear behavior of optical diffraction processes more adequately. In spite of these pleasant properties, the prediction speed of ANN regression is comparable with that of the PLS method. In this paper, the viability and performance of ANN regression will be demonstrated with the example of sub-quarter-micron resist metrology. To this end, 0.25 micrometer line/space patterns have been printed in positive photoresist by means of DUV projection lithography. In order to evaluate the total metrology chain from light scatter measurement through data analysis, a thorough modeling has been performed. Assuming a trapezoidal shape of the developed resist profile, a training data set was generated by means of the Rigorous Coupled Wave Approach (RCWA). After training the model, a second data set was computed and deteriorated by Gaussian noise to imitate real measuring conditions. Then, these data have been fed into the models established before, resulting in a Standard Error of Prediction (SEP) which corresponds to the measuring accuracy. Even with only little effort put into the design of a back-propagation network, the ANN is clearly superior to the PLS method. Depending on whether a network with one or two hidden layers was used, accuracy gains between 2 and 5 can be achieved compared with PLS regression. Furthermore, the ANN is less noise sensitive, for there is only a doubling of the SEP at 5% noise for the ANN whereas for PLS the accuracy degrades rapidly with increasing noise. The accuracy gain also depends on the light polarization and on the measured parameters. Finally, these results have been proven experimentally, where the OS results are in good accordance with the profiles obtained from cross-sectioning micrographs.
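
    A minimal sketch of the ANN-versus-PLS comparison, using scikit-learn on a synthetic nonlinear "signature" rather than RCWA-generated training data; the signature function, parameter range, and noise level below are invented for illustration, and only the standard-error-of-prediction (SEP) comparison mirrors the paper's evaluation.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)

        def signature(cd, angles):
            """Stand-in for an RCWA-computed angular reflectance curve (nonlinear in cd)."""
            return np.cos(4.0 * np.pi * cd[:, None] * np.cos(angles)) * np.exp(-cd[:, None])

        angles = np.linspace(0.1, 1.2, 40)          # incidence angles (rad), arbitrary grid
        cd_train = rng.uniform(0.15, 0.35, 400)     # "critical dimension" in microns
        cd_test = rng.uniform(0.15, 0.35, 100)

        X_train = signature(cd_train, angles)
        X_test = signature(cd_test, angles) + 0.05 * rng.standard_normal((100, len(angles)))  # 5% noise

        ann = MLPRegressor(hidden_layer_sizes=(30,), max_iter=5000, random_state=0)
        pls = PLSRegression(n_components=8)

        for name, model in [("ANN", ann), ("PLS", pls)]:
            model.fit(X_train, cd_train)
            pred = np.ravel(model.predict(X_test))
            sep = np.sqrt(np.mean((pred - cd_test) ** 2))   # standard error of prediction
            print(f"{name}: SEP = {sep * 1e3:.2f} nm")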

  4. 42 CFR 431.992 - Corrective action plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...

  5. 42 CFR 431.992 - Corrective action plan.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...

  6. A Meta-Analysis of Single-Subject Research on Behavioral Momentum to Enhance Success in Students with Autism.

    PubMed

    Cowan, Richard J; Abel, Leah; Candel, Lindsay

    2017-05-01

    We conducted a meta-analysis of single-subject research studies investigating the effectiveness of antecedent strategies grounded in behavioral momentum for improving compliance and on-task performance for students with autism. First, we assessed the research rigor of those studies meeting our inclusionary criteria. Next, in order to apply a universal metric to help determine the effectiveness of this category of antecedent strategies investigated via single-subject research methods, we calculated effect sizes via omnibus improvement rate differences (IRDs). Outcomes provide additional support for behavioral momentum, especially interventions incorporating the high-probability command sequence. Implications for research and practice are discussed, including the consideration of how single-subject research is systematically reviewed to assess the rigor of studies and assist in determining overall intervention effectiveness.

  7. A Research Communication Brief: Gluten Analysis in Beef Samples Collected Using a Rigorous, Nationally Representative Sampling Protocol Confirms That Grain-Finished Beef Is Naturally Gluten-Free.

    PubMed

    McNeill, Shalene H; Cifelli, Amy M; Roseland, Janet M; Belk, Keith E; Woerner, Dale R; Gehring, Kerri B; Savell, Jeffrey W; Brooks, J Chance; Thompson, Leslie D

    2017-08-25

    Knowing whether or not a food contains gluten is vital for the growing number of individuals with celiac disease and non-celiac gluten sensitivity. Questions have recently been raised about whether beef from conventionally-raised, grain-finished cattle may contain gluten. To date, basic principles of ruminant digestion have been cited in support of the prevailing expert opinion that beef is inherently gluten-free. For this study, gluten analysis was conducted in beef samples collected using a rigorous nationally representative sampling protocol to determine whether gluten was present. The findings of our research uphold the understanding of the principles of gluten digestion in beef cattle and corroborate recommendations that recognize beef as a naturally gluten-free food.

  8. Quantification of type I error probabilities for heterogeneity LOD scores.

    PubMed

    Abreu, Paula C; Hodge, Susan E; Greenberg, David A

    2002-02-01

    Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated the type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both the recombination fraction theta and the admixture parameter alpha, and we compared this with the P values when one maximizes only with respect to theta (i.e., the standard LOD score). We generated datasets of phase-known and -unknown nuclear families, sizes k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model, and maximizing the HLOD over theta and alpha; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution xi = (1/2) chi^2_1 + (1/2) chi^2_2 (an equal mixture of chi-square distributions with 1 and 2 degrees of freedom). Thus, maximizing the HLOD over theta and alpha appears to add considerably less than an additional degree of freedom to the associated chi^2_1 distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.
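
    For reference, the mixture bound quoted above can be evaluated directly; the sketch below converts an HLOD score to a likelihood-ratio statistic with the usual 2 ln(10) factor and applies the 50:50 chi-square mixture. This is a generic helper written for this summary, not the authors' simulation code.

        from math import log
        from scipy.stats import chi2

        def mixture_pvalue(hlod):
            """Upper-bound p-value for an HLOD under the one-sided mixture
            (1/2) chi^2_1 + (1/2) chi^2_2 discussed in the abstract."""
            stat = 2.0 * log(10.0) * hlod          # 2 * ln(10) * LOD converts to an LR statistic
            return 0.5 * chi2.sf(stat, df=1) + 0.5 * chi2.sf(stat, df=2)

        for score in (1.0, 2.0, 3.0):
            print(f"HLOD = {score}: mixture-bound p <= {mixture_pvalue(score):.2e}")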

  9. A rigorous analysis of digital pre-emphasis and DAC resolution for interleaved DAC Nyquist-WDM signal generation in high-speed coherent optical transmission systems

    NASA Astrophysics Data System (ADS)

    Weng, Yi; Wang, Junyi; He, Xuan; Pan, Zhongqi

    2018-02-01

    The Nyquist spectral shaping techniques facilitate a promising solution to enhance spectral efficiency (SE) and further reduce the cost-per-bit in high-speed wavelength-division multiplexing (WDM) transmission systems. Hypothetically, any Nyquist WDM signals with arbitrary shapes can be generated by the use of digital signal processing (DSP) based electrical filters (E-filters). Nonetheless, in actual 100G/200G coherent systems, the performance as well as the DSP complexity are increasingly restricted by cost and power consumption. Hence, it is indispensable to optimize the DSP to accomplish the preferred performance at the least complexity. In this paper, we systematically investigated the minimum requirements and challenges of Nyquist WDM signal generation, particularly for higher-order modulation formats, including 16 quadrature amplitude modulation (QAM) or 64QAM. A variety of interrelated parameters, such as channel spacing and roll-off factor, have been evaluated to optimize the requirements of the digital-to-analog converter (DAC) resolution and transmitter E-filter bandwidth. The impact of spectral pre-emphasis has been predominantly enhanced via the proposed interleaved DAC architecture, by at least 4%, hence reducing the required optical signal-to-noise ratio (OSNR) at a bit error rate (BER) of 10^-3 by over 0.45 dB at a channel spacing of 1.05 symbol rate and an optimized roll-off factor of 0.1. Furthermore, the requirements of sampling rate for different types of super-Gaussian E-filters are discussed for 64QAM Nyquist WDM transmission systems. Finally, the impact of the non-50% duty cycle error between sub-DACs upon the quality of the generated signals for the interleaved DAC structure has been analyzed.
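
    The DAC-resolution question can be illustrated with a toy calculation: quantize a 16QAM baseband signal with a b-bit uniform quantizer and report the quantization-limited SNR. This is only a rough stand-in for the paper's Nyquist-shaped, pre-emphasized waveforms; the full-scale setting and symbol count are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)

        # Random 16QAM symbols (unit average power), a stand-in for the shaped waveform.
        levels = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10.0)
        sym = rng.choice(levels, 4096) + 1j * rng.choice(levels, 4096)

        def quantize(x, bits, full_scale=1.2):
            """Uniform mid-rise quantizer applied to I and Q separately."""
            q = 2 * full_scale / (2 ** bits)
            clip = lambda v: np.clip(v, -full_scale, full_scale - q)
            return ((np.floor(clip(x.real) / q) + 0.5) * q
                    + 1j * (np.floor(clip(x.imag) / q) + 0.5) * q)

        for bits in (4, 5, 6, 8):
            err = quantize(sym, bits) - sym
            snr = 10 * np.log10(np.mean(np.abs(sym) ** 2) / np.mean(np.abs(err) ** 2))
            print(f"{bits}-bit DAC: quantization-limited SNR ~ {snr:.1f} dB")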

  10. Vitamin D intoxication due to an erroneously manufactured dietary supplement in seven children.

    PubMed

    Kara, Cengiz; Gunindi, Figen; Ustyol, Ala; Aydin, Murat

    2014-01-01

    Pediatric cases of vitamin D intoxication (VDI) with dietary supplements have not been previously reported. We report on 7 children with VDI caused by consumption of a fish oil supplement containing an excessively high dose of vitamin D due to a manufacturing error. Seven children aged between 0.7 and 4.2 years were admitted with symptoms of hypercalcemia. Initial median (range) serum concentrations of calcium and 25-hydroxyvitamin D were 16.5 (13.4-18.8) mg/dL and 620 (340-962) ng/mL, respectively. Repeated questioning of the parents revealed use of a fish oil that was produced recently by a local manufacturer. Analysis of the fish oil by gas chromatography/mass spectrometry revealed that the vitamin D3 content was ~4000 times the labeled concentration. Estimated daily amounts of vitamin D3 intake varied between 266,000 and 800,000 IU. Patients were successfully treated with intravenous hydration, furosemide, and pamidronate infusions. With treatment, serum calcium returned to the normal range within 3 days (range: 2-7 days). Serum 25-hydroxyvitamin D levels normalized within 2 to 3 months. Complications, including nephrocalcinosis, were not observed throughout the 1-year follow-up. In conclusion, errors in manufacturing of dietary supplements may be a cause of VDI in children. Physicians should be aware of this possibility in unexplained VDI cases and repeatedly question the families about dietary supplement use. To prevent the occurrence of such unintentional incidents, manufacturers must always monitor the levels of ingredients of their products and should be rigorously overseen by governmental regulatory agencies, as is done in the pharmaceutical industry.

  11. Identifying significant gene‐environment interactions using a combination of screening testing and hierarchical false discovery rate control

    PubMed Central

    Shen, Li; Saykin, Andrew J.; Williams, Scott M.; Moore, Jason H.

    2016-01-01

    ABSTRACT Although gene‐environment (G×E) interactions play an important role in many biological systems, detecting these interactions within genome‐wide data can be challenging due to the loss in statistical power incurred by multiple hypothesis correction. To address the challenge of poor power and the limitations of existing multistage methods, we recently developed a screening‐testing approach for G×E interaction detection that combines elastic net penalized regression with joint estimation to support a single omnibus test for the presence of G×E interactions. In our original work on this technique, however, we did not assess type I error control or power and evaluated the method using just a single, small bladder cancer data set. In this paper, we extend the original method in two important directions and provide a more rigorous performance evaluation. First, we introduce a hierarchical false discovery rate approach to formally assess the significance of individual G×E interactions. Second, to support the analysis of truly genome‐wide data sets, we incorporate a score statistic‐based prescreening step to reduce the number of single nucleotide polymorphisms prior to fitting the first stage penalized regression model. To assess the statistical properties of our method, we compare the type I error rate and statistical power of our approach with competing techniques using both simple simulation designs as well as designs based on real disease architectures. Finally, we demonstrate the ability of our approach to identify biologically plausible SNP‐education interactions relative to Alzheimer's disease status using genome‐wide association study data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). PMID:27578615

  12. Estimating cosmic velocity fields from density fields and tidal tensors

    NASA Astrophysics Data System (ADS)

    Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan

    2012-10-01

    In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory based upon an estimate of the linear component of the non-linear density field significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density regions but also, and more dramatically, in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher-order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter-free; it is independent of statistical-geometrical optimization; and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h^-1 Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field is extremely well recovered, showing a good agreement with the true one from N-body simulations. The typical errors of about 10 km s^-1 (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h^-1 Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior to the lognormal model in the low-density regime.

  13. Design and analysis of group-randomized trials in cancer: A review of current practices.

    PubMed

    Murray, David M; Pals, Sherri L; George, Stephanie M; Kuzmichev, Andrey; Lai, Gabriel Y; Lee, Jocelyn A; Myles, Ranell L; Nelson, Shakira M

    2018-06-01

    The purpose of this paper is to summarize current practices for the design and analysis of group-randomized trials involving cancer-related risk factors or outcomes and to offer recommendations to improve future trials. We searched for group-randomized trials involving cancer-related risk factors or outcomes that were published or online in peer-reviewed journals in 2011-15. During 2016-17, in Bethesda MD, we reviewed 123 articles from 76 journals to characterize their design and their methods for sample size estimation and data analysis. Only 66 (53.7%) of the articles reported appropriate methods for sample size estimation. Only 63 (51.2%) reported exclusively appropriate methods for analysis. These findings suggest that many investigators do not adequately attend to the methodological challenges inherent in group-randomized trials. These practices can lead to underpowered studies, to an inflated type 1 error rate, and to inferences that mislead readers. Investigators should work with biostatisticians or other methodologists familiar with these issues. Funders and editors should ensure careful methodological review of applications and manuscripts. Reviewers should ensure that studies are properly planned and analyzed. These steps are needed to improve the rigor and reproducibility of group-randomized trials. The Office of Disease Prevention (ODP) at the National Institutes of Health (NIH) has taken several steps to address these issues. ODP offers an online course on the design and analysis of group-randomized trials. ODP is working to increase the number of methodologists who serve on grant review panels. ODP has developed standard language for the Application Guide and the Review Criteria to draw investigators' attention to these issues. Finally, ODP has created a new Research Methods Resources website to help investigators, reviewers, and NIH staff better understand these issues. Published by Elsevier Inc.
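
    One of the methodological points at stake is the variance inflation that clustering introduces into sample size estimates. A minimal sketch of the standard design-effect adjustment is given below; the numbers are placeholders, not values from the reviewed trials.

        def grt_sample_size(n_individual, cluster_size, icc):
            """Inflate an individually randomized sample size by the design effect
            DEFF = 1 + (m - 1) * ICC for clusters of size m."""
            deff = 1.0 + (cluster_size - 1.0) * icc
            return n_individual * deff, deff

        n_total, deff = grt_sample_size(n_individual=400, cluster_size=25, icc=0.02)
        print(f"design effect = {deff:.2f}; inflated total N ~ {n_total:.0f}")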

  14. Optimal and robust control of transition

    NASA Technical Reports Server (NTRS)

    Bewley, T. R.; Agarwal, R.

    1996-01-01

    Optimal and robust control theories are used to determine feedback control rules that effectively stabilize a linearly unstable flow in a plane channel. Wall transpiration (unsteady blowing/suction) with zero net mass flux is used as the control. Control algorithms are considered that depend both on full flowfield information and on estimates of that flowfield based on wall skin-friction measurements only. The development of these control algorithms accounts for modeling errors and measurement noise in a rigorous fashion; these disturbances are considered in both a structured (Gaussian) and unstructured ('worst case') sense. The performance of these algorithms is analyzed in terms of the eigenmodes of the resulting controlled systems, and the sensitivity of individual eigenmodes to both control and observation is quantified.

  15. Cigarette smoking and lung cancer: a continuing controversy.

    PubMed

    Burch, P R

    1982-09-01

    During the late 1950s Sir Ronald Fisher questioned the already popular, but in his view precipitate, causal interpretation of the association between smoking and lung cancer. His pungently expressed views began a controversy that has smouldered and sometimes flared ever since. The most recent attack on Fisher's constitutional hypothesis was launched by Reif and in this paper I consider the validity of his criticisms. A range of evidence shows that it is not yet possible to distinguish between constitutional and causal-plus-constitutional interpretations although recent studies indicate that a pure causal hypothesis is incapable of explaining the full association as observed in Western populations. Unfortunately, errors of diagnosis and death certification still impede the rigorous testing of adequately formulated hypotheses.

  16. Tray estimates for low reflux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barna, B.A.; Ginn, R.F.

    1985-05-01

    In computer programs which perform shortcut calculations for multicomponent distillation, the Gilliland correlation continues to be used even though errors of up to 60% (compared with rigorous plate-to-plate calculations) were shown by Erbar and Maddox. Average absolute differences were approximately 30% for Gilliland's correlation versus 4% for the Erbar-Maddox method. The reason the Gilliland correlation continues to be used appears to be the availability of an equation by Eduljee, which facilitates the correlation's use in computer programs. A new equation is presented in this paper that represents the Erbar-Maddox correlation of trays with reflux for multicomponent distillation. At low reflux ratios, results show more trays are needed than would be estimated by Gilliland's method.
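
    For context, the Eduljee fit to the Gilliland correlation mentioned here is commonly quoted in the form sketched below; the constants are recalled from standard distillation references and should be checked against the original source before use. The Erbar-Maddox correlation itself is graphical and is not reproduced.

        def gilliland_eduljee(R, Rmin, Nmin):
            """Estimate theoretical stages N from the Eduljee fit to the Gilliland
            correlation: Y = 0.75 * (1 - X**0.5668), with
            X = (R - Rmin) / (R + 1) and Y = (N - Nmin) / (N + 1).
            Constants recalled from standard references; verify before relying on them."""
            X = (R - Rmin) / (R + 1.0)
            Y = 0.75 * (1.0 - X ** 0.5668)
            return (Y + Nmin) / (1.0 - Y)

        # Example: minimum 10 stages, Rmin = 1.2, operating reflux 1.3 * Rmin.
        print(f"estimated stages: {gilliland_eduljee(R=1.56, Rmin=1.2, Nmin=10):.1f}")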

  17. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory A.; Ingham, Michel D.; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    Innovative systems and software engineering solutions are required to meet the increasingly challenging demands of deep-space robotic missions. While recent advances in the development of an integrated systems and software engineering approach have begun to address some of these issues, they are still at the core highly manual and, therefore, error-prone. This paper describes a task aimed at infusing MIT's model-based executive, Titan, into JPL's Mission Data System (MDS), a unified state-based architecture, systems engineering process, and supporting software framework. Results of the task are presented, including a discussion of the benefits and challenges associated with integrating mature model-based programming techniques and technologies into a rigorously-defined domain specific architecture.

  18. Decoy-state quantum key distribution with biased basis choice

    PubMed Central

    Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng

    2013-01-01

    We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z basis with certain probabilities, and Bob measures received pulses with optimal basis choice. This scheme simplifies the system and reduces the random number consumption. From the simulation results, taking into account statistical fluctuations, we find that in a typical experimental setup, the proposed scheme can increase the key rate by at least 45% compared to the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of signal states. PMID:23948999
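
    For background, decoy-state analyses of this kind feed the estimated single-photon gain and phase error rate into the standard GLLP-style key rate lower bound, quoted here from the general decoy-state literature rather than from this paper:

        R \ge q \left\{ -Q_\mu f(E_\mu) H_2(E_\mu) + Q_1 \left[ 1 - H_2(e_1) \right] \right\},
        \qquad H_2(x) = -x \log_2 x - (1 - x) \log_2 (1 - x),

    where Q_mu and E_mu are the gain and error rate of the signal states, Q_1 and e_1 the gain and phase error rate of the single-photon contributions, f(E_mu) the error-correction inefficiency, and q the sifting factor that the biased basis choice is designed to push toward one.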

  19. Boundary control for a constrained two-link rigid-flexible manipulator with prescribed performance

    NASA Astrophysics Data System (ADS)

    Cao, Fangfei; Liu, Jinkun

    2018-05-01

    In this paper, we consider a boundary control problem for a constrained two-link rigid-flexible manipulator. The nonlinear system is described by hybrid ordinary differential equation-partial differential equation (ODE-PDE) dynamic model. Based on the coupled ODE-PDE model, boundary control is proposed to regulate the joint positions and eliminate the elastic vibration simultaneously. With the help of prescribed performance functions, the tracking error can converge to an arbitrarily small residual set and the convergence rate is no less than a certain pre-specified value. Asymptotic stability of the closed-loop system is rigorously proved by the LaSalle's Invariance Principle extended to infinite-dimensional system. Numerical simulations are provided to demonstrate the effectiveness of the proposed controller.

  20. Decoy-state quantum key distribution with biased basis choice.

    PubMed

    Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng

    2013-01-01

    We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z basis with certain probabilities, and Bob measures received pulses with optimal basis choice. This scheme simplifies the system and reduces the random number consumption. From the simulation results, taking into account statistical fluctuations, we find that in a typical experimental setup, the proposed scheme can increase the key rate by at least 45% compared to the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of signal states.

  1. Fast simulation of the NICER instrument

    NASA Astrophysics Data System (ADS)

    Doty, John P.; Wampler-Doty, Matthew P.; Prigozhin, Gregory Y.; Okajima, Takashi; Arzoumanian, Zaven; Gendreau, Keith

    2016-07-01

    The NICER mission uses a complicated physical system to collect information from objects that are, by x-ray timing science standards, rather faint. To get the most out of the data we will need a rigorous understanding of all instrumental effects. We are in the process of constructing a very fast, high-fidelity simulator that will help us to assess instrument performance, support simulation-based data reduction, and improve our estimates of measurement error. We will combine and extend existing optics, detector, and electronics simulations. We will employ the Compute Unified Device Architecture (CUDA) to parallelize these calculations. The price of suitable CUDA-compatible multi-gigaop cores is about $0.20/core, so this approach will be very cost-effective.

  2. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
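
    The low-rank behavior described above can be reproduced numerically for a toy linear time-invariant system by solving the filtering Riccati equation for the steady-state covariance and inspecting its eigen-spectrum. The sketch below uses scipy's discrete algebraic Riccati solver on made-up dynamics, not the advection or baroclinic wave models of the paper.

        import numpy as np
        from scipy.linalg import solve_discrete_are

        rng = np.random.default_rng(0)
        n, p = 60, 6                                   # state and observation dimensions

        # Stable linear dynamics, a sparse observation operator, and noise covariances.
        M = 0.95 * np.linalg.qr(rng.standard_normal((n, n)))[0]    # contraction (all |eig| = 0.95)
        H = np.zeros((p, n))
        H[np.arange(p), np.arange(0, n, n // p)] = 1.0
        Q = 0.01 * np.eye(n)                           # model (process) noise covariance
        R = 0.10 * np.eye(p)                           # observation noise covariance

        # Steady-state forecast covariance from the filtering Riccati equation (via duality).
        Pf = solve_discrete_are(M.T, H.T, Q, R)

        # Corresponding analysis covariance and its eigen-spectrum.
        S = H @ Pf @ H.T + R
        Pa = Pf - Pf @ H.T @ np.linalg.solve(S, H @ Pf)
        eigs = np.sort(np.linalg.eigvalsh(Pa))[::-1]
        print("fraction of variance in the 10 leading modes:", eigs[:10].sum() / eigs.sum())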

  3. Error-Analysis for Correctness, Effectiveness, and Composing Procedure.

    ERIC Educational Resources Information Center

    Ewald, Helen Rothschild

    The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…

  4. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
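
    A self-contained toy version of the idea (not INTLAB) is easy to write: propagate measurement bounds through a formula with interval arithmetic and compare the guaranteed enclosure with first-order error propagation. The Interval class and the voltage/current example below are invented for illustration.

        import math

        class Interval:
            """Minimal interval arithmetic: enough for * and / of positive quantities."""
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __mul__(self, other):
                ps = [self.lo * other.lo, self.lo * other.hi,
                      self.hi * other.lo, self.hi * other.hi]
                return Interval(min(ps), max(ps))
            def __truediv__(self, other):
                return self * Interval(1.0 / other.hi, 1.0 / other.lo)
            def __repr__(self):
                return f"[{self.lo:.6g}, {self.hi:.6g}]"

        # Resistance from measured voltage and current, R = V / I, with +/- uncertainties.
        V, dV = 5.00, 0.05
        I, dI = 0.210, 0.004

        # Interval result: guaranteed enclosure of R.
        R_int = Interval(V - dV, V + dV) / Interval(I - dI, I + dI)

        # First-order (standard) propagation for comparison.
        R = V / I
        dR = R * math.sqrt((dV / V) ** 2 + (dI / I) ** 2)

        print("interval enclosure:", R_int)
        print(f"first-order estimate: {R:.4g} +/- {dR:.2g}")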

  5. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2016-11-01

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
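
    A minimal simulation in the same spirit, written for this summary rather than taken from the paper: additive measurement error is added to the true response of a two-level comparison, and the power of a naive t-test is compared with the power obtained when repeat measurements are averaged. All parameter values are arbitrary.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_sim, n_per_arm, effect = 2000, 8, 1.0
        sigma_process, sigma_meas, repeats = 1.0, 1.5, 4
        power_single, power_repeat = 0, 0

        for _ in range(n_sim):
            lo = rng.normal(0.0, sigma_process, n_per_arm)        # true responses, factor low
            hi = rng.normal(effect, sigma_process, n_per_arm)     # true responses, factor high
            # Single noisy measurement per run vs the mean of several repeats.
            lo1 = lo + rng.normal(0, sigma_meas, n_per_arm)
            hi1 = hi + rng.normal(0, sigma_meas, n_per_arm)
            lor = lo + rng.normal(0, sigma_meas, (repeats, n_per_arm)).mean(axis=0)
            hir = hi + rng.normal(0, sigma_meas, (repeats, n_per_arm)).mean(axis=0)
            power_single += stats.ttest_ind(hi1, lo1).pvalue < 0.05
            power_repeat += stats.ttest_ind(hir, lor).pvalue < 0.05

        print("power, single measurement:", power_single / n_sim)
        print("power, averaged repeats:  ", power_repeat / n_sim)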

  6. The impact of response measurement error on the analysis of designed experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Scientific Data Analysis Toolkit: A Versatile Add-in to Microsoft Excel for Windows

    ERIC Educational Resources Information Center

    Halpern, Arthur M.; Frye, Stephen L.; Marzzacco, Charles J.

    2018-01-01

    Scientific Data Analysis Toolkit (SDAT) is a rigorous, versatile, and user-friendly data analysis add-in application for Microsoft Excel for Windows (PC). SDAT uses the familiar Excel environment to carry out most of the analytical tasks used in data analysis. It has been designed for student use in manipulating and analyzing data encountered in…

  8. On the Tracy-Widom β Distribution for β = 6

    NASA Astrophysics Data System (ADS)

    Grava, Tamara; Its, Alexander; Kapaev, Andrei; Mezzadri, Francesco

    2016-11-01

    We study the Tracy-Widom distribution function for Dyson's β-ensemble with β = 6. The starting point of our analysis is the recent work of I. Rumanov where he produces a Lax-pair representation for the Bloemendal-Virág equation. The latter is a linear PDE which describes the Tracy-Widom functions corresponding to general values of β. Using his Lax pair, Rumanov derives an explicit formula for the Tracy-Widom β = 6 function in terms of the second Painlevé transcendent and the solution of an auxiliary ODE. Rumanov also shows that this formula allows him to derive formally the asymptotic expansion of the Tracy-Widom function. Our goal is to make Rumanov's approach and hence the asymptotic analysis it provides rigorous. In this paper, the first one in a sequel, we show that Rumanov's Lax pair can be interpreted as a certain gauge transformation of the standard Lax pair for the second Painlevé equation. This gauge transformation though contains functional parameters which are defined via some auxiliary nonlinear ODE which is equivalent to the auxiliary ODE of Rumanov's formula. The gauge interpretation of Rumanov's Lax pair allows us to highlight the steps of Rumanov's original method which need rigorous justification in order to make the method complete. We provide a rigorous justification of one of these steps. Namely, we prove that the Painlevé function involved in Rumanov's formula is indeed, as has been suggested by Rumanov, the Hastings-McLeod solution of the second Painlevé equation. The key issue which we also discuss and which is still open is the question of integrability of the auxiliary ODE in Rumanov's formula. We note that this question is crucial for the rigorous asymptotic analysis of the Tracy-Widom function. We also notice that our work is a partial answer to one of the problems related to the β-ensembles formulated by Percy Deift during the June 2015 Montreal Conference on integrable systems.
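
    For reference, the second Painlevé equation and the Hastings-McLeod boundary condition identified in the paper are the standard ones:

        q''(x) = 2\,q(x)^3 + x\,q(x), \qquad q(x) \sim \operatorname{Ai}(x) \ \text{as } x \to +\infty,

    where Ai denotes the Airy function; the Hastings-McLeod solution is the unique solution of the equation with this asymptotic behavior.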

  9. Orbital State Uncertainty Realism

    NASA Astrophysics Data System (ADS)

    Horwood, J.; Poore, A. B.

    2012-09-01

    Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The *proper characterization of uncertainty* in the orbital state of a space object is a common requirement to many SSA functions including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments, such as air and missile defense, make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism"; the proper characterization of both the state and covariance and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state *probability density function (PDF)* is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs are formulated which more accurately characterize the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the *same computational cost as the traditional unscented Kalman filter* with the former able to maintain a proper characterization of the uncertainty for up to *ten times as long* as the latter. The filter correction step also furnishes a statistically rigorous *prediction error* which appears in the likelihood ratios for scoring the association of one report or observation to another. Thus, the new filter can be used to support multi-target tracking within a general multiple hypothesis tracking framework. Additionally, the new distribution admits a distance metric which extends the classical Mahalanobis distance (chi^2 statistic). This metric provides a test for statistical significance and facilitates single-frame data association methods with the potential to easily extend the covariance-based track association algorithm of Hill, Sabol, and Alfriend. The filtering, data fusion, and association methods using the new class of orbital state PDFs are shown to be mathematically tractable and operationally viable.
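
    The classical gate that the proposed metric generalizes is the Mahalanobis distance compared against a chi-square threshold; a minimal numpy sketch is below. It assumes Gaussian statistics, so it illustrates the baseline rather than the new "banana"-shaped distribution, and the residual and covariance values are made up.

        import numpy as np
        from scipy.stats import chi2

        def mahalanobis_gate(residual, covariance, prob=0.999):
            """Return the squared Mahalanobis distance of a track-to-observation residual
            and whether it falls inside the chi-square gate at the given probability."""
            d2 = float(residual @ np.linalg.solve(covariance, residual))
            gate = chi2.ppf(prob, df=residual.size)
            return d2, d2 <= gate

        # Toy 3D position residual (km) with its innovation covariance (km^2).
        r = np.array([1.2, -0.4, 0.9])
        S = np.diag([0.5, 0.5, 2.0])
        d2, inside = mahalanobis_gate(r, S)
        print(f"d^2 = {d2:.2f}, inside 99.9% gate: {inside}")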

  10. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  11. Cost-Effectiveness Analysis of Early Reading Programs: A Demonstration with Recommendations for Future Research

    ERIC Educational Resources Information Center

    Hollands, Fiona M.; Kieffer, Michael J.; Shand, Robert; Pan, Yilin; Cheng, Henan; Levin, Henry M.

    2016-01-01

    We review the value of cost-effectiveness analysis for evaluation and decision making with respect to educational programs and discuss its application to early reading interventions. We describe the conditions for a rigorous cost-effectiveness analysis and illustrate the challenges of applying the method in practice, providing examples of programs…

  12. Feeding Problems and Nutrient Intake in Children with Autism Spectrum Disorders: A Meta-Analysis and Comprehensive Review of the Literature

    ERIC Educational Resources Information Center

    Sharp, William G.; Berry, Rashelle C.; McCracken, Courtney; Nuhu, Nadrat N.; Marvel, Elizabeth; Saulnier, Celine A.; Klin, Ami; Jones, Warren; Jaquess, David L.

    2013-01-01

    We conducted a comprehensive review and meta-analysis of research regarding feeding problems and nutrient status among children with autism spectrum disorders (ASD). The systematic search yielded 17 prospective studies involving a comparison group. Using rigorous meta-analysis techniques, we calculated the standardized mean difference (SMD) with…

  13. Can power-law scaling and neuronal avalanches arise from stochastic dynamics?

    PubMed

    Touboul, Jonathan; Destexhe, Alain

    2010-02-11

    The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to change in detection threshold, or when tested using more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power-law is only apparent in logarithmic representations, and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal with no ambiguity that the avalanche size is distributed as a power-law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute a proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
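
    The core point can be reproduced with a short scipy experiment, sketched below with arbitrary parameters: a thresholded lognormal sample looks linear on log-log axes, yet a Kolmogorov-Smirnov test against the maximum-likelihood power-law fit rejects it. The KS p-value here is only approximate because the exponent is estimated from the same data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Lognormal "event sizes": not a power law, but heavy-tailed enough to look like one.
        x = rng.lognormal(mean=0.0, sigma=1.5, size=20000)
        xmin = np.quantile(x, 0.5)
        tail = np.sort(x[x >= xmin])

        # 1. Naive check: linear regression of the log-log survival function.
        surv = 1.0 - np.arange(len(tail)) / len(tail)
        slope, _, r, _, _ = stats.linregress(np.log(tail), np.log(surv))
        print(f"log-log fit: slope = {slope:.2f}, r^2 = {r ** 2:.3f}  (looks power-law-like)")

        # 2. Rigorous check: MLE exponent plus a KS test against the fitted Pareto tail.
        alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
        cdf = lambda v: 1.0 - (xmin / v) ** (alpha - 1.0)
        ks = stats.kstest(tail, cdf)
        print(f"KS test vs fitted power law: D = {ks.statistic:.3f}, p = {ks.pvalue:.2e}")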

  14. Understanding information exchange during disaster response: Methodological insights from infocentric analysis

    Treesearch

    Toddi A. Steelman; Branda Nowell; Deena Bayoumi; Sarah McCaffrey

    2014-01-01

    We leverage economic theory, network theory, and social network analytical techniques to bring greater conceptual and methodological rigor to understand how information is exchanged during disasters. We ask, "How can information relationships be evaluated more systematically during a disaster response?" "Infocentric analysis"—a term and...

  15. Driven and No Regrets: A Qualitative Analysis of Students Earning Baccalaureate Degrees in Three Years

    ERIC Educational Resources Information Center

    Firmin, Michael W.; Gilson, Krista Merrick

    2007-01-01

    Using rigorous qualitative research methodology, twenty-four college students receiving their undergraduate degrees in three years were interviewed. Following analysis of the semi-structured interview transcripts and coding, themes emerged, indicating that these students possessed self-discipline, self-motivation, and drive. Overall, the results…

  16. Gender, Discourse, and "Gender and Discourse."

    ERIC Educational Resources Information Center

    Davis, Hayley

    1997-01-01

    A critic of Deborah Tannen's book "Gender and Discourse" responds to comments made about her critique, arguing that the book's analysis of the relationship of gender and discourse tends to seek, and perhaps force, explanations only in those terms. Another linguist's analysis of similar phenomena is found to be more rigorous. (MSE)

  17. Exploration of the Maximum Entropy/Optimal Projection Approach to Control Design Synthesis for Large Space Structures.

    DTIC Science & Technology

    1985-02-01

    Maximum Entropy Stochastic Modelling and Reduced-Order Design Synthesis is a rigorous new approach to this class of problems. Inspired by Statistical Energy Analysis, a branch of dynamic modal analysis developed for analyzing acoustic vibration problems, its present stage of development embodies a...

  18. Evaluating Computer-Related Incidents on Campus

    ERIC Educational Resources Information Center

    Rothschild, Daniel; Rezmierski, Virginia

    2004-01-01

    The Computer Incident Factor Analysis and Categorization (CIFAC) Project at the University of Michigan began in September 2003 with grants from EDUCAUSE and the National Science Foundation (NSF). The project's primary goal is to create a best-practices security framework for colleges and universities based on rigorous quantitative analysis of…

  19. Prevention of a wrong-location misadministration through the use of an intradepartmental incident learning system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, Eric C.; Smith, Koren; Harris, Kendra

    2012-11-15

    Purpose: A series of examples are presented in which potential errors in the delivery of radiation therapy were prevented through use of incident learning. These examples underscore the value of reporting near-miss incidents. Methods: Using a departmental incident learning system, eight incidents were noted over a two-year period in which fields were treated 'out-of-sequence,' that is, fields from a boost phase were treated while the patient was still in the initial phase of treatment. As a result, an error-prevention policy was instituted in which radiation treatment fields are 'hidden' within the oncology information system (OIS) when they are not in current use. In this way, fields are only available to be treated in the intended sequence and, importantly, old fields cannot be activated at the linear accelerator control console. Results: No out-of-sequence treatments have been reported in more than two years since the policy change. Furthermore, at least three near-miss incidents were detected and corrected as a result of the policy change. In the first two, the policy operated as intended to directly prevent an error in field scheduling. In the third near-miss, the policy operated 'off target' to prevent a type of error scenario that it was not directly intended to prevent. In this incident, an incorrect digitally reconstructed radiograph (DRR) was scheduled in the OIS for a patient receiving lung cancer treatment. The incorrect DRR had an isocenter which was misplaced by approximately two centimeters. The error was a result of a field from an old plan being scheduled instead of the intended new plan. As a result of the policy described above, however, the DRR field could not be activated for treatment, and the error was discovered and corrected. Other quality control barriers in place would have been unlikely to have detected this error. Conclusions: In these examples, a policy was adopted based on incident learning, which prevented several errors, at least one of which was potentially severe. These examples underscore the need for a rigorous, systematic incident learning process within each clinic. The experiences reported in this technical note demonstrate the value of near-miss incident reporting to improve patient safety.

  20. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
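
    For readers unfamiliar with TOPSIS, the ranking mechanics behind the fuzzy variant can be sketched with a crisp decision matrix; the scores and weights below are made up, and the fuzzification step used in the paper is omitted.

        import numpy as np

        # Rows: candidate error factors; columns: evaluation criteria (all benefit-type here).
        X = np.array([[7.0, 6.0, 8.0, 5.0],
                      [6.0, 7.0, 6.0, 6.0],
                      [8.0, 5.0, 7.0, 7.0]])
        w = np.array([0.3, 0.2, 0.3, 0.2])             # criterion weights (sum to 1)

        V = w * X / np.linalg.norm(X, axis=0)          # weighted, vector-normalized matrix
        ideal, anti = V.max(axis=0), V.min(axis=0)     # positive/negative ideal solutions
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)            # relative closeness coefficient

        for i, c in enumerate(closeness):
            print(f"factor {i + 1}: closeness = {c:.3f}")
        print("ranking (best first):", list(np.argsort(-closeness) + 1))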

  1. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  2. Development of advanced methods for analysis of experimental data in diffusion

    NASA Astrophysics Data System (ADS)

    Jaques, Alonso V.

    There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method to estimate a differentiation operation on the data, i.e., to estimate the concentration gradient term, which is important in the analysis process for determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations shows an increased accuracy in the estimated diffusion coefficients. We present a regression approach to estimate linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. Reformulation of the equation for the analytical solution is done in order to reduce the size of the problem and accelerate the convergence. The objective function for the regression can incorporate point estimations for error in the concentration, improving the statistical confidence in the estimated diffusivity matrix. Case studies are presented to demonstrate the reliability and the stability of the method. To the best of our knowledge there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using a Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. To further analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we explore the use of fractional calculus to represent these processes analytically. We use the fractional calculus approach for anomalous diffusion processes occurring through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, and the governing equation is not necessarily a second-order differential equation. That is, differentiation is of fractional order alpha, where 1 ≤ alpha < 2. For alpha = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed using electrochemical permeation methods and the traditional, Fickian-based theory. Experimental evidence shows the latter analytical approach is not always appropriate, because reported data show qualitative (and quantitative) deviation from its theoretical scaling predictions. Preliminary analysis of data shows better agreement with fractional diffusion analysis when compared to traditional square-root scaling. Although there is a large amount of work on the estimation of the diffusivity from experimental data, reported studies typically present only the analytical description for the diffusivity, without scattering. However, because these studies do not consider effects produced by instrument analysis, their direct applicability is limited. We propose alternatives to address these issues and evaluate their influence on the final resulting diffusivity values.
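
    The classical Boltzmann-Matano evaluation that the regularization approach builds on can be sketched numerically: generate a synthetic diffusion-couple profile with a known constant diffusivity, then recover D(C) from the measured gradient and the running integral of x dC. The grid, anneal time, and diffusivity below are arbitrary, and the regularized, error-aware variant developed in the dissertation is not reproduced.

        import numpy as np
        from scipy.special import erfc
        from scipy.integrate import cumulative_trapezoid

        # Synthetic diffusion-couple profile generated with a constant true diffusivity.
        D_true, t = 1.0e-14, 3600.0                          # m^2/s, anneal time in s
        x = np.linspace(-30e-6, 30e-6, 2001)                 # distance from the Matano plane (m)
        C = 0.5 * erfc(x / (2.0 * np.sqrt(D_true * t)))      # concentration falls from 1 to 0

        dCdx = np.gradient(C, x)
        F = cumulative_trapezoid(x * dCdx, x, initial=0.0)   # running integral of x dC along x
        area = F - F[-1]                                     # integral of x dC from C = 0 up to C(x)

        # Boltzmann-Matano estimate D(C) = -(1/(2t)) (dx/dC) * integral, away from the tails.
        mask = (C > 0.05) & (C < 0.95)
        D_est = -area[mask] / (2.0 * t * dCdx[mask])
        print(f"true D = {D_true:.2e} m^2/s; "
              f"recovered D = {D_est.mean():.2e} +/- {D_est.std():.1e} m^2/s")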

  3. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  4. The effect of meteorological data on atmospheric pressure loading corrections in VLBI data analysis

    NASA Astrophysics Data System (ADS)

    Balidakis, Kyriakos; Glaser, Susanne; Karbon, Maria; Soja, Benedikt; Nilsson, Tobias; Lu, Cuixian; Anderson, James; Liu, Li; Andres Mora-Diaz, Julian; Raposo-Pulido, Virginia; Xu, Minghui; Heinkelmann, Robert; Schuh, Harald

    2015-04-01

    Earth's crustal deformation is a manifestation of numerous geophysical processes, which entail the atmosphere and ocean general circulation and tidal attraction, climate change, and the hydrological cycle. The present study deals with the elastic deformations induced by atmospheric pressure variations. At geodetic sites, APL (Atmospheric Pressure Loading) results in displacements covering a wide range of temporal scales, which is undesirable when rigorous geodetic/geophysical analysis is intended. Hence, it is of paramount importance that the APL signals are removed at the observation level in the space geodetic data analysis. In this study, elastic non-tidal components of loading displacements were calculated in the local topocentric frame for all VLBI (Very Long Baseline Interferometry) stations with respect to the center-of-figure of the solid Earth surface and the center-of-mass of the total Earth system. The response of the Earth to the load variation at the surface was computed by convolving Farrell Green's function with the homogenized in situ surface pressure observations (in the time span 1979-2014) after the subtraction of the reference pressure and the S1, S2 and S3 thermal tidal signals. The reference pressure was calculated through a hypsometric adjustment of the absolute pressure level determined from World Meteorological Organization stations in the vicinity of each VLBI observatory. The tidal contribution was calculated following the 2010 International Earth Rotation and Reference Systems Service conventions. Afterwards, this approach was implemented in the VLBI software VieVS@GFZ and the entirety of available VLBI sessions was analyzed. We rationalize our new approach on the basis that the potential error budget is substantially reduced, since several common errors are not applicable in our approach, e.g. those due to the finite resolution of NWM (Numerical Weather Models), the accuracy of the orography model necessary for adjusting the former as well as the inconsistencies between them, and the interpolation scheme which yields the elastic deformations. Differences in the resulting TRF (Terrestrial Reference Frame) determinations and other products derived from VLBI analysis between the approach followed here and the one employing NWM data to obtain the input pressure fields are illustrated. The providers of the atmospheric pressure loading models employed for our comparisons are GSFC/NASA, the University of Luxembourg, the University of Strasbourg, the Technical University of Vienna and GeoForschungsZentrum of Potsdam.

  5. A video multitracking system for quantification of individual behavior in a large fish shoal: advantages and limits.

    PubMed

    Delcourt, Johann; Becco, Christophe; Vandewalle, Nicolas; Poncin, Pascal

    2009-02-01

    The capability of a new multitracking system to track a large number of unmarked fish (up to 100) is evaluated. This system extrapolates a trajectory from each individual and analyzes recorded sequences that are several minutes long. This system is very efficient in statistical individual tracking, where the individual's identity is important for a short period of time in comparison with the duration of the track. Individual identification is typically greater than 99%. Identification is largely efficient (more than 99%) when the fish images do not cross the image of a neighbor fish. When the images of two fish merge (occlusion), we consider that the spot on the screen has a double identity. Consequently, there are no identification errors during occlusions, even though the measurement of the positions of each individual is imprecise. When the images of these two merged fish separate (separation), individual identification errors are more frequent, but their effect is very low in statistical individual tracking. On the other hand, in complete individual tracking, where individual fish identity is important for the entire trajectory, each identification error invalidates the results. In such cases, the experimenter must observe whether the program assigns the correct identification, and, when an error is made, must edit the results. This work is not too costly in time because it is limited to the separation events, accounting for fewer than 0.1% of individual identifications. Consequently, in both statistical and rigorous individual tracking, this system allows the experimenter to gain time by measuring the individual position automatically. It can also analyze the structural and dynamic properties of an animal group with a very large sample, with precision and sampling that are impossible to obtain with manual measures.

  6. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
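
    To make the Fisher-information-based criteria concrete, the sketch below evaluates a D-optimal criterion (maximize det F) and an SE-style criterion (minimize the sum of squared relative standard errors from the inverse of F) for two candidate sampling grids of the logistic model. The parameter values, noise level, and candidate designs are illustrative assumptions, not the paper's computations.

```python
# Hedged sketch: compare D-optimal and SE-style design criteria for sampling times
# of the logistic model, using a finite-difference Fisher information matrix.
import numpy as np

def logistic(t, theta):
    K, r, x0 = theta
    return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

def fisher_information(times, theta, sigma=1.0, h=1e-6):
    """FIM = (1/sigma^2) * S^T S, with sensitivities S from central differences."""
    S = np.zeros((len(times), len(theta)))
    for j in range(len(theta)):
        up, dn = np.array(theta, float), np.array(theta, float)
        up[j] += h * theta[j]; dn[j] -= h * theta[j]
        S[:, j] = (logistic(times, up) - logistic(times, dn)) / (2 * h * theta[j])
    return S.T @ S / sigma**2

theta = (17.5, 0.7, 0.1)                      # assumed K, r, x0
designs = {"uniform": np.linspace(0.5, 25, 15),
           "early-weighted": np.concatenate([np.linspace(0.5, 8, 10),
                                             np.linspace(10, 25, 5)])}
for name, times in designs.items():
    F = fisher_information(times, theta)
    cov = np.linalg.inv(F)
    d_crit = np.linalg.det(F)                           # D-optimal: maximize
    se_crit = np.sum(np.diag(cov) / np.square(theta))   # SE-style: minimize
    print(f"{name:15s} det(F)={d_crit:.3e}  sum squared rel. SE={se_crit:.3e}")
```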

  7. Experimental evaluation of rigor mortis. V. Effect of various temperatures on the evolution of rigor mortis.

    PubMed

    Krompecher, T

    1981-01-01

Objective measurements were carried out to study the evolution of rigor mortis in rats at various temperatures. Our experiments showed that: (1) at 6 degrees C rigor mortis reaches full development between 48 and 60 hours post mortem, and is resolved at 168 hours post mortem; (2) at 24 degrees C rigor mortis reaches full development at 5 hours post mortem, and is resolved at 16 hours post mortem; (3) at 37 degrees C rigor mortis reaches full development at 3 hours post mortem, and is resolved at 6 hours post mortem; (4) the intensity of rigor mortis grows with increasing temperature (difference between values obtained at 24 degrees C and 37 degrees C); and (5) at 6 degrees C a "cold rigidity" was found, in addition to and independent of rigor mortis.

  8. A Research Communication Brief: Gluten Analysis in Beef Samples Collected Using a Rigorous, Nationally Representative Sampling Protocol Confirms That Grain-Finished Beef Is Naturally Gluten-Free

    PubMed Central

    McNeill, Shalene H.; Cifelli, Amy M.; Roseland, Janet M.; Belk, Keith E.; Gehring, Kerri B.; Brooks, J. Chance; Thompson, Leslie D.

    2017-01-01

    Knowing whether or not a food contains gluten is vital for the growing number of individuals with celiac disease and non-celiac gluten sensitivity. Questions have recently been raised about whether beef from conventionally-raised, grain-finished cattle may contain gluten. To date, basic principles of ruminant digestion have been cited in support of the prevailing expert opinion that beef is inherently gluten-free. For this study, gluten analysis was conducted in beef samples collected using a rigorous nationally representative sampling protocol to determine whether gluten was present. The findings of our research uphold the understanding of the principles of gluten digestion in beef cattle and corroborate recommendations that recognize beef as a naturally gluten-free food. PMID:28841165

  9. Avoidable errors in deposited macromolecular structures: an impediment to efficient data mining.

    PubMed

    Dauter, Zbigniew; Wlodawer, Alexander; Minor, Wladek; Jaskolski, Mariusz; Rupp, Bernhard

    2014-05-01

    Whereas the vast majority of the more than 85 000 crystal structures of macromolecules currently deposited in the Protein Data Bank are of high quality, some suffer from a variety of imperfections. Although this fact has been pointed out in the past, it is still worth periodic updates so that the metadata obtained by global analysis of the available crystal structures, as well as the utilization of the individual structures for tasks such as drug design, should be based on only the most reliable data. Here, selected abnormal deposited structures have been analysed based on the Bayesian reasoning that the correctness of a model must be judged against both the primary evidence as well as prior knowledge. These structures, as well as information gained from the corresponding publications (if available), have emphasized some of the most prevalent types of common problems. The errors are often perfect illustrations of the nature of human cognition, which is frequently influenced by preconceptions that may lead to fanciful results in the absence of proper validation. Common errors can be traced to negligence and a lack of rigorous verification of the models against electron density, creation of non-parsimonious models, generation of improbable numbers, application of incorrect symmetry, illogical presentation of the results, or violation of the rules of chemistry and physics. Paying more attention to such problems, not only in the final validation stages but during the structure-determination process as well, is necessary not only in order to maintain the highest possible quality of the structural repositories and databases but most of all to provide a solid basis for subsequent studies, including large-scale data-mining projects. For many scientists PDB deposition is a rather infrequent event, so the need for proper training and supervision is emphasized, as well as the need for constant alertness of reason and critical judgment as absolutely necessary safeguarding measures against such problems. Ways of identifying more problematic structures are suggested so that their users may be properly alerted to their possible shortcomings.

  10. Avoidable errors in deposited macromolecular structures: an impediment to efficient data mining

    PubMed Central

    Dauter, Zbigniew; Wlodawer, Alexander; Minor, Wladek; Jaskolski, Mariusz; Rupp, Bernhard

    2014-01-01

    Whereas the vast majority of the more than 85 000 crystal structures of macromolecules currently deposited in the Protein Data Bank are of high quality, some suffer from a variety of imperfections. Although this fact has been pointed out in the past, it is still worth periodic updates so that the metadata obtained by global analysis of the available crystal structures, as well as the utilization of the individual structures for tasks such as drug design, should be based on only the most reliable data. Here, selected abnormal deposited structures have been analysed based on the Bayesian reasoning that the correctness of a model must be judged against both the primary evidence as well as prior knowledge. These structures, as well as information gained from the corresponding publications (if available), have emphasized some of the most prevalent types of common problems. The errors are often perfect illustrations of the nature of human cognition, which is frequently influenced by preconceptions that may lead to fanciful results in the absence of proper validation. Common errors can be traced to negligence and a lack of rigorous verification of the models against electron density, creation of non-parsimonious models, generation of improbable numbers, application of incorrect symmetry, illogical presentation of the results, or violation of the rules of chemistry and physics. Paying more attention to such problems, not only in the final validation stages but during the structure-determination process as well, is necessary not only in order to maintain the highest possible quality of the structural repositories and databases but most of all to provide a solid basis for subsequent studies, including large-scale data-mining projects. For many scientists PDB deposition is a rather infrequent event, so the need for proper training and supervision is emphasized, as well as the need for constant alertness of reason and critical judgment as absolutely necessary safeguarding measures against such problems. Ways of identifying more problematic structures are suggested so that their users may be properly alerted to their possible shortcomings. PMID:25075337

  11. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and that little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
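
    The propagation-of-error budget described above can be sketched in a few lines: the variance of the balance is the sum of the term variances plus covariance contributions, and the share attributable to each term follows directly. The numbers below are placeholders for illustration, not Skylab values.

```python
# Illustrative error-propagation budget for a balance of the form
# balance = intake - (urine + evaporation + fecal) - delta_body_mass.
import numpy as np

# standard errors of each daily term (kg) -- assumed for illustration
se = {"intake": 0.02, "urine": 0.015, "evap": 0.05, "fecal": 0.01, "dM": 0.15}
signs = {"intake": +1, "urine": -1, "evap": -1, "fecal": -1, "dM": -1}

# variance assuming independent terms
var_indep = sum(se[k]**2 for k in se)

# add a small assumed covariance between evaporation and body-mass change
cov_evap_dM = 0.2 * se["evap"] * se["dM"]
var_total = var_indep + 2 * signs["evap"] * signs["dM"] * cov_evap_dM

print(f"SE (independent terms): {np.sqrt(var_indep):.3f} kg")
print(f"SE (with covariance):   {np.sqrt(var_total):.3f} kg")
print(f"share from body mass:   {se['dM']**2 / var_total:.1%}")
```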

  12. A 2-D numerical simulation study on longitudinal solute transport and longitudinal dispersion coefficient

    NASA Astrophysics Data System (ADS)

    Zhang, Wei

    2011-07-01

The longitudinal dispersion coefficient, DL, is a fundamental parameter of longitudinal solute transport models: the advection-dispersion (AD) model and various deadzone models. Since DL cannot be measured directly, and since its calibration using tracer test data is quite expensive and not always available, researchers have developed various methods, theoretical or empirical, for estimating DL from more easily available cross-sectional hydraulic measurements (i.e., the transverse velocity profile, etc.). However, for known and unknown reasons, DL cannot be satisfactorily predicted using these theoretical/empirical formulae. Either there is very large prediction error for the theoretical methods, or there is a lack of generality for the empirical formulae. Here, numerical experiments using Mike21, a software package that implements one of the most rigorous two-dimensional hydrodynamic and solute transport equations, for longitudinal solute transport in hypothetical streams, are presented. An analysis of the evolution of simulated solute clouds indicates that the two fundamental assumptions in Fischer's longitudinal transport analysis may not be reasonable. The transverse solute concentration distribution, and hence the longitudinal transport, appears to be controlled by a dimensionless number ε, where Q is the average volumetric flowrate, Dt is a cross-sectional average transverse dispersion coefficient, and W is the channel flow width. A simple empirical ε relationship may be established. Analysis and a revision of Fischer's theoretical formula suggest that ε influences the efficiency of transverse mixing and hence has a restraining effect on longitudinal spreading. The findings presented here would improve and expand our understanding of longitudinal solute transport in open channel flow.

  13. Teamwork in the operating room: frontline perspectives among hospitals and operating room personnel.

    PubMed

    Sexton, J Bryan; Makary, Martin A; Tersigni, Anthony R; Pryor, David; Hendrich, Ann; Thomas, Eric J; Holzmueller, Christine G; Knight, Andrew P; Wu, Yun; Pronovost, Peter J

    2006-11-01

The Joint Commission on Accreditation of Healthcare Organizations is proposing that hospitals measure culture beginning in 2007. However, a reliable and widely used measurement tool for the operating room (OR) setting does not currently exist. OR personnel in 60 US hospitals were surveyed using the Safety Attitudes Questionnaire. The teamwork climate domain of the survey uses six items about difficulty speaking up, conflict resolution, physician-nurse collaboration, feeling supported by others, asking questions, and heeding nurse input. To justify grouping individual-level responses into a single score at each hospital OR level, the authors used a multilevel confirmatory factor analysis, intraclass correlations, within-group interrater reliability, and Cronbach's alpha. To detect differences at the hospital OR level and by caregiver type, the authors used multivariate analysis of variance (items) and analysis of variance (scale). The response rate was 77.1%. There was robust evidence for grouping individual-level respondents to the hospital OR level using the diverse set of statistical tests, e.g., Comparative Fit Index = 0.99, root mean squared error of approximation = 0.05, and acceptable intraclass correlations, within-group interrater reliability values, and Cronbach's alpha = 0.79. Teamwork climate differed significantly by hospital (F59, 1,911 = 4.06, P < 0.001) and OR caregiver type (F4, 1,911 = 9.96, P < 0.001). Rigorous assessment of teamwork climate is possible using this psychometrically sound teamwork climate scale. This tool and initial benchmarks allow others to compare their teamwork climate to national means, in an effort to focus more on what excellent surgical teams do well.
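
    One of the reliability checks named above, Cronbach's alpha, is simple to compute directly; the sketch below does so for a simulated 6-item scale. The data are invented for illustration and are unrelated to the survey results reported here.

```python
# Minimal sketch: Cronbach's alpha for a 6-item scale on simulated Likert responses.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))                      # shared climate factor
responses = np.clip(np.round(3 + latent + rng.normal(0, 0.8, (300, 6))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```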

  14. Treetrimmer: a method for phylogenetic dataset size reduction.

    PubMed

    Maruyama, Shinichiro; Eveleigh, Robert J M; Archibald, John M

    2013-04-12

    With rapid advances in genome sequencing and bioinformatics, it is now possible to generate phylogenetic trees containing thousands of operational taxonomic units (OTUs) from a wide range of organisms. However, use of rigorous tree-building methods on such large datasets is prohibitive and manual 'pruning' of sequence alignments is time consuming and raises concerns over reproducibility. There is a need for bioinformatic tools with which to objectively carry out such pruning procedures. Here we present 'TreeTrimmer', a bioinformatics procedure that removes unnecessary redundancy in large phylogenetic datasets, alleviating the size effect on more rigorous downstream analyses. The method identifies and removes user-defined 'redundant' sequences, e.g., orthologous sequences from closely related organisms and 'recently' evolved lineage-specific paralogs. Representative OTUs are retained for more rigorous re-analysis. TreeTrimmer reduces the OTU density of phylogenetic trees without sacrificing taxonomic diversity while retaining the original tree topology, thereby speeding up downstream computer-intensive analyses, e.g., Bayesian and maximum likelihood tree reconstructions, in a reproducible fashion.

  15. Six Common Mistakes in Conservation Priority Setting

    PubMed Central

    Game, Edward T; Kareiva, Peter; Possingham, Hugh P

    2013-01-01

A vast number of prioritization schemes have been developed to help conservation navigate tough decisions about the allocation of finite resources. However, the application of quantitative approaches to setting priorities in conservation frequently includes mistakes that can undermine their authors’ intention to be more rigorous and scientific in the way priorities are established and resources allocated. Drawing on well-established principles of decision science, we highlight 6 mistakes commonly associated with setting priorities for conservation: not acknowledging conservation plans are prioritizations; trying to solve an ill-defined problem; not prioritizing actions; arbitrariness; hidden value judgments; and not acknowledging risk of failure. We explain these mistakes and offer a path to help conservation planners avoid making the same mistakes in future prioritizations. PMID:23565990

  16. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). A global numerical weather prediction model, the Goddard Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
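
    The underestimation from self-analysis verification follows from the forecast and its own analysis sharing correlated errors; the toy simulation below (invented numbers, not GEOS-5 output) illustrates the effect.

```python
# Toy illustration: verifying a forecast against the "truth" (nature run) versus
# against its own analysis understates early forecast error because forecast and
# analysis share correlated errors.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
truth = rng.normal(size=n)                      # nature-run state
analysis_err = rng.normal(0, 0.5, n)
analysis = truth + analysis_err
# early-range forecast error is partly inherited from the analysis error
forecast = truth + 0.8 * analysis_err + rng.normal(0, 0.3, n)

rmse_vs_truth = np.sqrt(np.mean((forecast - truth) ** 2))
rmse_vs_self = np.sqrt(np.mean((forecast - analysis) ** 2))
print(f"RMSE vs nature run:    {rmse_vs_truth:.2f}")
print(f"RMSE vs self-analysis: {rmse_vs_self:.2f}  (underestimated)")
```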

  17. Academic Rigor in the College Classroom: Two Federal Commissions Strive to Define Rigor in the Past 70 Years

    ERIC Educational Resources Information Center

    Francis, Clay

    2018-01-01

    Historic notions of academic rigor usually follow from critiques of the system--we often define our goals for academically rigorous work through the lens of our shortcomings. This chapter discusses how the Truman Commission in 1947 and the Spellings Commission in 2006 shaped the way we think about academic rigor in today's context.

  18. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
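
    The "evaluate risk" step is often implemented as a simple likelihood-severity scoring and ranking; the sketch below shows one hypothetical way to encode it. The scales and example entries are invented, not taken from the presentation.

```python
# Hypothetical risk-evaluation sketch: score each identified human error by
# likelihood and severity and rank by the product.
from dataclasses import dataclass

@dataclass
class HumanError:
    action: str
    likelihood: int   # 1 (rare) .. 5 (frequent)       -- assumed scale
    severity: int     # 1 (minor) .. 5 (catastrophic)  -- assumed scale

    @property
    def risk(self) -> int:
        return self.likelihood * self.severity

errors = [
    HumanError("misread pressure gauge", 3, 4),
    HumanError("skip valve lineup verification", 2, 5),
    HumanError("transpose data entry digits", 4, 2),
]
for e in sorted(errors, key=lambda e: e.risk, reverse=True):
    print(f"{e.action:35s} risk={e.risk}")
```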

  19. Emergency cricothyrotomy for trismus caused by instantaneous rigor in cardiac arrest patients.

    PubMed

    Lee, Jae Hee; Jung, Koo Young

    2012-07-01

    Instantaneous rigor as muscle stiffening occurring in the moment of death (or cardiac arrest) can be confused with rigor mortis. If trismus is caused by instantaneous rigor, orotracheal intubation is impossible and a surgical airway should be secured. Here, we report 2 patients who had emergency cricothyrotomy for trismus caused by instantaneous rigor. This case report aims to help physicians understand instantaneous rigor and to emphasize the importance of securing a surgical airway quickly on the occurrence of trismus. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. The use of experimental data in an MTR-type nuclear reactor safety analysis

    NASA Astrophysics Data System (ADS)

    Day, Simon E.

Reactivity initiated accidents (RIAs) are a category of events required for research reactor safety analysis. A subset of this is unprotected RIAs, in which mechanical systems or human intervention are not credited in the response of the system. Light-water cooled and moderated MTR-type (i.e., aluminum-clad uranium plate fuel) reactors are self-limiting up to some reactivity insertion limit beyond which fuel damage occurs. This characteristic was studied in the BORAX and SPERT reactor tests of the 1950s and 1960s in the USA. This thesis considers the use of this experimental data in generic MTR-type reactor safety analysis. The approach presented herein is based on fundamental phenomenological understanding and uses correlations in the reactor test data with suitable account taken for differences in important system parameters. Specifically, a semi-empirical approach is used to quantify the relationship between the power, energy and temperature rise response of the system as well as parametric dependencies on void coefficient and the degree of subcooling. Secondary effects including the dependence on coolant flow are also examined. A rigorous curve fitting approach and error assessment is used to quantify the trends in the experimental data. In addition to the initial power burst stage of an unprotected transient, the longer term stability of the system is considered with a stylized treatment of characteristic power/temperature oscillations (chugging). A bridge from the HEU-based experimental data to the LEU fuel cycle is assessed and outlined based on existing simulation results presented in the literature. A cell-model based parametric study is included. The results are used to construct a practical safety analysis methodology for determining reactivity insertion safety limits for a light-water moderated and cooled MTR-type core.

  1. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    PubMed

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to count neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
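
    The contrast between exact interpolation and kernel smoothing can be illustrated on synthetic scattered count data; the hedged Python sketch below (the paper itself provides an R script) interpolates with griddata and then applies a Gaussian filter as a stand-in for kernel smoothing. All data and parameters are invented.

```python
# Sketch: exact interpolation versus Gaussian smoothing on synthetic count data.
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, (200, 2))                         # sampling sites (mm)
true_density = 500 + 300 * np.exp(-((pts[:, 0] - 5)**2 + (pts[:, 1] - 5)**2) / 4)
counts = true_density + rng.normal(0, 60, 200)             # counting noise

xi, yi = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
interp = griddata(pts, counts, (xi, yi), method="linear")  # respects raw data
filled = np.nan_to_num(interp, nan=float(np.nanmean(interp)))
smooth = gaussian_filter(filled, sigma=5)                  # suppresses noise

print("interpolated map std :", round(float(np.nanstd(interp)), 1))
print("smoothed map std     :", round(float(smooth.std()), 1), "(noise suppressed)")
```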

  2. Using the Origin and Pawn, Positive Affect, CASPM, and Cognitive Anxiety Content Analysis Scales in Counseling Research

    ERIC Educational Resources Information Center

    Viney, Linda L.; Caputi, Peter

    2005-01-01

    Content analysis scales apply rigorous measurement to verbal communications and make possible the quantification of text in counseling research. The limitations of the Origin and Pawn Scales (M. T. Westbrook & L. L. Viney, 1980), the Positive Affect Scale (M. T. Westbrook, 1976), the Content Analysis Scales of Psychosocial Maturity (CASPM; L.…

  3. Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.

    PubMed

    Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng

    2015-01-01

    Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
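
    For concreteness, a percentile-bootstrap test of the indirect effect in a simple X -> M -> Y mediation model can be sketched as below; the simulated data, path values, and sample size are assumptions chosen to sit in the 20-80 case range discussed above, and the snippet is not the authors' simulation code.

```python
# Minimal percentile-bootstrap test of the indirect effect a*b in a simple
# mediation model (X -> M -> Y); data are simulated, not from the paper.
import numpy as np

rng = np.random.default_rng(4)
n = 40                                        # a small sample, as discussed above
x = rng.normal(size=n)
m = 0.3 * x + rng.normal(size=n)              # path a = 0.3
y = 0.3 * m + rng.normal(size=n)              # path b = 0.3

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # slope of M on X
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]   # slope of Y on M, controlling X
    return a * b

boot = [indirect(*(arr[idx] for arr in (x, m, y)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```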

  4. Verifying and Validating Simulation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M.

    2015-02-23

This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  5. Camera calibration: active versus passive targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli

    2011-11-01

Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
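
    The role of mark-localization noise in the reprojection error can be illustrated with a small simulation: project a synthetic planar target into several views, add corner-localization noise, and calibrate. The sketch below uses OpenCV with invented camera parameters and is not the authors' evaluation pipeline.

```python
# Hypothetical sketch: simulate a planar target viewed from several poses, add
# corner-localization noise, calibrate with OpenCV, and report the RMS reprojection error.
import numpy as np
import cv2

rng = np.random.default_rng(0)

# 9x6 planar grid of 3-D points (z = 0), 30 mm spacing -- assumed values.
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 30.0

K_true = np.array([[800.0, 0, 640], [0, 800.0, 480], [0, 0, 1]])
dist_true = np.zeros(5)
img_size = (1280, 960)

obj_pts, img_pts = [], []
for i in range(12):                          # twelve synthetic views
    rvec = rng.uniform(-0.3, 0.3, 3)         # small random rotation
    tvec = np.array([-120.0 + 10 * i, -80.0, 600.0 + 20 * i])
    proj, _ = cv2.projectPoints(objp, rvec, tvec, K_true, dist_true)
    proj = proj.reshape(-1, 2) + rng.normal(0, 0.3, (54, 2))   # 0.3 px localization noise
    obj_pts.append(objp)
    img_pts.append(proj.astype(np.float32))

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, img_size, None, None)
print(f"RMS reprojection error: {rms:.3f} px")   # grows with localization noise
```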

  6. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    NASA Astrophysics Data System (ADS)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

A slow learner, whose IQ is between 71 and 89, will have difficulties in solving mathematics problems that often lead to errors. The errors can be analyzed for where they occur and their type. This research is a qualitative descriptive study that aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class when solving fraction problems. The subject of this research is one slow-learner seventh-grade student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. The data collection methods used in this study are written tasks and semi-structured interviews. The collected data were analyzed with Newman’s Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, such as concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by the slow learner.

  7. Study of the quality characteristics in cold-smoked salmon (Salmo salar) originating from pre- or post-rigor raw material.

    PubMed

    Birkeland, S; Akse, L

    2010-01-01

Improved slaughtering procedures in the salmon industry have caused a delayed onset of rigor mortis and, thus, a potential for pre-rigor secondary processing. The aim of this study was to investigate the effect of rigor status at the time of processing on quality traits (color, texture, sensory, and microbiological) in injection-salted and cold-smoked Atlantic salmon (Salmo salar). Injection of pre-rigor fillets caused a significant (P<0.001) contraction (-7.9% ± 0.9%) on the caudal-cranial axis. No significant differences in instrumental color (a*, b*, C*, or h*), texture (hardness), or sensory traits (aroma, color, taste, and texture) were observed between pre- and post-rigor processed fillets; however, post-rigor fillets (1477 ± 38 g) had a significantly (P<0.05) higher fracturability than pre-rigor fillets (1369 ± 71 g). Pre-rigor fillets were significantly (P<0.01) lighter, L*, (39.7 ± 1.0) than post-rigor fillets (37.8 ± 0.8) and had a significantly lower (P<0.05) aerobic plate count (APC), 1.4 ± 0.4 log CFU/g against 2.6 ± 0.6 log CFU/g, and psychrotrophic count (PC), 2.1 ± 0.2 log CFU/g against 3.0 ± 0.5 log CFU/g, than post-rigor processed fillets. This study showed that similar quality characteristics can be obtained in cold-smoked products processed either pre- or post-rigor when using suitable injection salting protocols and smoking techniques. © 2010 Institute of Food Technologists®

  8. A theoretical perspective on the accuracy of rotational resonance (R2)-based distance measurements in solid-state NMR

    NASA Astrophysics Data System (ADS)

    Pandey, Manoj Kumar; Ramachandran, Ramesh

    2010-03-01

The application of solid-state NMR methodology for bio-molecular structure determination requires the measurement of constraints in the form of 13C-13C and 13C-15N distances, torsion angles and, in some cases, correlation of the anisotropic interactions. Since the availability of structurally important constraints in the solid state is limited due to lack of sufficient spectral resolution, the accuracy of the measured constraints becomes vital in studies relating the three-dimensional structure of proteins to their biological functions. Consequently, the theoretical methods employed to quantify the experimental data become important. To accentuate this aspect, we re-examine analytical two-spin models currently employed in the estimation of 13C-13C distances based on the rotational resonance (R2) phenomenon. Although the error bars for the estimated distances tend to be in the range 0.5-1.0 Å, R2 experiments are routinely employed in a variety of systems ranging from simple peptides to more complex amyloidogenic proteins. In this article we address this aspect by highlighting the systematic errors introduced by analytical models employing phenomenological damping terms to describe multi-spin effects. Specifically, the spin dynamics in R2 experiments is described using Floquet theory employing two different operator formalisms. The systematic errors introduced by the phenomenological damping terms and their limitations are elucidated in two analytical models and analysed by comparing the results with rigorous numerical simulations.

  9. Using process elicitation and validation to understand and improve chemotherapy ordering and delivery.

    PubMed

    Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L

    2012-11-01

    Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.

  10. New dimension analyses with error analysis for quaking aspen and black spruce

    NASA Technical Reports Server (NTRS)

    Woods, K. D.; Botkin, D. B.; Feiveson, A. H.

    1987-01-01

Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost-effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and the significance for spruce of nutrient conditions.
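
    Predictive equations of this kind are typically log-log allometric regressions with an empirical estimate of prediction error; the sketch below illustrates the idea on simulated tree data. The coefficients, sample, and error structure are invented, not the study's values.

```python
# Hedged sketch: log-log allometric fit of biomass on stem diameter with an
# empirical prediction standard error; tree data are simulated.
import numpy as np

rng = np.random.default_rng(11)
dbh = rng.uniform(5, 40, 60)                                     # diameter (cm)
biomass = 0.12 * dbh ** 2.4 * np.exp(rng.normal(0, 0.15, 60))    # kg, multiplicative error

# fit ln(biomass) = ln(a) + b * ln(dbh)
X = np.column_stack([np.ones_like(dbh), np.log(dbh)])
coef, res, *_ = np.linalg.lstsq(X, np.log(biomass), rcond=None)
resid_var = res[0] / (len(dbh) - 2)                              # residual variance, log scale

a, b = np.exp(coef[0]), coef[1]
print(f"biomass ~ {a:.3f} * DBH^{b:.2f}")
print(f"prediction SE on log scale ~ {np.sqrt(resid_var):.3f} "
      "(roughly the relative error of back-transformed predictions)")
```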

  11. Addressing the unit of analysis in medical care studies: a systematic review.

    PubMed

    Calhoun, Aaron W; Guyatt, Gordon H; Cabana, Michael D; Lu, Downing; Turner, David A; Valentine, Stacey; Randolph, Adrienne G

    2008-06-01

We assessed how frequently patients are incorrectly used as the unit of analysis in studies of physicians' patient-care behavior published in high-impact journals. We surveyed 30 high-impact journals across 6 medical fields for articles susceptible to unit of analysis errors published from 1994 to 2005. Three reviewers independently abstracted articles using previously published criteria to determine the presence of analytic errors. One hundred fourteen susceptible articles published in 15 journals were found; 4 journals published the majority (71 of 114, or 62.3%) of the studies; 40 were intervention studies and 74 were noninterventional studies. The unit of analysis error was present in 19 (48%) of the intervention studies and 31 (42%) of the noninterventional studies (overall error rate 44%). The frequency of the error decreased between 1994-1999 (N = 38; 65% error) and 2000-2005 (N = 76; 33% error) (P = 0.001). Although the frequency of the error in published studies is decreasing, further improvement remains desirable.
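
    The consequence of the error is easy to demonstrate: when outcomes cluster within physicians but patients are treated as independent, the false-positive rate of a standard test is inflated. The simulation below uses invented numbers purely to illustrate that point.

```python
# Toy demonstration of the unit-of-analysis problem: outcomes cluster within
# physicians, but the test treats each patient as independent.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n_phys, n_pat = 10, 30          # physicians per arm, patients per physician
reps, alpha, false_pos = 500, 0.05, 0

for _ in range(reps):
    # no true intervention effect; physician-level random effects induce clustering
    arm_a = (rng.normal(0, 1, (n_phys, 1)) + rng.normal(0, 1, (n_phys, n_pat))).ravel()
    arm_b = (rng.normal(0, 1, (n_phys, 1)) + rng.normal(0, 1, (n_phys, n_pat))).ravel()
    _, p = stats.ttest_ind(arm_a, arm_b)        # patient as unit of analysis (wrong)
    false_pos += p < alpha

print(f"Type I error with patient-level test: {false_pos / reps:.2f} (nominal {alpha})")
```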

  12. Cloud retrievals from satellite data using optimal estimation: evaluation and application to ATSR

    NASA Astrophysics Data System (ADS)

    Poulsen, C. A.; Siddans, R.; Thomas, G. E.; Sayer, A. M.; Grainger, R. G.; Campmany, E.; Dean, S. M.; Arnold, C.; Watts, P. D.

    2012-08-01

Clouds play an important role in balancing the Earth's radiation budget. Hence, it is vital that cloud climatologies are produced that quantify cloud macro- and microphysical parameters and the associated uncertainty. In this paper, we present an algorithm, ORAC (Oxford-RAL retrieval of Aerosol and Cloud), which is based on fitting a physically consistent cloud model to satellite observations simultaneously from the visible to the mid-infrared, thereby ensuring that the resulting cloud properties provide a good representation of both the short-wave and long-wave radiative effects of the observed cloud. The advantages of the optimal estimation method are that it enables rigorous error propagation and the inclusion of all measurements and any a priori information and associated errors in a rigorous mathematical framework. The algorithm provides a measure of the consistency between the retrieval's representation of cloud and the satellite radiances. The cloud parameters retrieved are the cloud top pressure, cloud optical depth, cloud effective radius, cloud fraction and cloud phase. The algorithm can be applied to most visible/infrared satellite instruments. In this paper, we demonstrate its applicability to the Along-Track Scanning Radiometers ATSR-2 and AATSR. Examples of applying the algorithm to ATSR-2 flight data are presented and the sensitivity of the retrievals assessed; in particular, the algorithm is evaluated for a number of simulated single-layer and multi-layer conditions. The algorithm was found to perform well for single-layer cloud except when the cloud was very thin, i.e., less than one optical depth. For multi-layer cloud, the algorithm was robust except when the upper ice cloud layer is less than five optical depths. In these cases the retrieved cloud top pressure and cloud effective radius become a weighted average of the two layers. The total optical depth of multi-layer cloud is retrieved well until the cloud becomes thick (greater than 50 optical depths), where the retrieval begins to saturate. The cost proved a good indicator of multi-layer scenarios. Both the retrieval cost and the error need to be considered together in order to evaluate the quality of the retrieval. This algorithm, in the configuration described here, has been applied to both ATSR-2 and AATSR visible and infrared measurements in the context of the GRAPE (Global Retrieval and cloud Product Evaluation) project to produce a consistent 14-year record for climate research.
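
    The optimal estimation machinery referred to above (measurement plus prior, with a cost that flags inconsistency) can be sketched with a linear toy forward model; the snippet below is a schematic Rodgers-style retrieval with invented Jacobian, covariances, and state values, not the ORAC code.

```python
# Schematic optimal-estimation retrieval: linear forward model y = K x + noise,
# Gaussian prior, analytic MAP solution, posterior covariance, and a cost value.
import numpy as np

rng = np.random.default_rng(5)
K = rng.normal(size=(6, 3))                  # Jacobian: 6 channels, 3 cloud params (toy)
x_true = np.array([0.8, 12.0, 300.0])        # e.g. optical depth, r_eff, CTP (invented)
Sy = np.diag(np.full(6, 0.05**2))            # measurement-error covariance
Sa = np.diag([1.0, 25.0, 1e4])               # a priori covariance
xa = np.array([1.0, 10.0, 500.0])            # a priori state

y = K @ x_true + rng.multivariate_normal(np.zeros(6), Sy)

Sy_inv, Sa_inv = np.linalg.inv(Sy), np.linalg.inv(Sa)
S_hat = np.linalg.inv(K.T @ Sy_inv @ K + Sa_inv)          # posterior covariance
x_hat = xa + S_hat @ K.T @ Sy_inv @ (y - K @ xa)          # MAP retrieval

cost = (y - K @ x_hat) @ Sy_inv @ (y - K @ x_hat) + (x_hat - xa) @ Sa_inv @ (x_hat - xa)
print("retrieved state:", np.round(x_hat, 2))
print("1-sigma errors :", np.round(np.sqrt(np.diag(S_hat)), 2))
print("retrieval cost :", round(float(cost), 2))
```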

  13. Interventions to Increase Attendance at Psychotherapy: A Meta-Analysis of Randomized Controlled Trials

    ERIC Educational Resources Information Center

    Oldham, Mary; Kellett, Stephen; Miles, Eleanor; Sheeran, Paschal

    2012-01-01

    Objective: Rates of nonattendance for psychotherapy hinder the effective delivery of evidence-based treatments. Although many strategies have been developed to increase attendance, the effectiveness of these strategies has not been quantified. Our aim in the present study was to undertake a meta-analysis of rigorously controlled studies to…

  14. A Comparative Study of Definitions on Limit and Continuity of Functions

    ERIC Educational Resources Information Center

    Shipman, Barbara A.

    2012-01-01

    Differences in definitions of limit and continuity of functions as treated in courses on calculus and in rigorous undergraduate analysis yield contradictory outcomes and unexpected language. There are results about limits in calculus that are false by the definitions of analysis, functions not continuous by one definition and continuous by…

  15. Tutoring Adolescents in Literacy: A Meta-Analysis

    ERIC Educational Resources Information Center

    Jun, Seung Won; Ramirez, Gloria; Cumming, Alister

    2010-01-01

    What does research reveal about tutoring adolescents in literacy? We conducted a meta-analysis, identifying 152 published studies, of which 12 met rigorous inclusion criteria. We analyzed the 12 studies for the effects of tutoring according to the type, focus, and amount of tutoring; the number, age, and language background of students; and the…

  16. Interactive visual analysis promotes exploration of long-term ecological data

    Treesearch

    T.N. Pham; J.A. Jones; R. Metoyer; F.J. Swanson; R.J. Pabst

    2013-01-01

    Long-term ecological data are crucial in helping ecologists understand ecosystem function and environmental change. Nevertheless, these kinds of data sets are difficult to analyze because they are usually large, multivariate, and spatiotemporal. Although existing analysis tools such as statistical methods and spreadsheet software permit rigorous tests of pre-conceived...

  17. An International Meta-Analysis of Reading Recovery

    ERIC Educational Resources Information Center

    D'Agostino, Jerome V.; Harmey, Sinéad J.

    2016-01-01

    Reading Recovery is one of the most researched literacy programs worldwide. Although there have been at least 4 quantitative reviews of its effectiveness, none have considered all rigorous group-comparison studies from all implementing nations from the late 1970s to 2015. Using a hierarchical linear modeling (HLM) v-known analysis, we examined if…

  18. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  19. SHARP ENTRYWISE PERTURBATION BOUNDS FOR MARKOV CHAINS.

    PubMed

    Thiede, Erik; VAN Koten, Brian; Weare, Jonathan

    For many Markov chains of practical interest, the invariant distribution is extremely sensitive to perturbations of some entries of the transition matrix, but insensitive to others; we give an example of such a chain, motivated by a problem in computational statistical physics. We have derived perturbation bounds on the relative error of the invariant distribution that reveal these variations in sensitivity. Our bounds are sharp, we do not impose any structural assumptions on the transition matrix or on the perturbation, and computing the bounds has the same complexity as computing the invariant distribution or computing other bounds in the literature. Moreover, our bounds have a simple interpretation in terms of hitting times, which can be used to draw intuitive but rigorous conclusions about the sensitivity of a chain to various types of perturbations.
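
    A tiny numerical experiment conveys the kind of sensitivity the bounds describe: perturbing different entries of a nearly decoupled chain changes the invariant distribution by very different amounts. The chain and perturbations below are invented for illustration and do not implement the paper's bounds.

```python
# Toy illustration: perturb one entry of a transition matrix, renormalize the row,
# and measure the relative change in the invariant distribution.
import numpy as np

def invariant(P):
    """Left eigenvector of P with eigenvalue 1, normalized to a probability vector."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

P = np.array([[0.98, 0.02, 0.00],
              [0.01, 0.98, 0.01],
              [0.00, 0.02, 0.98]])   # nearly decoupled "metastable" states
pi = invariant(P)

for (i, j) in [(0, 1), (1, 2)]:
    Q = P.copy()
    Q[i, j] += 0.01                   # perturb one entry...
    Q[i] /= Q[i].sum()                # ...and renormalize the row
    rel = np.abs(invariant(Q) - pi) / pi
    print(f"perturb P[{i},{j}]: max relative error in pi = {rel.max():.3f}")
```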

  20. Fourier transform infrared reflectance spectra of latent fingerprints: a biometric gauge for the age of an individual.

    PubMed

    Hemmila, April; McGill, Jim; Ritter, David

    2008-03-01

To determine whether changes in fingerprint infrared spectra that are linear with age can be found, a partial least squares (PLS1) regression of 155 fingerprint infrared spectra against the person's age was constructed. The regression produced a linear model of age as a function of spectrum with a root mean square error of calibration of less than 4 years, showing an inflection at about 25 years of age. The spectral ranges emphasized by the regression do not correspond to the highest-concentration constituents of the fingerprints. Separate linear regression models for old and young people can be constructed with even more statistical rigor. The success of the regression demonstrates that a combination of constituents can be found that changes linearly with age, with a significant shift around puberty.
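
    The modelling step (PLS regression of high-dimensional spectra on age, summarized by a root-mean-square error of calibration) can be sketched as below on synthetic spectra; the data, component count, and spectral structure are assumptions, not the study's measurements.

```python
# Sketch: PLS regression of synthetic spectra against age, with RMSEC.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
n, p = 155, 400                                   # spectra x wavenumber channels
age = rng.uniform(10, 70, n)
basis = rng.normal(size=(2, p))
X = np.outer(age, basis[0]) * 0.01 + np.outer(np.sqrt(age), basis[1]) * 0.05 \
    + rng.normal(0, 0.2, (n, p))                  # age-dependent synthetic spectra

pls = PLSRegression(n_components=5).fit(X, age)
pred = pls.predict(X).ravel()
rmsec = np.sqrt(np.mean((pred - age) ** 2))
print(f"RMSEC = {rmsec:.1f} years")
```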

  1. Total Ambient Dose Equivalent Buildup Factor Determination for Nbs04 Concrete.

    PubMed

    Duckic, Paulina; Hayes, Robert B

    2018-06-01

Buildup factors are dimensionless multiplicative factors required by the point kernel method to account for scattered radiation through a shielding material. The accuracy of the point kernel method is strongly affected by how well the analyzed parameters correspond to the experimental configuration, a correspondence we attempt to simplify here. The point kernel method has not found widespread practical use for neutron shielding calculations because of the complex neutron transport behavior through shielding materials (i.e., the variety of interaction mechanisms that neutrons may undergo while traversing the shield) as well as the non-linear energy dependence of the neutron total cross section. In this work, total ambient dose buildup factors for NBS04 concrete are calculated in terms of neutron and secondary gamma ray transmission factors. The neutron and secondary gamma ray transmission factors are calculated using the MCNP6™ code with updated cross sections. Both transmission factors and buildup factors are given in tabulated form. Practical use of neutron transmission and buildup factors warrants rigorously calculated results with all associated uncertainties. In this work, a sensitivity analysis of neutron transmission factors and total buildup factors with varying water content has been conducted. The analysis showed a significant impact of varying water content in concrete on both neutron transmission factors and total buildup factors. Finally, support vector regression, a machine learning technique, has been used to build a model from the calculated data for predicting buildup factors. The developed model can predict most of the data to within 20% relative error.
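
    As an illustration of the regression step, the sketch below fits a support vector regression model mapping shield depth and water fraction to a buildup factor. The training table is fabricated for demonstration; in the study the inputs would be the MCNP6 results.

```python
# Hypothetical sketch: SVR model mapping (depth in mean free paths, water fraction)
# to a buildup factor, trained on a fabricated table.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
mfp = rng.uniform(1, 20, 200)                     # penetration depth (mean free paths)
water = rng.uniform(0.02, 0.10, 200)              # water mass fraction
B = 1 + 0.8 * mfp * np.exp(-3 * water) + rng.normal(0, 0.2, 200)   # fake buildup factors

X = np.column_stack([mfp, water])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1)).fit(X, B)

test = np.array([[10.0, 0.05]])
print(f"predicted buildup factor at 10 mfp, 5% water: {model.predict(test)[0]:.2f}")
```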

  2. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE PAGES

    McDonnell, J. D.; Schunck, N.; Higdon, D.; ...

    2015-03-24

Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. In addition, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
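
    The emulator-based propagation step can be sketched generically: train a Gaussian process on a modest number of expensive model evaluations, then push parameter samples through the cheap emulator to obtain a predictive spread for an observable. Everything in the snippet below (the stand-in model, kernel, and sample sizes) is an assumption for illustration.

```python
# Schematic emulator sketch: GP trained on a few "expensive" model runs, then used
# to propagate parameter samples into a predictive spread for an observable.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(8)
theta_train = rng.uniform(-1, 1, (30, 2))                  # scaled model parameters

def expensive_model(t):                                    # stand-in for a costly run
    return 1500 + 40 * t[:, 0] - 25 * t[:, 1] ** 2

y_train = expensive_model(theta_train)
gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.5, 0.5]),
                              normalize_y=True).fit(theta_train, y_train)

theta_post = rng.normal(0.1, 0.3, (5000, 2))               # mock posterior samples
pred = gp.predict(theta_post)
print(f"propagated observable: {pred.mean():.1f} +/- {pred.std():.1f}")
```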

  3. Superstrate loading effects on the resonant characteristics of high Tc superconducting circular patch printed on anisotropic materials

    NASA Astrophysics Data System (ADS)

    Bedra, Sami; Bedra, Randa; Benkouda, Siham; Fortaki, Tarek

    2017-12-01

    In this paper, the effects of both anisotropies in the substrate and superstrate loading on the resonant frequency and bandwidth of high-Tc superconducting circular microstrip patch in a substrate-superstrate configuration are investigated. A rigorous analysis is performed using a dyadic Galerkin's method in the vector Hankel transform domain. Galerkin's procedure is employed in the spectral domain where the TM and TE modes of the cylindrical cavity with magnetic side walls are used in the expansion of the disk current. The effect of the superconductivity of the patch is taken into account using the concept of the complex resistive boundary condition. London's equations and the two-fluid model of Gorter and Casimir are used in the calculation of the complex surface impedance of the superconducting circular disc. The accuracy of the analysis is tested by comparing the computed results with previously published data for several anisotropic substrate-superstrate materials. Good agreement is found among all sets of results. The numerical results obtained show that important errors can be made in the computation of the resonant frequencies and bandwidths of the superconducting resonators when substrate dielectric anisotropy, and/or superstrate anisotropy are ignored. Other theoretical results obtained show that the superconducting circular microstrip patch on anisotropic substrate-superstrate with properly selected permittivity values along the optical and the non-optical axes combined with optimally chosen structural parameters is more advantageous than the one on isotropic substrate-superstrate by exhibiting wider bandwidth characteristic.

  4. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonnell, J. D.; Schunck, N.; Higdon, D.

    2015-03-24

Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  5. Effect of Pre-rigor Salting Levels on Physicochemical and Textural Properties of Chicken Breast Muscles.

    PubMed

    Kim, Hyun-Wook; Hwang, Ko-Eun; Song, Dong-Heon; Kim, Yong-Jae; Ham, Youn-Kyung; Yeo, Eui-Joo; Jeong, Tae-Jun; Choi, Yun-Sang; Kim, Cheon-Jei

    2015-01-01

    This study was conducted to evaluate the effect of pre-rigor salting level (0-4% NaCl concentration) on physicochemical and textural properties of pre-rigor chicken breast muscles. The pre-rigor chicken breast muscles were de-boned 10 min post-mortem and salted within 25 min post-mortem. An increase in pre-rigor salting level led to the formation of a high ultimate pH in chicken breast muscles at 24 h post-mortem. The addition of a minimum of 2% NaCl significantly improved water holding capacity, cooking loss, protein solubility, and hardness when compared to the non-salted chicken breast muscle (p<0.05). On the other hand, the increase in pre-rigor salting level caused the inhibition of myofibrillar protein degradation and the acceleration of lipid oxidation. However, the difference in NaCl concentration between 3% and 4% produced no great differences in the physicochemical and textural properties attributable to pre-rigor salting effects (p>0.05). Therefore, our study confirmed the pre-rigor salting effect of chicken breast muscle salted with 2% NaCl when compared to post-rigor muscle salted with an equal NaCl concentration, and suggests that a 2% NaCl concentration is the minimum required to ensure a definite pre-rigor salting effect on chicken breast muscle.

  6. Effect of Pre-rigor Salting Levels on Physicochemical and Textural Properties of Chicken Breast Muscles

    PubMed Central

    Choi, Yun-Sang

    2015-01-01

    This study was conducted to evaluate the effect of pre-rigor salting level (0-4% NaCl concentration) on physicochemical and textural properties of pre-rigor chicken breast muscles. The pre-rigor chicken breast muscles were de-boned 10 min post-mortem and salted within 25 min post-mortem. An increase in pre-rigor salting level led to the formation of a high ultimate pH in chicken breast muscles at 24 h post-mortem. The addition of a minimum of 2% NaCl significantly improved water holding capacity, cooking loss, protein solubility, and hardness when compared to the non-salted chicken breast muscle (p<0.05). On the other hand, the increase in pre-rigor salting level caused the inhibition of myofibrillar protein degradation and the acceleration of lipid oxidation. However, the difference in NaCl concentration between 3% and 4% produced no great differences in the physicochemical and textural properties attributable to pre-rigor salting effects (p>0.05). Therefore, our study confirmed the pre-rigor salting effect of chicken breast muscle salted with 2% NaCl when compared to post-rigor muscle salted with an equal NaCl concentration, and suggests that a 2% NaCl concentration is the minimum required to ensure a definite pre-rigor salting effect on chicken breast muscle. PMID:26761884

  7. A Proposed Solution to the Problem with Using Completely Random Data to Assess the Number of Factors with Parallel Analysis

    ERIC Educational Resources Information Center

    Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo

    2012-01-01

    A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…
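
    Parallel analysis, as referenced in this abstract, compares the eigenvalues of the observed correlation matrix with eigenvalues obtained from completely random data of the same dimensions, and retains factors while the observed eigenvalue exceeds the random benchmark. A minimal sketch of the standard procedure (not the authors' proposed revision) is:

        import numpy as np

        def parallel_analysis(data, n_sims=200, percentile=95, seed=0):
            """Return observed eigenvalues, random-data thresholds, and retained count."""
            rng = np.random.default_rng(seed)
            n, p = data.shape
            obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

            rand_eig = np.empty((n_sims, p))
            for i in range(n_sims):
                r = rng.standard_normal((n, p))          # completely random data
                rand_eig[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]

            thresh = np.percentile(rand_eig, percentile, axis=0)
            n_factors = int(np.argmax(obs_eig <= thresh)) if np.any(obs_eig <= thresh) else p
            return obs_eig, thresh, n_factors

        # Hypothetical example: 300 cases, 6 variables driven by one common factor
        rng = np.random.default_rng(1)
        f = rng.standard_normal((300, 1))
        X = 0.7 * f + 0.7 * rng.standard_normal((300, 6))
        print(parallel_analysis(X)[2])   # expected to suggest a single factor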

  8. First-order approximation error analysis of Risley-prism-based beam directing system.

    PubMed

    Zhao, Yanyan; Yuan, Yan

    2014-12-01

    To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. Besides, pointing error analysis of the Risley-prism system has provided results for the case when the component errors, prism orientation errors, and assembly errors are certain. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate beam directing errors of any Risley-prism beam directing system with a similar configuration.
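
    In the thin-prism picture each wedge deviates the beam by an angle that depends on its wedge angle, the pointing vector is the small-angle sum of the two deviations, and a small wedge-angle error propagates linearly. The sketch below, with illustrative parameters and a simplified single-wedge deviation formula rather than the paper's exact expressions, compares a first-order error estimate with direct recomputation and shows the additivity of the two prisms' contributions:

        import numpy as np

        n = 1.517                                        # assumed refractive index
        A1, A2 = np.radians(10.0), np.radians(10.0)      # wedge angles (illustrative)
        th1, th2 = np.radians(35.0), np.radians(130.0)   # prism rotation angles

        def deviation(A):
            """Deviation of a ray crossing a single thin wedge (flat face hit at normal incidence)."""
            return np.arcsin(n * np.sin(A)) - A

        def pointing(A1, A2, th1, th2):
            """Pointing vector as the small-angle vector sum of the two prisms' deviations."""
            u1 = np.array([np.cos(th1), np.sin(th1)])
            u2 = np.array([np.cos(th2), np.sin(th2)])
            return deviation(A1) * u1 + deviation(A2) * u2

        # Wedge-angle errors (illustrative values)
        dA1, dA2 = np.radians(0.02), np.radians(-0.03)

        # Directing error from direct recomputation
        exact = pointing(A1 + dA1, A2 + dA2, th1, th2) - pointing(A1, A2, th1, th2)

        # First-order error: d(deviation)/dA * dA for each prism, added vectorially
        def ddev(A):
            return n * np.cos(A) / np.sqrt(1.0 - (n * np.sin(A)) ** 2) - 1.0

        err1 = ddev(A1) * dA1 * np.array([np.cos(th1), np.sin(th1)])
        err2 = ddev(A2) * dA2 * np.array([np.cos(th2), np.sin(th2)])

        print("recomputed error      :", exact)
        print("first-order (1) + (2) :", err1 + err2)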

  9. Influence of Tooth Spacing Error on Gears With and Without Profile Modifications

    NASA Technical Reports Server (NTRS)

    Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.

    2000-01-01

    A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modification considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving dynamic performance of gear sets with different tooth spacing errors.

  10. Fast online inverse scattering with Reduced Basis Method (RBM) for a 3D phase grating with specific line roughness

    NASA Astrophysics Data System (ADS)

    Kleemann, Bernd H.; Kurz, Julian; Hetzler, Jochen; Pomplun, Jan; Burger, Sven; Zschiedrich, Lin; Schmidt, Frank

    2011-05-01

    Finite element methods (FEM) for the rigorous electromagnetic solution of Maxwell's equations are known to be very accurate. They possess a high convergence rate for the determination of near field and far field quantities of scattering and diffraction processes of light with structures having feature sizes in the range of the light wavelength. We are using FEM software for 3D scatterometric diffraction calculations allowing the application of a brilliant and extremely fast solution method: the reduced basis method (RBM). The RBM constructs a reduced model of the scattering problem from precalculated snapshot solutions, guided self-adaptively by an error estimator. Using RBM, we achieve an accuracy of about 10^-4 relative to the direct problem with only 35 precalculated snapshots forming the reduced basis. This speeds up the calculation of diffraction amplitudes by a factor of about 1000 compared to the conventional solution of Maxwell's equations by FEM. This allows us to reconstruct the three geometrical parameters of our phase grating from "measured" scattering data in a 3D parameter manifold online, within a minute, with the full FEM accuracy available. Additionally, a sensitivity analysis or the choice of robust measuring strategies, for example, can be done online in a few minutes.
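
    The reduced basis idea, compressing precomputed snapshot solutions into a small basis and solving only the projected system online, can be illustrated on a generic parameterized linear problem. The sketch below uses a plain POD (SVD) basis on a toy system instead of the error-estimator-driven snapshot selection and FEM discretization described above:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 400                                   # "full" problem size

        # Parameterized system A(mu) x = b, affine in mu (toy stand-in for a FEM system)
        A0 = np.diag(np.linspace(1.0, 2.0, N)) + 0.01 * rng.standard_normal((N, N))
        A1 = np.diag(np.linspace(0.5, 1.5, N))
        b = rng.standard_normal(N)

        def solve_full(mu):
            return np.linalg.solve(A0 + mu * A1, b)

        # Offline stage: snapshots at a few parameter values, compressed by SVD
        mus_train = np.linspace(0.1, 1.0, 10)
        S = np.column_stack([solve_full(mu) for mu in mus_train])
        U, s, _ = np.linalg.svd(S, full_matrices=False)
        V = U[:, :6]                              # reduced basis of dimension 6

        # Online stage: Galerkin-project and solve the small system for a new parameter
        def solve_reduced(mu):
            Ar = V.T @ (A0 + mu * A1) @ V
            return V @ np.linalg.solve(Ar, V.T @ b)

        mu_test = 0.37
        err = (np.linalg.norm(solve_reduced(mu_test) - solve_full(mu_test))
               / np.linalg.norm(solve_full(mu_test)))
        print(f"relative error of the reduced solution: {err:.2e}")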

  11. Steerable Principal Components for Space-Frequency Localized Images*

    PubMed Central

    Landa, Boris; Shkolnisky, Yoel

    2017-01-01

    As modern scientific image datasets typically consist of a large number of images of high resolution, devising methods for their accurate and efficient processing is a central research task. In this paper, we consider the problem of obtaining the steerable principal components of a dataset, a procedure termed “steerable PCA” (steerable principal component analysis). The output of the procedure is the set of orthonormal basis functions which best approximate the images in the dataset and all of their planar rotations. To derive such basis functions, we first expand the images in an appropriate basis, for which the steerable PCA reduces to the eigen-decomposition of a block-diagonal matrix. If we assume that the images are well localized in space and frequency, then such an appropriate basis is the prolate spheroidal wave functions (PSWFs). We derive a fast method for computing the PSWFs expansion coefficients from the images' equally spaced samples, via a specialized quadrature integration scheme, and show that the number of required quadrature nodes is similar to the number of pixels in each image. We then establish that our PSWF-based steerable PCA is both faster and more accurate than existing methods, and more importantly, provides us with rigorous error bounds on the entire procedure. PMID:29081879
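
    Once the images are expanded in a steerable basis, a planar rotation only multiplies each angular-frequency coefficient by a phase, so the covariance of the rotation-augmented dataset becomes block diagonal and the PCA splits into one small eigenproblem per angular frequency. A schematic sketch of that final step (random stand-ins for the PSWF coefficients; the quadrature-based expansion itself is omitted):

        import numpy as np

        rng = np.random.default_rng(0)
        n_images, max_freq, n_radial = 500, 8, 20

        # coeffs[k] holds the expansion coefficients of all images at angular frequency k,
        # shape (n_images, n_radial); random stand-ins for PSWF (or Fourier-Bessel) coefficients
        coeffs = {k: (rng.standard_normal((n_images, n_radial)) +
                      1j * rng.standard_normal((n_images, n_radial)))
                  for k in range(max_freq + 1)}

        # Steerable PCA: one Hermitian eigen-decomposition per angular-frequency block
        principal = {}
        for k, C in coeffs.items():
            cov = (C.conj().T @ C) / n_images      # block of the block-diagonal covariance
            w, v = np.linalg.eigh(cov)
            order = np.argsort(w)[::-1]
            principal[k] = (w[order], v[:, order]) # eigenvalues and steerable components

        for k in range(3):
            print(f"angular frequency {k}: top eigenvalue {principal[k][0][0]:.2f}")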

  12. Waste in health information systems: a systematic review.

    PubMed

    Awang Kalong, Nadia; Yusof, Maryati

    2017-05-08

    Purpose The purpose of this paper is to discuss a systematic review on waste identification related to health information systems (HIS) in Lean transformation. Design/methodology/approach A systematic review was conducted on 19 studies to evaluate Lean transformation and tools used to remove waste related to HIS in clinical settings. Findings Ten waste categories were identified, along with their relationships and applications of Lean tool types related to HIS. Different Lean tools were used at the early and final stages of Lean transformation; the tool selection depended on the waste characteristic. Nine studies reported a positive impact from Lean transformation in improving daily work processes. The selection of Lean tools should be made based on the timing, purpose and characteristics of waste to be removed. Research limitations/implications Overview of waste and its category within HIS and its analysis from socio-technical perspectives enabled the identification of its root cause in a holistic and rigorous manner. Practical implications Understanding waste types, their root cause and review of Lean tools could subsequently lead to the identification of mitigation approach to prevent future error occurrence. Originality/value Specific waste models for HIS settings are yet to be developed. Hence, the identification of the waste categories could guide future implementation of Lean transformations in HIS settings.

  13. Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.

    PubMed

    Rideout, Brendan P; Dosso, Stan E; Hannay, David E

    2013-09-01

    This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
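
    The core of a linearized arrival-time localization of this kind is a Gauss-Newton iteration on the travel-time equations, with the posterior covariance obtained from the final Jacobian. A stripped-down sketch for a single source, direct paths only, a constant sound speed, and no receiver or environmental unknowns (all values hypothetical):

        import numpy as np

        rng = np.random.default_rng(0)
        c = 1480.0                                 # assumed sound speed [m/s]

        # Hypothetical hydrophone positions [m] and true source location / emission time
        recv = np.array([[0, 0, 50], [600, 0, 180], [0, 600, 45],
                         [600, 600, 210], [300, 300, 100.0]])
        src_true, t0_true = np.array([250.0, 400.0, 20.0]), 1.0

        def arrival_times(src, t0):
            return t0 + np.linalg.norm(recv - src, axis=1) / c

        sigma_t = 1e-3                             # arrival-time uncertainty [s]
        t_obs = arrival_times(src_true, t0_true) + sigma_t * rng.standard_normal(len(recv))

        # Gauss-Newton on m = (x, y, z, t0); Jacobian formed by finite differences
        m = np.array([300.0, 300.0, 50.0, 0.5])
        for _ in range(15):
            r = t_obs - arrival_times(m[:3], m[3])
            J = np.empty((len(recv), 4))
            for j in range(4):
                dm = np.zeros(4); dm[j] = 1e-4
                J[:, j] = (arrival_times((m + dm)[:3], (m + dm)[3])
                           - arrival_times(m[:3], m[3])) / 1e-4
            m += np.linalg.solve(J.T @ J + 1e-9 * np.eye(4), J.T @ r)  # small damping for safety

        cov = sigma_t ** 2 * np.linalg.inv(J.T @ J)  # linearized posterior covariance
        print("estimated source [m]  :", np.round(m[:3], 1))
        print("1-sigma uncertainties :", np.round(np.sqrt(np.diag(cov)), 3))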

  14. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is of high stability, low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.

  15. Assessing reading fluency in Kenya: Oral or silent assessment?

    NASA Astrophysics Data System (ADS)

    Piper, Benjamin; Zuilkowski, Stephanie Simmons

    2015-04-01

    In recent years, the Education for All movement has focused more intensely on the quality of education, rather than simply provision. Many recent and current education quality interventions focus on literacy, which is the core skill required for further academic success. Despite this focus on the quality of literacy instruction in developing countries, little rigorous research has been conducted on critical issues of assessment. This analysis, which uses data from the Primary Math and Reading Initiative (PRIMR) in Kenya, aims to begin filling this gap by addressing a key assessment issue - should literacy assessments in Kenya be administered orally or silently? The authors compared second-grade students' scores on oral and silent reading tasks of the Early Grade Reading Assessment (EGRA) in Kiswahili and English, and found no statistically significant differences in either language. They did, however, find oral reading rates to be more strongly related to reading comprehension scores. Oral assessment has another benefit for programme evaluators - it allows for the collection of data on student errors, and therefore the calculation of words read correctly per minute, as opposed to simply words read per minute. The authors therefore recommend that, in Kenya and in similar contexts, student reading fluency be assessed via oral rather than silent assessment.

  16. The Solar X-Ray Limb

    NASA Astrophysics Data System (ADS)

    Battaglia, Marina; Hudson, Hugh S.; Hurford, Gordon J.; Krucker, Säm; Schwartz, Richard A.

    2017-07-01

    We describe a new technique to measure the height of the X-ray limb with observations from occulted X-ray flare sources as observed by the RHESSI (Reuven Ramaty High Energy Solar Spectroscopic Imager) satellite. This method has model dependencies different from those present in traditional observations at optical wavelengths, which depend upon detailed modeling involving radiative transfer in a medium with complicated geometry and flows. It thus provides an independent and more rigorous measurement of the “true” solar radius, that is, the radius of the mass distribution. RHESSI’s measurement makes use of the flare X-ray source’s spatial Fourier components (the visibilities), which are sensitive to the presence of the sharp edge at the lower boundary of the occulted source. We have found a suitable flare event for analysis, SOL2011-10-20T03:25 (M1.7), and report a first result from this novel technique here. Using a four-minute integration over the 3-25 keV photon energy range, we find R_X-ray = 960.11 ± 0.15 ± 0.29 arcsec at 1 au, where the uncertainties include statistical uncertainties from the method and a systematic error. The standard VAL-C model predicts a value of 959.94 arcsec, which is about 1σ below our value.

  17. Automatic design of synthetic gene circuits through mixed integer non-linear programming.

    PubMed

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
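
    The underlying part-selection problem is easy to state: pick one part per slot from a characterized library so that the user constraints hold and the objective is best approximated. The toy sketch below uses exhaustive enumeration, which is likewise deterministic and globally optimal for a tiny library, rather than the MINLP formulation of the paper; the part models and numbers are invented:

        import itertools

        # Hypothetical characterized library: promoter strengths and RBS efficiencies
        promoters = {"pP1": 0.8, "pP2": 2.5, "pP3": 5.0}
        rbs       = {"rA": 0.3, "rB": 1.0, "rC": 1.8}

        target_expression = 3.0      # user-defined design target (arbitrary units)
        max_burden = 6.0             # user-defined constraint on total load

        best = None
        for p, r in itertools.product(promoters, rbs):
            expression = promoters[p] * rbs[r]           # simple multiplicative part model (assumed)
            burden = promoters[p] + rbs[r]
            if burden > max_burden:                      # constraint: discard infeasible combinations
                continue
            score = abs(expression - target_expression)  # objective: approximate the target
            if best is None or score < best[0]:
                best = (score, p, r, expression)

        print("optimal selection:", best[1], best[2], "-> expression", round(best[3], 2))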

  18. Experimental evaluation of rigor mortis. VI. Effect of various causes of death on the evolution of rigor mortis.

    PubMed

    Krompecher, T; Bergerioux, C; Brandt-Casadevall, C; Gujer, H R

    1983-07-01

    The evolution of rigor mortis was studied in cases of nitrogen asphyxia, drowning and strangulation, as well as in fatal intoxications due to strychnine, carbon monoxide and curariform drugs, using a modified method of measurement. Our experiments demonstrated that: (1) Strychnine intoxication hastens the onset and passing of rigor mortis. (2) CO intoxication delays the resolution of rigor mortis. (3) The intensity of rigor may vary depending upon the cause of death. (4) If the stage of rigidity is to be used to estimate the time of death, it is necessary: (a) to perform a succession of objective measurements of rigor mortis intensity; and (b) to verify the eventual presence of factors that could play a role in the modification of its development.

  19. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
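
    First-order Sobol' indices of the kind used in this study can be estimated with the standard pick-and-freeze (Saltelli) scheme: evaluate the model on two independent sample matrices and on hybrid matrices that swap one input column at a time. A self-contained sketch on a toy stand-in for a snow model (not the Utah Energy Balance model) is:

        import numpy as np

        rng = np.random.default_rng(0)

        def toy_snow_model(x):
            """Toy stand-in: SWE-like output from precip bias, temperature bias, radiation noise."""
            p_bias, t_bias, r_noise = x[:, 0], x[:, 1], x[:, 2]
            return 100.0 * (1.0 + p_bias) - 15.0 * t_bias + 2.0 * r_noise

        n, d = 20000, 3
        A = rng.uniform(-0.5, 0.5, (n, d))          # two independent input-sample matrices
        B = rng.uniform(-0.5, 0.5, (n, d))
        fA, fB = toy_snow_model(A), toy_snow_model(B)
        var = np.var(np.concatenate([fA, fB]))

        for i, name in enumerate(["precip bias", "temp bias", "radiation noise"]):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                     # swap column i (pick-and-freeze)
            fABi = toy_snow_model(ABi)
            S1 = np.mean(fB * (fABi - fA)) / var    # Saltelli-style first-order estimator
            print(f"first-order Sobol index, {name}: {S1:.2f}")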

  20. RIGOR MORTIS AND THE INFLUENCE OF CALCIUM AND MAGNESIUM SALTS UPON ITS DEVELOPMENT.

    PubMed

    Meltzer, S J; Auer, J

    1908-01-01

    Calcium salts hasten and magnesium salts retard the development of rigor mortis, that is, when these salts are administered subcutaneously or intravenously. When injected intra-arterially, concentrated solutions of both kinds of salts cause nearly an immediate onset of a strong stiffness of the muscles which is apparently a contraction, brought on by a stimulation caused by these salts and due to osmosis. This contraction, if strong, passes over without a relaxation into a real rigor. This form of rigor may be classed as work-rigor (Arbeitsstarre). In animals, at least in frogs, with intact cords, the early contraction and the following rigor are stronger than in animals with destroyed cord. If M/8 solutions-nearly equimolecular to "physiological" solutions of sodium chloride-are used, even when injected intra-arterially, calcium salts hasten and magnesium salts retard the onset of rigor. The hastening and retardation in this case as well as in the cases of subcutaneous and intravenous injections, are ion effects and essentially due to the cations, calcium and magnesium. In the rigor hastened by calcium the effects of the extensor muscles mostly prevail; in the rigor following magnesium injection, on the other hand, either the flexor muscles prevail or the muscles become stiff in the original position of the animal at death. There seems to be no difference in the degree of stiffness in the final rigor, only the onset and development of the rigor is hastened in the case of the one salt and retarded in the other. Calcium hastens also the development of heat rigor. No positive facts were obtained with regard to the effect of magnesium upon heat rigor. Calcium also hastens and magnesium retards the onset of rigor in the left ventricle of the heart. No definite data were gathered with regard to the effects of these salts upon the right ventricle.

  1. RIGOR MORTIS AND THE INFLUENCE OF CALCIUM AND MAGNESIUM SALTS UPON ITS DEVELOPMENT

    PubMed Central

    Meltzer, S. J.; Auer, John

    1908-01-01

    Calcium salts hasten and magnesium salts retard the development of rigor mortis, that is, when these salts are administered subcutaneously or intravenously. When injected intra-arterially, concentrated solutions of both kinds of salts cause nearly an immediate onset of a strong stiffness of the muscles which is apparently a contraction, brought on by a stimulation caused by these salts and due to osmosis. This contraction, if strong, passes over without a relaxation into a real rigor. This form of rigor may be classed as work-rigor (Arbeitsstarre). In animals, at least in frogs, with intact cords, the early contraction and the following rigor are stronger than in animals with destroyed cord. If M/8 solutions - nearly equimolecular to "physiological" solutions of sodium chloride - are used, even when injected intra-arterially, calcium salts hasten and magnesium salts retard the onset of rigor. The hastening and retardation in this case as well as in the cases of subcutaneous and intravenous injections, are ion effects and essentially due to the cations, calcium and magnesium. In the rigor hastened by calcium the effects of the extensor muscles mostly prevail; in the rigor following magnesium injection, on the other hand, either the flexor muscles prevail or the muscles become stiff in the original position of the animal at death. There seems to be no difference in the degree of stiffness in the final rigor, only the onset and development of the rigor is hastened in the case of the one salt and retarded in the other. Calcium hastens also the development of heat rigor. No positive facts were obtained with regard to the effect of magnesium upon heat rigor. Calcium also hastens and magnesium retards the onset of rigor in the left ventricle of the heart. No definite data were gathered with regard to the effects of these salts upon the right ventricle. PMID:19867124

  2. Attitude stability of spinning satellites

    NASA Technical Reports Server (NTRS)

    Caughey, T. K.

    1980-01-01

    Some problems of attitude stability of spinning satellites are treated in a rigorous manner. With certain restrictions, linearized stability analysis correctly predicts the attitude stability of spinning satellites, even in the critical cases of the Liapunov-Poincare stability theory.

  3. Analysis of Perfluorinated Chemicals and Their Fluorinated Precursors in Sludge: Method Development and Initial Results

    EPA Science Inventory

    A rigorous method was developed to maximize the extraction efficacy for perfluorocarboxylic acids (PFCAs), perfluorosulfonates (PFSAs), fluorotelomer alcohols (FTOHs), fluorotelomer acrylates (FTAc), perfluorosulfonamides (FOSAs), and perfluorosulfonamidoethanols (FOSEs) from was...

  4. Perspective: Optical measurement of feature dimensions and shapes by scatterometry

    NASA Astrophysics Data System (ADS)

    Diebold, Alain C.; Antonelli, Andy; Keller, Nick

    2018-05-01

    The use of optical scattering to measure feature shape and dimensions, scatterometry, is now routine during semiconductor manufacturing. Scatterometry iteratively improves an optical model structure using simulations that are compared to experimental data from an ellipsometer. These simulations are done using the rigorous coupled wave analysis for solving Maxwell's equations. In this article, we describe the Mueller matrix spectroscopic ellipsometry based scatterometry. Next, the rigorous coupled wave analysis for Maxwell's equations is presented. Following this, several example measurements are described as they apply to specific process steps in the fabrication of gate-all-around (GAA) transistor structures. First, simulations of measurement sensitivity for the inner spacer etch back step of horizontal GAA transistor processing are described. Next, the simulated metrology sensitivity for sacrificial (dummy) amorphous silicon etch back step of vertical GAA transistor processing is discussed. Finally, we present the application of plasmonically active test structures for improving the sensitivity of the measurement of metal linewidths.

  5. A rigorous approach to investigating common assumptions about disease transmission: Process algebra as an emerging modelling methodology for epidemiology.

    PubMed

    McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron

    2011-03-01

    Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra, a technique from computer science, allows us to describe a system in terms of the stochastic behaviour of individuals. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.
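
    The change of scale described here, from stochastic individual behaviour to deterministic population-level mean equations, can be seen in miniature by comparing a Gillespie simulation of a standard SIS epidemic with its mean-field ODE. The sketch below illustrates that comparison only; it is not the process-algebra derivation itself, and all rates are invented:

        import numpy as np

        rng = np.random.default_rng(0)
        N, beta, gamma = 200, 0.8, 0.3     # population, transmission and recovery rates (assumed)

        def gillespie_sis(i0=5, t_end=40.0):
            """Individual-level stochastic SIS simulation (Gillespie algorithm)."""
            t, i = 0.0, i0
            while t < t_end and i > 0:
                rate_inf = beta * i * (N - i) / N
                rate_rec = gamma * i
                total = rate_inf + rate_rec
                t += rng.exponential(1.0 / total)
                i += 1 if rng.uniform() < rate_inf / total else -1
            return i

        # Population-level mean-field ODE: dI/dt = beta*I*(N-I)/N - gamma*I (Euler integration)
        dt, I, t = 0.01, 5.0, 0.0
        while t < 40.0:
            I += dt * (beta * I * (N - I) / N - gamma * I)
            t += dt

        runs = [gillespie_sis() for _ in range(200)]
        print(f"mean-field endemic level      : {I:.1f}")
        print(f"stochastic mean at t = 40     : {np.mean(runs):.1f}")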

  6. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    PubMed

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanation contributions of the most important factors in factor analysis and depreciate the significance of discriminant function and discrimination abilities of individual variables in discrimination analysis. The discussions will be restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is very critical to research progress in theory development and cumulative knowledge in the ergonomics field.
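
    The attenuation effect mentioned for correlations has a classical closed form: with reliabilities r_xx and r_yy, the observed correlation is approximately the true correlation times sqrt(r_xx * r_yy). A quick simulation sketch with arbitrary reliabilities illustrates it:

        import numpy as np

        rng = np.random.default_rng(0)
        n, rho_true = 100000, 0.60

        # Latent true scores with correlation rho_true
        x_true = rng.standard_normal(n)
        y_true = rho_true * x_true + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)

        # Add random measurement error so that reliabilities are 0.7 and 0.8 (assumed values)
        rel_x, rel_y = 0.7, 0.8
        x_obs = x_true + np.sqrt(1 / rel_x - 1) * rng.standard_normal(n)
        y_obs = y_true + np.sqrt(1 / rel_y - 1) * rng.standard_normal(n)

        observed = np.corrcoef(x_obs, y_obs)[0, 1]
        predicted = rho_true * np.sqrt(rel_x * rel_y)
        print(f"true correlation      : {rho_true:.3f}")
        print(f"observed (attenuated) : {observed:.3f}")
        print(f"classical prediction  : {predicted:.3f}")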

  7. Long persistence of rigor mortis at constant low temperature.

    PubMed

    Varetto, Lorenzo; Curto, Ombretta

    2005-01-06

    We studied the persistence of rigor mortis by using physical manipulation. We tested the mobility of the knee on 146 corpses kept under refrigeration at Torino's city mortuary at a constant temperature of +4 degrees C. We found a persistence of complete rigor lasting for 10 days in all the cadavers we kept under observation; in one case, rigor lasted for 16 days. Between the 11th and the 17th days, a progressively increasing number of corpses showed a change from complete into partial rigor (characterized by partial bending of the articulation). After the 17th day, all the remaining corpses showed partial rigor, and in the two cadavers that were kept under observation to the very end ("à outrance") the absolute resolution of rigor mortis occurred on the 28th day. Our results prove that it is possible to find a persistence of rigor mortis that is much longer than expected when environmental conditions resemble average outdoor winter temperatures in temperate zones. Therefore, this datum must be considered when a corpse is found in those environmental conditions, so that when estimating the time of death we are not misled by the long persistence of rigor mortis.

  8. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  9. Benefit of Complete State Monitoring For GPS Realtime Applications With Geo++ Gnsmart

    NASA Astrophysics Data System (ADS)

    Wübbena, G.; Schmitz, M.; Bagge, A.

    Today, the demand for precise positioning at the cm-level in realtime is growing worldwide. An indication for this is the number of operational RTK network installations, which use permanent reference station networks to derive corrections for distance dependent GPS errors and to supply corrections to RTK users in realtime. Generally, the inter-station distances in RTK networks are selected at several tens of km, and operational installations cover areas of up to 50000 km². However, the separation of the permanent reference stations can be increased to several hundred km, while a correct modeling of all error components is applied. Such networks can be termed sparse RTK networks, which cover larger areas with a reduced number of stations. The undifferenced GPS observable is best suited for this task, estimating the complete state of a permanent GPS network in a dynamic recursive Kalman filter. A rigorous adjustment of all simultaneous reference station data is required. The sparse network design essentially supports the state estimation through its large spatial extension. The benefit of the approach and its state modeling of all GPS error components is a successful ambiguity resolution in realtime over long distances. The above concepts are implemented in the operational GNSMART (GNSS State Monitoring and Representation Technique) software of Geo++. It performs a state monitoring of all error components at the mm-level, because for RTK networks this accuracy is required to sufficiently represent the distance dependent errors for kinematic applications. One key issue of the modeling is the estimation of clocks and hardware delays in the undifferenced approach. This prerequisite subsequently allows for the precise separation and modeling of all other error components. Generally most of the estimated parameters are considered as nuisance parameters with respect to pure positioning tasks. As the complete state vector of GPS errors is available in a GPS realtime network, additional information besides position can be derived, e.g. regional precise satellite clocks, orbits, total ionospheric electron content, tropospheric water vapor distribution, and also dynamic reference station movements. The models of GNSMART are designed to work with regional, continental or even global data. Results from GNSMART realtime networks with inter-station distances of several hundred km are presented to demonstrate the benefits of the operationally implemented concepts.
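
    At the heart of such state monitoring is a recursive Kalman filter that tracks slowly varying error states from the network observations. The heavily simplified sketch below uses one station, two scalar states (a receiver clock bias and a tropospheric zenith delay, both modeled as random walks) and synthetic measurements; it shows only the filter mechanics, not GNSMART's actual models:

        import numpy as np

        rng = np.random.default_rng(0)
        n_epochs, dt = 200, 1.0

        # State x = [clock bias (m), tropo zenith delay (m)], both modeled as random walks
        F = np.eye(2)                                  # state transition
        Q = np.diag([1e-2, 1e-6]) * dt                 # process noise (assumed spectral densities)
        H = np.array([[1.0, 1.0],                      # two observations see clock + troposphere
                      [1.0, 2.0]])                     #   with different tropo mapping factors
        R = np.diag([0.05**2, 0.05**2])                # measurement noise

        x_true = np.array([0.0, 2.3])
        x_est, P = np.zeros(2), np.eye(2)

        for _ in range(n_epochs):
            # Truth evolves as a random walk; measurements are noisy linear combinations
            x_true = x_true + rng.multivariate_normal(np.zeros(2), Q)
            z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)

            # Kalman prediction and update
            x_est, P = F @ x_est, F @ P @ F.T + Q
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x_est = x_est + K @ (z - H @ x_est)
            P = (np.eye(2) - K @ H) @ P

        print("true state     :", np.round(x_true, 3))
        print("filtered state :", np.round(x_est, 3))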

  10. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A. F.; Jacobs, C. S.

    2011-01-01

    The standard VLBI analysis models measurement noise as purely thermal errors following uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper will discuss the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We will show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.
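
    Correlated tropospheric noise can be folded into the estimation by replacing the diagonal weight matrix with a full covariance derived from a structure function. The sketch below uses a simple exponential structure function as a positive-definite stand-in for the Kolmogorov/frozen-flow statistics and compares parameter uncertainties under white-noise and correlated-noise assumptions (all numbers illustrative):

        import numpy as np

        t = np.arange(0.0, 600.0, 10.0)              # observation epochs [s]
        sigma2 = (3e-3) ** 2                         # delay variance [m^2] (assumed)

        # Structure function D(tau) = 2*sigma2*(1 - exp(-tau/tau_c)), a simple stand-in
        # for the Kolmogorov/frozen-flow delay statistics; C(tau) = sigma2 - D(tau)/2
        tau_c = 200.0
        tau = np.abs(t[:, None] - t[None, :])
        D = 2.0 * sigma2 * (1.0 - np.exp(-tau / tau_c))
        C = sigma2 - 0.5 * D + 1e-12 * np.eye(len(t))

        # Simple linear model: estimate a constant delay offset and a rate
        A = np.column_stack([np.ones_like(t), t - t.mean()])

        cov_ols = np.linalg.inv(A.T @ A) * sigma2               # white-noise assumption
        cov_gls = np.linalg.inv(A.T @ np.linalg.solve(C, A))    # correlated-noise weighting

        print("1-sigma rate uncertainty, white noise assumed :", np.sqrt(cov_ols[1, 1]))
        print("1-sigma rate uncertainty, correlated noise    :", np.sqrt(cov_gls[1, 1]))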

  11. Quantitative Analysis Tools and Digital Phantoms for Deformable Image Registration Quality Assurance.

    PubMed

    Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W

    2015-08-01

    This article proposes quantitative analysis tools and digital phantoms to quantify intrinsic errors of deformable image registration (DIR) systems and to establish quality assurance (QA) procedures for clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set, R, and a test image set, T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set R', T, and DVFref forms a realistic truth set and can therefore be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis (calculating and delineating differences between DVFs), two methods were used: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers. The DIR QA procedure was evaluated using the head and neck case. © The Author(s) 2014.
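
    The two analysis tools described, the local error-magnitude map and the per-structure cumulative error histogram, reduce to a few array operations once the reference and test DVFs are sampled on the same grid. A numpy sketch with synthetic fields and a hypothetical structure mask:

        import numpy as np

        rng = np.random.default_rng(0)
        shape = (40, 64, 64)                           # hypothetical image grid (z, y, x)

        # Reference and test deformation vector fields, in mm, with 3 components per voxel
        dvf_ref  = rng.normal(0.0, 2.0, shape + (3,))
        dvf_test = dvf_ref + rng.normal(0.0, 0.5, shape + (3,))  # test DIR with synthetic errors

        # Local error analysis: per-voxel error magnitude (could be colour-mapped slice by slice)
        err_mag = np.linalg.norm(dvf_test - dvf_ref, axis=-1)
        mid = shape[0] // 2
        print(f"slice {mid}: mean error {err_mag[mid].mean():.2f} mm, max {err_mag[mid].max():.2f} mm")

        # Global error analysis: cumulative error histogram inside one anatomical structure
        structure_mask = np.zeros(shape, dtype=bool)
        structure_mask[15:25, 20:40, 20:40] = True     # hypothetical ROI standing in for a contour
        errs = np.sort(err_mag[structure_mask])
        cdf = np.arange(1, errs.size + 1) / errs.size
        for q in (0.5, 0.9, 0.95):
            print(f"{int(q*100)}% of ROI voxels have error <= {errs[np.searchsorted(cdf, q)]:.2f} mm")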

  12. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface, and the measured data form the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed by examining error frequency and by applying the analysis-of-variance method from mathematical statistics. The paper also addresses the accuracy of the measured data, the difficulty of measuring particular parts of the human body, the causes of data errors, and the key points for minimizing errors as far as possible. This paper analyses the measured data based on error frequency and, in this way, provides reference material to support the development of the garment industry.
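
    The two analysis steps mentioned, tabulating error frequencies and testing between-group differences with analysis of variance, are routine; a small sketch with made-up repeated measurements of one body dimension:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Hypothetical repeated waist measurements (cm) of the same subjects by three operators
        operator_a = 78.0 + rng.normal(0.0, 0.6, 30)
        operator_b = 78.4 + rng.normal(0.0, 0.6, 30)   # small systematic offset
        operator_c = 78.1 + rng.normal(0.0, 1.2, 30)   # larger random error

        # Error-frequency view: how often each operator deviates from the reference by > 1 cm
        reference = 78.0
        for name, m in [("A", operator_a), ("B", operator_b), ("C", operator_c)]:
            freq = np.mean(np.abs(m - reference) > 1.0)
            print(f"operator {name}: error frequency (>1 cm) = {freq:.2f}")

        # One-way analysis of variance across operators
        f_stat, p_value = stats.f_oneway(operator_a, operator_b, operator_c)
        print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")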

  13. Error Analysis in Mathematics. Technical Report #1012

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  14. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to deduce the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the error propagation of the primitive input errors through the stereo system and throughout the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
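
    Propagating pixel-level input errors to the 3D point follows the usual first-order pattern: linearize the triangulation function and map the input covariance through its Jacobian. A compact sketch for an idealized rectified stereo pair (assumed baseline and focal length; not the paper's five-parameter rotation formulation):

        import numpy as np

        # Idealized rectified stereo rig (assumed parameters)
        f_px, baseline = 800.0, 0.12          # focal length [px], baseline [m]

        def triangulate(params):
            """Disparity-based triangulation for a rectified pair: inputs (xl, yl, xr) in pixels."""
            xl, yl, xr = params
            Z = f_px * baseline / (xl - xr)
            return np.array([xl * Z / f_px, yl * Z / f_px, Z])

        p0 = np.array([45.0, -12.0, 25.0])    # measured pixel coordinates (left x, left y, right x)
        sigma_px = 0.3
        cov_in = sigma_px ** 2 * np.eye(3)    # uncorrelated pixel errors (assumption)

        # Numerical Jacobian of the triangulation at the operating point
        J = np.empty((3, 3))
        for j in range(3):
            d = np.zeros(3); d[j] = 1e-4
            J[:, j] = (triangulate(p0 + d) - triangulate(p0 - d)) / 2e-4

        cov_out = J @ cov_in @ J.T            # first-order propagation to the 3D point
        print("3D point [m]          :", np.round(triangulate(p0), 3))
        print("1-sigma location error:", np.round(np.sqrt(np.diag(cov_out)), 3), "m")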

  15. Rigor Made Easy: Getting Started

    ERIC Educational Resources Information Center

    Blackburn, Barbara R.

    2012-01-01

    Bestselling author and noted rigor expert Barbara Blackburn shares the secrets to getting started, maintaining momentum, and reaching your goals. Learn what rigor looks like in the classroom, understand what it means for your students, and get the keys to successful implementation. Learn how to use rigor to raise expectations, provide appropriate…

  16. Close Early Learning Gaps with Rigorous DAP

    ERIC Educational Resources Information Center

    Brown, Christopher P.; Mowry, Brian

    2015-01-01

    Rigorous DAP (developmentally appropriate practices) is a set of 11 principles of instruction intended to help close early childhood learning gaps. Academically rigorous learning environments create the conditions for children to learn at high levels. While academic rigor focuses on one dimension of education--academic--DAP considers the whole…

  17. Civil Rights Project's Response to "Re-Analysis" of Charter School Study

    ERIC Educational Resources Information Center

    Civil Rights Project / Proyecto Derechos Civiles, 2010

    2010-01-01

    The Civil Rights Project (CRP) was founded, in part, to bring rigorous social science inquiry to bear on the most pressing civil rights issues. On-going trends involving public school segregation have been a primary focus of the CRP's research, and the expanding policy emphasis on school choice prompted analysis of the much smaller--but…

  18. A Meta-Analysis of Single-Subject Research on Behavioral Momentum to Enhance Success in Students with Autism

    ERIC Educational Resources Information Center

    Cowan, Richard J.; Abel, Leah; Candel, Lindsay

    2017-01-01

    We conducted a meta-analysis of single-subject research studies investigating the effectiveness of antecedent strategies grounded in behavioral momentum for improving compliance and on-task performance for students with autism. First, we assessed the research rigor of those studies meeting our inclusionary criteria. Next, in order to apply a…

  19. A Review of the Application of Lifecycle Analysis to Renewable Energy Systems

    ERIC Educational Resources Information Center

    Lund, Chris; Biswas, Wahidul

    2008-01-01

    The lifecycle concept is a "cradle to grave" approach to thinking about products, processes, and services, recognizing that all stages have environmental and economic impacts. Any rigorous and meaningful comparison of energy supply options must be done using a lifecycle analysis approach. It has been applied to an increasing number of conventional…

  20. Considerations for the Systematic Analysis and Use of Single-Case Research

    ERIC Educational Resources Information Center

    Horner, Robert H.; Swaminathan, Hariharan; Sugai, George; Smolkowski, Keith

    2012-01-01

    Single-case research designs provide a rigorous research methodology for documenting experimental control. If single-case methods are to gain wider application, however, a need exists to define more clearly (a) the logic of single-case designs, (b) the process and decision rules for visual analysis, and (c) an accepted process for integrating…
