Fellin, Francesco; Righetto, Roberto; Fava, Giovanni; Trevisan, Diego; Amelio, Dante; Farace, Paolo
2017-03-01
To investigate the range errors made in treatment planning due to the presence of immobilization devices along the proton beam path. The water equivalent thickness (WET) of selected devices was measured using a high-energy spot and a multi-layer ionization chamber and compared with that predicted by the treatment planning system (TPS). Two treatment couches, two thermoplastic masks (both un-stretched and stretched) and one headrest were selected. In the TPS, every immobilization device was modelled as being part of the patient. The following parameters were assessed: CT acquisition protocol, dose-calculation grid-size (1.5 and 3.0 mm) and beam entrance with respect to the devices (coplanar and non-coplanar). Finally, the potential errors produced by an incorrect manual separation between the treatment couch and the CT table (the latter not present during treatment) were investigated. In the thermoplastic masks, there was a clear effect due to beam entrance, a moderate effect due to the CT protocol and almost no effect due to TPS grid-size, with 1 mm errors observed only when thick un-stretched portions were crossed by non-coplanar beams. In the treatment couches the WET errors were negligible (<0.3 mm) regardless of grid-size and CT protocol. The potential range errors produced by the manual separation between treatment couch and CT table were small with the 1.5 mm grid-size, but could be >0.5 mm with the 3.0 mm grid-size. In the headrest, WET errors were negligible (0.2 mm). With only one exception (un-stretched mask, non-coplanar beams), the WET of all the immobilization devices was properly modelled by the TPS. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
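The comparison described above reduces, per device and beam arrangement, to differencing the TPS-predicted WET against the measured one and flagging deviations beyond a tolerance. A minimal sketch of that bookkeeping; all WET values and the 1 mm action level are hypothetical illustrations, not the study's data:

```python
# Hypothetical WET values in mm, per immobilization device; illustrative
# only, not the values reported in the study.
devices = {
    "couch_A": {"tps": 12.1, "measured": 12.3},
    "mask_stretched": {"tps": 1.8, "measured": 1.9},
    "mask_unstretched": {"tps": 4.0, "measured": 5.1},
    "headrest": {"tps": 7.5, "measured": 7.7},
}

def wet_error(tps_mm, measured_mm):
    """Range error introduced if the TPS over- or underestimates the WET."""
    return tps_mm - measured_mm

for name, d in devices.items():
    err = wet_error(d["tps"], d["measured"])
    flag = "REVIEW" if abs(err) > 1.0 else "ok"  # illustrative 1 mm action level
    print(f"{name:16s} dWET = {err:+.1f} mm  [{flag}]")
```

In this toy table only the un-stretched mask exceeds the action level, mirroring the paper's qualitative finding that it was the single poorly modelled case.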
Finite-difference modeling with variable grid-size and adaptive time-step in porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Wu, Guochen
2014-04-01
Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step, and it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme with variable grid-size and time-step for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
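As a point of reference for the scheme described, the staggered-grid update structure can be sketched in 1D for the acoustic case on a uniform grid. The paper's actual contributions (variable grid-size/time-step and the poroelastic equations) are not reproduced here, and all material values are toy numbers:

```python
import numpy as np

# 1D staggered-grid velocity-stress update (acoustic, uniform grid):
# pressure lives on half nodes, velocity on integer nodes, and the two
# fields are leapfrogged in time. Toy parameters; CFL = vp*dt/dx = 0.3.
nx, nt = 200, 300
dx, dt = 5.0, 5e-4            # m, s
rho, vp = 2000.0, 3000.0      # density (kg/m^3), P velocity (m/s)
k = rho * vp**2               # bulk modulus

v = np.zeros(nx)              # particle velocity at integer nodes
p = np.zeros(nx - 1)          # pressure at staggered half nodes

for it in range(nt):
    p += -k * dt / dx * np.diff(v)            # dp/dt = -k dv/dx
    v[1:-1] += -dt / (rho * dx) * np.diff(p)  # dv/dt = -(1/rho) dp/dx
    # Gaussian source wavelet injected at the center node:
    v[nx // 2] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)

print(float(np.max(np.abs(v))))
```

A variable-grid version of this scheme would, as the abstract notes, need interpolation of `v` and `p` and modified difference coefficients wherever `dx` changes.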
2007-12-21
2.4 Implementation of non-uniform gridsize. The numerical method has been extended to allow non-uniform gridsizes in the x and y directions, though the...and the vertical excursion of the swash motion A is expressed as A = 0.125 π a_in s T √(g/h_0). Figures 3 and 4 compare the XBeach results with the...A. Van Gent, A. J. H. M. Reniers, and D. J. R. Walstra (2008), Analysis of dune erosion processes in large scale flume experiments, submitted to
Fast and accurate 3D tensor calculation of the Fock operator in a general basis
NASA Astrophysics Data System (ADS)
Khoromskaia, V.; Andrae, D.; Khoromskij, B. N.
2012-11-01
The present paper contributes to the construction of a “black-box” 3D solver for the Hartree-Fock equation by grid-based tensor-structured methods. It focuses on the calculation of the Galerkin matrices for the Laplace and the nuclear potential operators by tensor operations, using a generic set of basis functions with low separation rank discretized on a fine N×N×N Cartesian grid. We prove a Ch2 error estimate in terms of the mesh parameter, h=O(1/N), which guarantees the accuracy of the core Hamiltonian part of the Fock operator as h→0. However, the commonly used problem-adapted basis functions have low regularity, yielding a considerable increase of the constant C and hence demanding a rather large grid-size N of several tens of thousands to ensure high resolution. Modern tensor-formatted arithmetic of complexity O(N), or even O(log N), practically relaxes the limitations on the grid-size. Our tensor-based approach makes it possible to improve significantly the standard basis sets in quantum chemistry by including simple combinations of Slater-type, local finite element and other basis functions. Numerical experiments for moderate-size organic molecules show the efficiency and accuracy of grid-based calculations of the core Hamiltonian in the range of grid parameter N^3 ≈ 10^15.
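The quoted Ch2 estimate can be checked empirically on any smooth model problem: halving h should divide the error by about four. A toy demonstration with a centered second difference, which shares the O(h^2) signature but is otherwise unrelated to the paper's tensor solver:

```python
import numpy as np

# Empirical O(h^2) convergence check: approximate f''(x0) for f = exp
# with the centered 3-point stencil and halve h. The error behaves like
# C*h^2, so consecutive error ratios approach 4.
def second_diff(f, x0, h):
    return (f(x0 - h) - 2.0 * f(x0) + f(x0 + h)) / h**2

x0 = 0.5
exact = np.exp(x0)  # for f = exp, f'' = f
errs = [abs(second_diff(np.exp, x0, h) - exact) for h in (0.1, 0.05, 0.025)]
rates = [errs[i] / errs[i + 1] for i in range(2)]
print(rates)  # both ratios close to 4, confirming second order
```

The abstract's point is that C itself blows up for low-regularity basis functions, which this smooth toy cannot show; only the h-dependence is illustrated.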
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badkul, R; Nejaiman, S; Pokhrel, D
2015-06-15
Purpose: Skin dose can be the limiting factor, and a fairly common reason to interrupt treatment, especially when treating head-and-neck with intensity-modulated radiation therapy (IMRT) or volumetrically-modulated arc therapy (VMAT) and breast with tangentially-directed beams. The aim of this study was to investigate the accuracy of near-surface dose predicted by the Eclipse treatment planning system (TPS) using the Anisotropic Analytic Algorithm (AAA) with varying calculation grid-size, comparing with metal-oxide-semiconductor field-effect transistor (MOSFET) measurements for a range of clinical conditions (open-field, dynamic-wedge, physical-wedge, IMRT, VMAT). Methods: The QUASAR™ Body Phantom was used in this study, with oval curved surfaces to mimic breast, chest wall and head-and-neck sites. A CT scan was obtained with five radio-opaque markers (ROM) placed on the surface of the phantom to mimic the range of incident angles for measurements and dose prediction, using 2 mm slice thickness. At each ROM, a small structure (1 mm × 2 mm) was contoured to obtain mean doses from the TPS. Calculations were performed for open-field, dynamic-wedge, physical-wedge, IMRT and VMAT using a Varian 21EX, 6 and 15 MV photons, and two grid-sizes: 2.5 mm and 1 mm. Calibration checks were performed to ensure that MOSFET responses were within ±5%. Surface doses were measured at five locations and compared with TPS calculations.
Results: For 6 MV with the 2.5 mm grid-size, mean calculated doses (MCD) were higher by 10%(±7.6), 10%(±7.6), 20%(±8.5), 40%(±7.5), 30%(±6.9), and for the 1 mm grid-size MCD were higher by 0%(±5.7), 0%(±4.2), 0%(±5.5), 1.2%(±5.0), 1.1%(±7.8) for open-field, dynamic-wedge, physical-wedge, IMRT and VMAT respectively. For 15 MV with the 2.5 mm grid-size, MCD were higher by 30%(±14.6), 30%(±14.6), 30%(±14.0), 40%(±11.0), 30%(±3.5), and for the 1 mm grid-size MCD were higher by 10%(±10.6), 10%(±9.8), 10%(±8.0), 30%(±7.8), 10%(±3.8) for open-field, dynamic-wedge, physical-wedge, IMRT and VMAT respectively. For 6 MV, 86% and 56% of all measured values agreed better than ±20% for the 1 mm and 2.5 mm grid-sizes respectively. For 18 MV, 56% and 18% of all measured values agreed better than ±20% for the 1 mm and 2.5 mm grid-sizes respectively. Conclusion: Reliable skin-dose calculation by a TPS can be very difficult due to the steep dose gradient and inaccurate beam modelling in the buildup region. Our results showed that Eclipse over-estimates surface dose. The impact of grid-size is also significant: surface dose increased by up to 40% from 1 mm to 2.5 mm; however, 1 mm calculated values closely agree with measurements. Due to large uncertainties in skin-dose predictions from the TPS, the utmost caution must be exercised when skin dose is evaluated; a sufficiently small grid-size (1 mm) can improve the accuracy, and MOSFETs can be used for verification.
Faster and more accurate transport procedures for HZETRN
NASA Astrophysics Data System (ADS)
Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.
2010-12-01
The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water.
The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
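The convergence study described, refining a discretization parameter and measuring error against a reference solution, can be mimicked on a toy attenuation equation. This sketch uses a first-order marching step (so the error ratio per halving is about 2) with illustrative parameters; HZETRN's actual marching algorithms and energy grids are far more involved:

```python
import math

# Toy step-size convergence study: march d(phi)/dx = -sigma*phi through a
# slab with an explicit first-order update, refine the step-size, and
# measure error against the analytic solution exp(-sigma*depth).
def march(sigma, depth, n_steps):
    phi, dx = 1.0, depth / n_steps
    for _ in range(n_steps):
        phi *= (1.0 - sigma * dx)  # explicit first-order update
    return phi

sigma, depth = 0.05, 10.0          # per g/cm^2, g/cm^2 (illustrative)
exact = math.exp(-sigma * depth)
errors = [abs(march(sigma, depth, n) - exact) for n in (10, 20, 40, 80)]
print([errors[i] / errors[i + 1] for i in range(3)])  # ratios near 2
```

Plotting such error ratios against the refinement level is the standard way to expose whether a code's discretization parameters satisfy its convergence criterion, which is the kind of diagnosis the abstract reports.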
Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions
Mehl, S.; Hill, M.C.
2010-01-01
This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
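A Cauchy (head-dependent) stream boundary of the kind examined computes leakage from a conductance times a head difference. The sketch below uses the generic textbook conductance form with hypothetical values, not the paper's three specific calculation methods:

```python
# Head-dependent (Cauchy) stream-aquifer exchange for one grid cell:
# leakage Q = C * (h_stream - h_aquifer), with conductance C lumped from
# streambed properties over the cell. All values are hypothetical.
def conductance(k_bed, length, width, thickness):
    """Streambed conductance for one cell [m^2/day]."""
    return k_bed * length * width / thickness

def leakage(c, h_stream, h_aquifer):
    """Positive = stream loses water to the aquifer [m^3/day]."""
    return c * (h_stream - h_aquifer)

c_coarse = conductance(0.5, 100.0, 10.0, 1.0)  # K=0.5 m/d, 100 m reach
q_coarse = leakage(c_coarse, 105.0, 104.2)
print(q_coarse)
```

Halving the cell length halves each cell's conductance, and the summed leakage is unchanged only if the simulated heads were uniform along the reach; the grid dependence the paper analyzes arises precisely because simulated heads change as the grid is refined.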
NASA Technical Reports Server (NTRS)
Vogel, Bernhard; Vogel, Heike; Fiedler, Franz
1994-01-01
A model system is presented that takes into account the main physical and chemical processes on the regional scale, here in an area of 100 x 100 sq km. The horizontal gridsize used is 2 x 2 sq km. For a case study, it is demonstrated how the model system can be used to separate the contributions of the processes of advection, turbulent diffusion, and chemical reactions to the diurnal cycle of ozone. In this way, typical features which are visible in observations and are reproduced by the numerical simulations can be interpreted.
Multidimensional radiative transfer with multilevel atoms. II. The non-linear multigrid method.
NASA Astrophysics Data System (ADS)
Fabiani Bendicho, P.; Trujillo Bueno, J.; Auer, L.
1997-08-01
A new iterative method for solving non-LTE multilevel radiative transfer (RT) problems in 1D, 2D or 3D geometries is presented. The scheme obtains the self-consistent solution of the kinetic and RT equations at the cost of only a few (<10) formal solutions of the RT equation. It combines, for the first time, non-linear multigrid iteration (Brandt, 1977, Math. Comp. 31, 333; Hackbusch, 1985, Multi-Grid Methods and Applications, Springer-Verlag, Berlin), an efficient multilevel RT scheme based on Gauss-Seidel iterations (cf. Trujillo Bueno & Fabiani Bendicho, 1995ApJ...455..646T), and accurate short-characteristics formal solution techniques. By combining a valid stopping criterion with a nested-grid strategy, a converged solution with the desired true error is automatically guaranteed. Contrary to current operator splitting methods, the very high convergence speed of the new RT method does not deteriorate when the grid spatial resolution is increased. With this non-linear multigrid method, non-LTE problems discretized on N grid points are solved in O(N) operations. The nested multigrid RT method presented here is thus particularly attractive in complicated multilevel transfer problems where small grid-sizes are required. The properties of the method are analyzed both analytically and with illustrative multilevel calculations for Ca II in 1D and 2D schematic model atmospheres.
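The multigrid machinery referred to can be illustrated, in its simplest linear form, by a two-grid correction scheme with Gauss-Seidel smoothing for the 1D Poisson problem. This is only the skeleton of the idea (smooth on the fine grid, solve the residual equation on a coarser grid, correct, smooth again), not the paper's nonlinear, nested RT scheme:

```python
import numpy as np

# Two-grid correction scheme for -u'' = f on [0,1], u(0)=u(1)=0.
def gauss_seidel(u, f, h, sweeps):
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def coarse_solve(r2, h2):
    """Direct solve of the coarse-grid correction equation A_2h e = r."""
    m = len(r2) - 2
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h2**2
    e = np.zeros_like(r2)
    e[1:-1] = np.linalg.solve(A, r2[1:-1])
    return e

def two_grid(u, f, h):
    u = gauss_seidel(u, f, h, 3)                                  # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2   # residual
    e2 = coarse_solve(r[::2].copy(), 2.0 * h)                     # restrict + solve
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e2) # prolong
    return gauss_seidel(u + e, f, h, 3)                           # post-smooth

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)   # exact solution: u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
print(err)   # down at the O(h^2) discretization-error level
```

The key property the abstract emphasizes, convergence speed independent of grid resolution, comes from recursing this two-grid step over a nested hierarchy of grids instead of solving the coarse problem directly.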
NASA Astrophysics Data System (ADS)
Nasta, Paolo; Penna, Daniele; Brocca, Luca; Zuecco, Giulia; Romano, Nunzio
2018-02-01
Indirect measurements of field-scale (hectometer grid-size) spatial-average near-surface soil moisture are becoming increasingly available by exploiting new-generation ground-based and satellite sensors. Nonetheless, modeling applications for water resources management require knowledge of plot-scale (1-5 m grid-size) soil moisture obtained through spatially-distributed sensor network systems. Since fulfilling such requirements is not always possible due to time and budget constraints, alternative approaches are desirable. In this study, we explore the feasibility of determining spatial-average soil moisture and soil moisture patterns given long-term records of climate forcing data and topographic attributes. A downscaling approach is proposed that couples two different models: the Eco-Hydrological Bucket and Equilibrium Moisture from Topography. This approach helps identify the relative importance of two compound topographic indexes in explaining the spatial variation of soil moisture patterns, indicating valley- and hillslope-dependence controlled by lateral flow and radiative processes, respectively. The integrated model also detects temporal instability if the dominant type of topographic dependence changes with spatial-average soil moisture. Model application was carried out at three sites in different parts of Italy, each characterized by different environmental conditions. Prior calibration was performed by using sparse and sporadic soil moisture values measured by portable time domain reflectometry devices. Cross-site comparisons offer different interpretations of the explained spatial variation of soil moisture patterns, with time-invariant valley-dependence (site in northern Italy) and hillslope-dependence (site in southern Italy).
The sources of soil moisture spatial variation at the site in central Italy are time-variant within the year and the seasonal change of topographic dependence can be conveniently correlated to a climate indicator such as the aridity index.
NASA Astrophysics Data System (ADS)
Khoromskaia, Venera; Khoromskij, Boris N.
2014-12-01
Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on the assembled low-rank canonical tensor representation of the collected potentials using pointwise sums of shifted canonical vectors representing the single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid-size, O(N), while the numerical cost is estimated by O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid-size of a unit cell, n = N/L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N^3 log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using data-sparse representation of canonical N-vectors via the quantics tensor approximation. For justification, we prove an upper bound on the quantics ranks of the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, scalar product with a function, integration or differentiation, which can be performed easily in tensor arithmetics on large 3D grids at 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.
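The factorization that makes assembled summation cheap can be demonstrated in the simplest case of a rank-1 (separable) generating function, where the sum over a full rectangular lattice of shifts splits exactly into three 1D assembled sums. A Gaussian stands in here for the paper's rank-R Newton kernel, since a Gaussian is exactly separable:

```python
import numpy as np

# Assembled lattice sum for a rank-1 generator g(x)g(y)g(z): summing
# L*L*L shifted copies factorizes into three 1D sums of shifted vectors,
# so only 1D arrays of length N are ever stored.
N, L, step = 64, 4, 8            # grid points, lattice size, shift in cells
x = np.linspace(-6.0, 6.0, N)
g = np.exp(-x**2)                # 1D canonical vector of the generator

def assembled_1d(g, L, step):
    """Pointwise sum of L copies of g, each shifted by `step` cells."""
    s = np.zeros_like(g)
    for k in range(L):
        s += np.roll(g, k * step)   # periodic shift, as in a unit cell
    return s

sx = assembled_1d(g, L, step)
lattice_sum = np.einsum("i,j,k->ijk", sx, sx, sx)  # rank-1 reconstruction

# Direct O(L^3) summation for comparison:
direct = np.zeros((N, N, N))
for k1 in range(L):
    for k2 in range(L):
        for k3 in range(L):
            direct += np.einsum("i,j,k->ijk",
                                np.roll(g, k1 * step),
                                np.roll(g, k2 * step),
                                np.roll(g, k3 * step))
print(np.allclose(lattice_sum, direct))  # True: identical fields
```

For a rank-R generator the same pointwise summation is applied to each of the R canonical vectors per direction, which is why the storage stays O(N) regardless of the number of summed potentials.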
2013-01-01
Objectives Health information technology (HIT) research findings have suggested that new healthcare technologies could reduce some types of medical errors while at the same time introducing new classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, models and frameworks used to understand these new types of errors, monitoring of such errors, and methods that can be used to prevent them. More research will be needed to better understand and mitigate these types of errors. PMID:23882411
Awareness of technology-induced errors and processes for identifying and preventing such errors.
Bellwood, Paule; Borycki, Elizabeth M; Kushniruk, Andre W
2015-01-01
There is a need to determine whether organizations working with health information technology are aware of technology-induced errors and how they are addressing and preventing them. The purpose of this study was to: a) determine the degree of technology-induced error awareness in various Canadian healthcare organizations, and b) identify those processes and procedures that are currently in place to help address, manage, and prevent technology-induced errors. We identified a lack of technology-induced error awareness among participants. Participants also reported a lack of well-defined procedures for reporting technology-induced errors, addressing them when they arise, and preventing them.
Methods for Addressing Technology-induced Errors: The Current State.
Borycki, E; Dexheimer, J W; Hullin Lucay Cossio, C; Gong, Y; Jensen, S; Kaipio, J; Kennebeck, S; Kirkendall, E; Kushniruk, A W; Kuziemsky, C; Marcilly, R; Röhrig, R; Saranto, K; Senathirajah, Y; Weber, J; Takeda, H
2016-11-10
The objectives of this paper are to review and discuss the methods that are being used internationally to report on, mitigate, and eliminate technology-induced errors. The IMIA Working Group for Health Informatics for Patient Safety worked together to review and synthesize some of the main methods and approaches associated with technology-induced error reporting, reduction, and mitigation. The work involved a review of the evidence-based literature as well as guideline publications specific to health informatics. The paper presents a rich overview of current approaches, issues, and methods associated with: (1) safe HIT design, (2) safe HIT implementation, (3) reporting on technology-induced errors, (4) technology-induced error analysis, and (5) health information technology (HIT) risk management. The work is based on research from around the world. Internationally, researchers have been developing methods that can be used to identify, report on, mitigate, and eliminate technology-induced errors. Although there remain issues and challenges associated with the methodologies, they have been shown to improve the quality and safety of HIT. Since the first publications documenting technology-induced errors in healthcare in 2005, researchers have, in a short 10 years, developed ways of identifying and addressing these types of errors. We have also seen organizations begin to use these approaches. Knowledge has been translated into practice in a short ten years, whereas the norm for other research areas is 20 years.
NASA Technical Reports Server (NTRS)
Miller, J. M.
1980-01-01
ATMOS is a Fourier transform spectrometer designed to measure atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of the system performance of ATMOS includes evaluation of optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering tasks, and line-of-sight and wavefront deformation predictions (using operational thermal and vibration environments and computer models) are subsequently compared to the error budgets. This paper discusses the thermal/structural error budgets, the modelling and analysis methods used to predict thermally and structurally induced errors, and the comparisons showing that predictions are within the error budgets.
Reduction of Orifice-Induced Pressure Errors
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.
1987-01-01
Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.
SEU induced errors observed in microprocessor systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asenek, V.; Underwood, C.; Oldfield, M.
In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.
Prediction error induced motor contagions in human behaviors.
Ikegami, Tsuyoshi; Ganesh, Gowrishankar; Takeuchi, Tatsuya; Nakamoto, Hiroki
2018-05-29
Motor contagions refer to implicit effects on one's actions induced by observed actions. Motor contagions are believed to be induced simply by action observation and to cause an observer's action to become similar to the action observed. In contrast, here we report a new motor contagion that is induced only when the observation is accompanied by prediction errors, that is, differences between the actions one observes and those one predicts or expects. In two experiments, one on whole-body baseball pitching and another on simple arm reaching, we show that the observation of the same action induces distinct motor contagions, depending on whether prediction errors are present or not. In the absence of prediction errors, as in previous reports, participants' actions changed to become similar to the observed action, while in the presence of prediction errors, their actions changed to diverge away from it, suggesting distinct effects of action observation and action prediction on human actions. © 2018, Ikegami et al.
Multipath induced errors in meteorological Doppler/interferometer location systems
NASA Technical Reports Server (NTRS)
Wallace, R. G.
1984-01-01
One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced errors.
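The size of such reflection-induced phase errors can be illustrated with the elementary two-path model: a direct signal plus one echo of relative amplitude rho and excess phase dphi. This is far simpler than the diffuse-scattering statistics the paper develops, but it shows the bounded phase deviation involved:

```python
import numpy as np

# Phase error of a unit direct signal corrupted by one echo: the measured
# phase of (1 + rho*exp(j*dphi)) deviates from the direct signal's phase
# by the angle below. Maximum deviation is arcsin(rho) for rho < 1.
def multipath_phase_error(rho, dphi):
    return np.angle(1.0 + rho * np.exp(1j * dphi))

dphi = np.linspace(0.0, 2.0 * np.pi, 7)
print(np.degrees(multipath_phase_error(0.2, dphi)).round(2))
```

With a 20% echo the phase error never exceeds about 11.5 degrees; the paper's point is that the effective rho, and hence this error, peaks near closest approach, where the sea-surface reflection is strongest.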
Towards a Framework for Managing Risk Associated with Technology-Induced Error.
Borycki, Elizabeth M; Kushniruk, Andre W
2017-01-01
Health information technologies (HIT) promised to streamline and modernize healthcare processes. However, a growing body of research has indicated that if such technologies are not designed, implemented or maintained properly, this may lead to an increased incidence of new types of errors which the authors have referred to as "technology-induced errors". In this paper, a framework is presented that can be used to manage HIT risk. The framework considers the reduction of technology-induced errors at different stages by managing the risks associated with the implementation of HIT. Frameworks that allow health information technology managers to employ proactive and preventative approaches to managing the risks associated with technology-induced errors are critical to improving HIT safety.
Proton upsets in LSI memories in space
NASA Technical Reports Server (NTRS)
Mcnulty, P. J.; Wyatt, R. C.; Filz, R. C.; Rothwell, P. L.; Farrell, G. E.
1980-01-01
Two types of large-scale integrated dynamic random access memory devices were tested and found to be subject to soft errors when exposed to protons incident at energies between 18 and 130 MeV. These errors are shown to differ significantly from those induced in the same devices by alphas from an Am-241 source. There is considerable variation among devices in their sensitivity to proton-induced soft errors, even among devices of the same type. For protons incident at 130 MeV, the soft error cross sections measured in these experiments varied from 10^-8 to 10^-6 sq cm/proton. For individual devices, however, the soft error cross section consistently increased with beam energy from 18 to 130 MeV. Analysis indicates that the soft errors induced by energetic protons result from spallation interactions between the incident protons and the nuclei of the atoms comprising the device. Because energetic protons are the most numerous of both the galactic and solar cosmic rays and form the inner radiation belt, proton-induced soft errors have potentially serious implications for many electronic systems flown in space.
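As context for the cross sections quoted above, a soft-error cross section is conventionally the upset count normalized by the particle fluence of the exposure. A minimal sketch; the upset count and fluence below are invented for illustration, not taken from the experiments:

```python
def soft_error_cross_section(upsets, fluence_per_cm2):
    """Soft-error cross section in cm^2/particle: the number of observed
    upsets normalized by the particle fluence (particles/cm^2) delivered
    during the exposure."""
    return upsets / fluence_per_cm2

# Hypothetical exposure: 240 upsets after 3e8 protons/cm^2 yields a cross
# section within the 10^-8 to 10^-6 sq cm/proton range quoted above.
sigma = soft_error_cross_section(240, 3e8)
```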
Pointing error analysis of Risley-prism-based beam steering system.
Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng
2014-09-01
Based on the vector form Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems, induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and evaluate the allowances of the error sources for a given pointing accuracy. It is found that the assembly errors of the second prism will result in more remarkable pointing errors in contrast with the first one. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is planned. All conclusions can provide a theoretical foundation for practical works.
Temperature Dependence of Faraday Effect-Induced Bias Error in a Fiber Optic Gyroscope
Li, Xuyou; Guang, Xingxing; Xu, Zhenlong; Li, Guangchun
2017-01-01
Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environments, such as magnetic field and temperature field variation, is necessary for its practical applications. This paper presents an investigation of Faraday effect-induced bias error of IFOG under varying temperature. Jones matrix method is utilized to formulize the temperature dependence of Faraday effect-induced bias error. Theoretical results show that the Faraday effect-induced bias error changes with the temperature in the non-skeleton polarization maintaining (PM) fiber coil. This phenomenon is caused by the temperature dependence of linear birefringence and Verdet constant of PM fiber. Particularly, Faraday effect-induced bias errors of two polarizations always have opposite signs that can be compensated optically regardless of the changes of the temperature. Two experiments with a 1000 m non-skeleton PM fiber coil are performed, and the experimental results support these theoretical predictions. This study is promising for improving the bias stability of IFOG. PMID:28880203
Anomalous annealing of floating gate errors due to heavy ion irradiation
NASA Astrophysics Data System (ADS)
Yin, Yanan; Liu, Jie; Sun, Youmei; Hou, Mingdong; Liu, Tianqi; Ye, Bing; Ji, Qinggang; Luo, Jie; Zhao, Peixiong
2018-03-01
Using the heavy ions provided by the Heavy Ion Research Facility in Lanzhou (HIRFL), the annealing of heavy-ion-induced floating gate (FG) errors in 34 nm and 25 nm NAND Flash memories has been studied. The single event upset (SEU) cross section of FG cells and the evolution of the errors after irradiation are presented as functions of the ion linear energy transfer (LET) value, data pattern and feature size of the device. Different annealing rates for different ion LETs and different data patterns are observed in the 34 nm and 25 nm memories. The variation with annealing time of the percentage of different error patterns in the 34 nm and 25 nm memories shows that the annealing of heavy-ion-induced FG errors mainly takes place in the cells directly hit under low-LET ion exposure, and also in other cells affected by heavy ions when the ion LET is higher. The influence of multiple cell upsets (MCUs) on the annealing of FG errors is analyzed. MCUs with high error multiplicity, which account for the majority of the errors, can induce a large percentage of annealed errors.
Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.
2004-01-01
Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.
Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging
Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C.
2017-01-01
Background: To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Methods: Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Results: Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation on the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed, but the phase modulation remained. The in vivo comparison between pulsed gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. Conclusions: A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory in removing the phase-error-induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed, the inconsistent signal phase modulation across shots has to be resolved before shot combination is performed. PMID:28516049
Radiation-Hardened Solid-State Drive
NASA Technical Reports Server (NTRS)
Sheldon, Douglas J.
2010-01-01
A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application-specific integrated circuit) or an FPGA (field-programmable gate array). Specific error-handling and data-management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, and much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
NASA Astrophysics Data System (ADS)
Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim
2017-09-01
Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10^-8 m/s^2 ≈ 10^-9 g). This paper reports on the experimental investigation of Raman-spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on Raman duration and frequency step size, present the magnetic field measurement offset induced by the vector and tensor light shifts, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic-Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and for reducing the quadratic-Zeeman-effect-induced systematic error in Raman-transition-based precision measurements, such as atom interferometer gravimeters.
Cheng, G.; Hu, X. H.; Choi, K. S.; ...
2017-07-08
Ductile fracture is a local phenomenon, and it is well established that fracture strain levels depend on both stress triaxiality and the resolution (grid size) of strain measurements. Two-dimensional plane strain post-necking models with different model sizes are used in this paper to predict the grid-size-dependent fracture strain of a commercial dual-phase steel, DP980. The models are generated from the actual microstructures, and the individual phase flow properties and literature-based individual phase damage parameters for the Johnson–Cook model are used for ferrite and martensite. A monotonic relationship is predicted: the smaller the model size, the higher the fracture strain. Thus, a general framework is developed to quantify the grid-size-dependent fracture strains for multiphase materials. In addition to the grid-size dependency, the influences of intrinsic microstructure features, i.e., the flow curve and fracture strains of the two constituent phases, on the predicted fracture strains also are examined. Finally, application of the derived fracture strain versus model size relationship is demonstrated with large clearance trimming simulations with different element sizes.
NASA Astrophysics Data System (ADS)
Kipp, Dylan; Ganesan, Venkat
2013-06-01
We develop a kinetic Monte Carlo model for photocurrent generation in organic solar cells that demonstrates improved agreement with experimental illuminated and dark current-voltage curves. In our model, we introduce a charge injection rate prefactor to correct for the electrode grid-size and electrode charge density biases apparent in the coarse-grained approximation of the electrode as a grid of single occupancy, charge-injecting reservoirs. We use the charge injection rate prefactor to control the portion of dark current attributed to each of four kinds of charge injection. By shifting the dark current between electrode-polymer pairs, we align the injection timescales and expand the applicability of the method to accommodate ohmic energy barriers. We consider the device characteristics of the ITO/PEDOT/PSS:PPDI:PBTT:Al system and demonstrate the manner in which our model captures the device charge densities unique to systems with small injection energy barriers. To elucidate the defining characteristics of our model, we first demonstrate the manner in which charge accumulation and band bending affect the shape and placement of the various current-voltage regimes. We then discuss the influence of various model parameters upon the current-voltage characteristics.
[Relations between health information systems and patient safety].
Nøhr, Christian
2012-11-05
Health information systems have the potential to reduce medical errors, and indeed many studies have shown a significant reduction. However, if the systems are not designed and implemented properly, there is evidence that suggests that new types of errors will arise, i.e., technology-induced errors. Health information systems will need to undergo more rigorous evaluation. Usability evaluation and simulation tests with humans in the loop can help to detect and prevent technology-induced errors before systems are deployed in real health-care settings.
Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L
2010-08-23
We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.
Unreliable numbers: error and harm induced by bad design can be reduced by better design
Thimbleby, Harold; Oladimeji, Patrick; Cairns, Paul
2015-01-01
Number entry is a ubiquitous activity and is often performed in safety- and mission-critical procedures, such as healthcare, science, finance, aviation and in many other areas. We show that Monte Carlo methods can quickly and easily compare the reliability of different number entry systems. A surprising finding is that many common, widely used systems are defective, and induce unnecessary human error. We show that Monte Carlo methods enable designers to explore the implications of normal and unexpected operator behaviour, and to design systems to be more resilient to use error. We demonstrate novel designs with improved resilience, implying that the common problems identified and the errors they induce are avoidable. PMID:26354830
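The Monte Carlo comparison described above can be sketched in a few lines. This toy model (independent single-digit keying slips, a plausibility interlock that forces re-entry, and all numeric parameters) is an invented illustration of the approach, not the entry systems or error models studied in the paper:

```python
import random

def keyed_value(intended, slip_prob, rng):
    """Simulate typing the digit string `intended`: each keystroke slips
    to a uniformly random wrong digit with probability slip_prob."""
    digits = []
    for ch in intended:
        if rng.random() < slip_prob:
            ch = rng.choice([d for d in "0123456789" if d != ch])
        digits.append(ch)
    return int("".join(digits))

def mean_abs_error(intended, slip_prob, range_check=None, trials=20000, seed=0):
    """Monte Carlo estimate of the mean magnitude error of an entry
    system; the optional range_check models an interlock that rejects
    implausible values and forces re-entry."""
    rng = random.Random(seed)
    target = int(intended)
    total = 0
    for _ in range(trials):
        value = keyed_value(intended, slip_prob, rng)
        while range_check is not None and not range_check(value):
            value = keyed_value(intended, slip_prob, rng)  # re-entry
        total += abs(value - target)
    return total / trials

# A +/-10% plausibility interlock blocks the large leading-digit slips,
# so the checked design yields a much smaller mean magnitude error:
plain = mean_abs_error("500", 0.02)
checked = mean_abs_error("500", 0.02, range_check=lambda v: 450 <= v <= 550)
```

Sweeping `slip_prob` or swapping in different entry rules is how such a simulation compares the resilience of competing designs.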
NASA Astrophysics Data System (ADS)
Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi
2016-08-01
This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation, and the results are then analyzed by Monte Carlo based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross section and neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We clarify that SERs decreased in response to an increase in BOX thickness for SOTB, while SERs in UTBB are independent of BOX thickness. We also discover that SOTB develops a higher tolerance to soft errors when reverse body bias is applied, while UTBB becomes more susceptible.
Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.
Van, Anh T; Hernando, Diego; Sutton, Bradley P
2011-11-01
A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.
Visuomotor adaptation needs a validation of prediction error by feedback error
Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle
2014-01-01
The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, viewing of their hand by subjects was allowed only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and attributed pointing errors to themselves, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated.
PMID:25408644
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit is employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values, with correlation coefficients of 60-90% and 67-90% for the IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table for the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method performs better and is more appropriate for estimating actual errors of ocean-color derived products than previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the derivation used.
Orifice-induced pressure error studies in Langley 7- by 10-foot high-speed tunnel
NASA Technical Reports Server (NTRS)
Plentovich, E. B.; Gloss, B. B.
1986-01-01
For some time it has been known that the presence of a static pressure measuring hole will disturb the local flow field in such a way that the sensed static pressure will be in error. The results of previous studies of the error induced by the pressure orifice were obtained at relatively low Reynolds numbers. Because of the advent of high Reynolds number transonic wind tunnels, a study was undertaken to assess the magnitude of this error at higher Reynolds numbers than previously published and to study a possible method of eliminating this pressure error. This study was conducted in the Langley 7- by 10-Foot High-Speed Tunnel on a flat plate. The model was tested at Mach numbers from 0.40 to 0.72 and at Reynolds numbers from 7.7 x 10^6 to 11 x 10^6 per meter (2.3 x 10^6 to 3.4 x 10^6 per foot), respectively. The results indicated that as orifice size increased, the pressure error also increased, but that a porous metal (sintered metal) plug inserted in an orifice could greatly reduce the pressure error induced by the orifice.
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high-energy component, from 10 MeV up to several hundreds of MeV, in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s^2 + P_i^2 + P_phi^2 - 2rP_sP_i]^(1/2), where P_A, P_s, P_i and P_phi are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
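The approximation formula above is straightforward to evaluate numerically. A minimal sketch; the constant C is left as a caller-supplied parameter (defaulted to 1.0 purely for illustration, since the abstract does not give its value), and the percentage errors used in the example are invented:

```python
import math

def age_percent_error(p_s, p_i, p_phi, r, c=1.0):
    """First-order percentage error of a fission-track age:
    P_A = C * sqrt(P_s^2 + P_i^2 + P_phi^2 - 2*r*P_s*P_i),
    where p_s, p_i, p_phi are the percentage errors of spontaneous track
    density, induced track density and neutron dose, and r is the
    correlation between spontaneous and induced track densities."""
    return c * math.sqrt(p_s**2 + p_i**2 + p_phi**2 - 2.0 * r * p_s * p_i)

# A positive correlation between the two track densities improves
# (reduces) the propagated age error, as the abstract notes:
uncorrelated = age_percent_error(5.0, 5.0, 2.0, r=0.0)
correlated = age_percent_error(5.0, 5.0, 2.0, r=0.8)
```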
Error Analysis and Validation for Insar Height Measurement Induced by Slant Range
NASA Astrophysics Data System (ADS)
Zhang, X.; Li, T.; Fan, W.; Geng, X.
2018-04-01
InSAR is an important technique for large-area DEM extraction. Several factors have a significant influence on the accuracy of height measurement. In this research, the effect of slant range measurement error on InSAR height measurement was analyzed and discussed. Based on the theory of InSAR height measurement, an error propagation model was derived assuming no coupling among different factors, which directly characterises the relationship between slant range error and height measurement error. A theory-based analysis in combination with TanDEM-X parameters was then implemented to quantitatively evaluate the influence of slant range error on height measurement. In addition, simulation validation of the slant-range-induced InSAR error model was performed on the basis of SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
Regulation of error-prone translesion synthesis by Spartan/C1orf124
Kim, Myoung Shin; Machida, Yuka; Vashisht, Ajay A.; Wohlschlegel, James A.; Pang, Yuan-Ping; Machida, Yuichi J.
2013-01-01
Translesion synthesis (TLS) employs low fidelity polymerases to replicate past damaged DNA in a potentially error-prone process. Regulatory mechanisms that prevent TLS-associated mutagenesis are unknown; however, our recent studies suggest that the PCNA-binding protein Spartan plays a role in suppression of damage-induced mutagenesis. Here, we show that Spartan negatively regulates error-prone TLS that is dependent on POLD3, the accessory subunit of the replicative DNA polymerase Pol δ. We demonstrate that the putative zinc metalloprotease domain SprT in Spartan directly interacts with POLD3 and contributes to suppression of damage-induced mutagenesis. Depletion of Spartan induces complex formation of POLD3 with Rev1 and the error-prone TLS polymerase Pol ζ, and elevates mutagenesis that relies on POLD3, Rev1 and Pol ζ. These results suggest that Spartan negatively regulates POLD3 function in Rev1/Pol ζ-dependent TLS, revealing a previously unrecognized regulatory step in error-prone TLS. PMID:23254330
Li, Beiwen; Liu, Ziping; Zhang, Song
2016-10-03
We propose a hybrid computational framework to reduce motion-induced measurement error by combining the Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 is to extract continuous relative phase maps for each isolated object with single-shot FTP method and spatial phase unwrapping; Step 2 is to obtain an absolute phase map of the entire scene using PSP method, albeit motion-induced errors exist on the extracted absolute phase map; and Step 3 is to shift the continuous relative phase maps from Step 1 to generate final absolute phase maps for each isolated object by referring to the absolute phase map with error from Step 2. Experiments demonstrate the success of the proposed computational framework for measuring multiple isolated rapidly moving objects.
IMRT QA: Selecting gamma criteria based on error detection sensitivity.
Steers, Jennifer M; Fraass, Benedick A
2016-04-01
The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. 
This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
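The %Diff/DTA gamma comparison discussed in this abstract has a standard mathematical form. The following is a minimal 1D sketch of a global gamma passing-rate calculation (not the study's actual software); the function name, the brute-force minimum search, and the flat test profiles are illustrative assumptions, and clinical tools operate on 2D/3D dose grids with interpolation.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, x, dose_crit=0.03, dta_mm=3.0, threshold=0.10):
    """Simplified 1D global gamma (e.g., 3%/3 mm, 10% dose threshold).

    For each reference point above the low-dose threshold, find the minimum
    combined dose-difference / distance-to-agreement metric over all
    evaluated points; a point passes when that minimum is <= 1.
    """
    dmax = ref_dose.max()
    gammas = []
    for xi, di in zip(x, ref_dose):
        if di < threshold * dmax:
            continue  # points below the dose threshold are excluded
        dd = (eval_dose - di) / (dose_crit * dmax)  # dose diff in units of 3% of global max
        dx = (x - xi) / dta_mm                      # distance in units of 3 mm
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)  # % of points passing
```

On a flat profile a uniform 5% dose error fails a 3%/3 mm comparison at every point, whereas on steep gradients the DTA term can absorb sizable errors, which is one mechanism behind the insensitivity the authors quantify.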
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, Y., E-mail: watanabe@aees.kyushu-u.ac.jp; Abe, S.
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high-energy component from 10 MeV up to several hundred MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander
2015-04-01
Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, T; Kumaraswamy, L
Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software’s ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted) and (3) gantry angle shift errors (a uniform 3-degree shift). 2D and 3D gamma evaluations were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation appeared to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans were high, the DVHs showed significant differences between the original plan and the error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, which a conventional gamma-based pre-treatment QA might not necessarily detect.
Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs
NASA Astrophysics Data System (ADS)
Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.
1982-12-01
Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.
Error-Eliciting Problems: Fostering Understanding and Thinking
ERIC Educational Resources Information Center
Lim, Kien H.
2014-01-01
Student errors are springboards for analyzing, reasoning, and justifying. The mathematics education community recognizes the value of student errors, noting that "mistakes are seen not as dead ends but rather as potential avenues for learning." To induce specific errors and help students learn, choose tasks that might produce mistakes.…
Effects of Reynolds number on orifice induced pressure error
NASA Technical Reports Server (NTRS)
Plentovich, E. B.; Gloss, B. B.
1982-01-01
Data previously reported for orifice induced pressure errors are extended to the case of higher Reynolds number flows, and a remedy is presented in the form of a porous metal plug for the orifice. Test orifices with apertures 0.330, 0.660, and 1.321 cm in diam. were fabricated on a flat plate for trials in the NASA Langley wind tunnel at Mach numbers 0.40-0.72. A boundary layer survey rake was also mounted on the flat plate to allow measurement of the total boundary layer pressures at the orifices. At the high Reynolds number flows studied, the orifice induced pressure error was found to be a function of the ratio of the orifice diameter to the boundary layer thickness. The error was effectively eliminated by the insertion of a porous metal disc set flush with the orifice outside surface.
Adaptation to sensory-motor reflex perturbations is blind to the source of errors.
Hudson, Todd E; Landy, Michael S
2012-01-06
In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.
Development of an errorable car-following driver model
NASA Astrophysics Data System (ADS)
Yang, H.-H.; Peng, H.
2010-06-01
An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates human driver's functions and can generate both nominal (error-free), as well as devious (with error) behaviours. This model was developed for evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on the statistical analysis of the driving data. Finally, time delay of human drivers was estimated through a recursive least-square identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar but somewhat higher than that reported in traffic statistics.
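The three error-inducing behaviours described above (perceptual limitation, distraction, and reaction delay) can be illustrated with a toy sketch, assuming a simple GM-style stimulus-response law; all parameter names and values here are hypothetical and not taken from the paper's identified model.

```python
import random

def simulate_follower(lead_speed, n_steps=600, dt=0.1, seed=0, k=0.5,
                      delay_steps=8, jnd=0.1, p_distract=0.005,
                      distract_steps=20, init_gap=20.0, init_speed=20.0):
    """Toy errorable car-following model: acceleration is proportional to
    the perceived relative speed, but perception is (1) delayed by a
    reaction time, (2) dead-banded below a just-noticeable difference,
    and (3) suspended during random distraction episodes."""
    rng = random.Random(seed)
    v, gap = init_speed, init_gap
    buffer = [0.0] * delay_steps  # delayed relative-speed observations
    distracted = 0
    min_gap = gap
    for t in range(n_steps):
        rel = lead_speed(t * dt) - v
        buffer.append(rel)
        perceived = buffer.pop(0)       # (3) reaction-time delay
        if abs(perceived) < jnd:
            perceived = 0.0             # (1) perceptual dead band
        if distracted == 0 and rng.random() < p_distract:
            distracted = distract_steps  # (2) distraction episode begins
        accel = 0.0 if distracted else k * perceived
        distracted = max(0, distracted - 1)
        v += accel * dt
        gap += (lead_speed(t * dt) - v) * dt
        min_gap = min(min_gap, gap)
    return min_gap  # a value <= 0 indicates a rear-end collision
```

With a steadily cruising lead vehicle the follower tracks it exactly, while a braking lead vehicle erodes the gap before the delayed, dead-banded response catches up, mirroring how the paper's error-inducing behaviours make crashes possible.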
Wave aberrations in rhesus monkeys with vision-induced ametropias
Ramamirtham, Ramkumar; Kee, Chea-su; Hung, Li-Fang; Qiao-Grider, Ying; Huang, Juan; Roorda, Austin; Smith, Earl L.
2007-01-01
The purpose of this study was to investigate the relationship between refractive errors and high-order aberrations in infant rhesus monkeys. Specifically, we compared the monochromatic wave aberrations measured with a Shack-Hartmann wavefront sensor between normal monkeys and monkeys with vision-induced refractive errors. Shortly after birth, both normal monkeys and treated monkeys reared with optically induced defocus or form deprivation showed a decrease in the magnitude of high-order aberrations with age. However, the decrease in aberrations was typically smaller in the treated animals. Thus, at the end of the lens-rearing period, higher than normal amounts of aberrations were observed in treated eyes, both hyperopic and myopic eyes and treated eyes that developed astigmatism, but not spherical ametropias. The total RMS wavefront error increased with the degree of spherical refractive error, but was not correlated with the degree of astigmatism. Both myopic and hyperopic treated eyes showed elevated amounts of coma and trefoil, and the degree of trefoil increased with the degree of spherical ametropia. Myopic eyes also exhibited a much higher prevalence of positive spherical aberration than normal or treated hyperopic eyes. Following the onset of unrestricted vision, the amount of high-order aberrations decreased in the treated monkeys that also recovered from the experimentally induced refractive errors. Our results demonstrate that high-order aberrations are influenced by visual experience in young primates and that the increase in high-order aberrations in our treated monkeys appears to be an optical byproduct of the vision-induced alterations in ocular growth that underlie changes in refractive error. The results from our study suggest that the higher amounts of wave aberrations observed in ametropic humans are likely to be a consequence, rather than a cause, of abnormal refractive development. PMID:17825347
Xue, Min; Pan, Shilong; Zhao, Yongjiu
2015-02-15
A novel optical vector network analyzer (OVNA) based on optical single-sideband (OSSB) modulation and balanced photodetection is proposed and experimentally demonstrated, which can eliminate the measurement error induced by the high-order sidebands in the OSSB signal. According to the analytical model of the conventional OSSB-based OVNA, if the optical carrier in the OSSB signal is fully suppressed, the measurement result is exactly the high-order-sideband-induced measurement error. By splitting the OSSB signal after the optical device-under-test (ODUT) into two paths, removing the optical carrier in one path, and then detecting the two signals in the two paths using a balanced photodetector (BPD), high-order-sideband-induced measurement error can be ideally eliminated. As a result, accurate responses of the ODUT can be achieved without complex post-signal processing. A proof-of-concept experiment is carried out. The magnitude and phase responses of a fiber Bragg grating (FBG) measured by the proposed OVNA with different modulation indices are superimposed, showing that the high-order-sideband-induced measurement error is effectively removed.
Moran, Lauren V; Stoeckel, Luke E; Wang, Kristina; Caine, Carolyn E; Villafuerte, Rosemond; Calderon, Vanessa; Baker, Justin T; Ongur, Dost; Janes, Amy C; Evins, A Eden; Pizzagalli, Diego A
2018-03-01
Nicotine improves attention and processing speed in individuals with schizophrenia. Few studies have investigated the effects of nicotine on cognitive control. Prior functional magnetic resonance imaging (fMRI) research demonstrates blunted activation of dorsal anterior cingulate cortex (dACC) and rostral anterior cingulate cortex (rACC) in response to error and decreased post-error slowing in schizophrenia. Participants with schizophrenia (n = 13) and healthy controls (n = 12) participated in a randomized, placebo-controlled, crossover study of the effects of transdermal nicotine on cognitive control. For each drug condition, participants underwent fMRI while performing the stop signal task where participants attempt to inhibit prepotent responses to "go (motor activation)" signals when an occasional "stop (motor inhibition)" signal appears. Error processing was evaluated by comparing "stop error" trials (failed response inhibition) to "go" trials. Resting-state fMRI data were collected prior to the task. Participants with schizophrenia had increased nicotine-induced activation of right caudate in response to errors compared to controls (DRUG × GROUP effect: p corrected < 0.05). Both groups had significant nicotine-induced activation of dACC and rACC in response to errors. Using right caudate activation to errors as a seed for resting-state functional connectivity analysis, relative to controls, participants with schizophrenia had significantly decreased connectivity between the right caudate and dACC/bilateral dorsolateral prefrontal cortices. In sum, we replicated prior findings of decreased post-error slowing in schizophrenia and found that nicotine was associated with more adaptive (i.e., increased) post-error reaction time (RT). This proof-of-concept pilot study suggests a role for nicotinic agents in targeting cognitive control deficits in schizophrenia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.
Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects onto SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects onto subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors, in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window. Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner. • The nicotine-induced secondary motoneuron axonal pathfinding errors can occur independent of any muscle fiber alterations. • Nicotine exposure primarily affects dorsal projecting secondary motoneuron axons. • Nicotine-induced primary motoneuron axon pathfinding errors can influence secondary motoneuron axon morphology.
Why do adult dogs (Canis familiaris) commit the A-not-B search error?
Sümegi, Zsófia; Kis, Anna; Miklósi, Ádám; Topál, József
2014-02-01
It has been recently reported that adult domestic dogs, like human infants, tend to commit perseverative search errors; that is, they select the previously rewarded empty location in the Piagetian A-not-B search task because of the experimenter's ostensive communicative cues. There is, however, an ongoing debate over whether these findings reveal that dogs can use human ostensive referential communication as a source of information, or whether the phenomenon can be accounted for by "simpler" explanations like insufficient attention and learning based on local enhancement. In 2 experiments the authors systematically manipulated the type of human cueing (communicative or noncommunicative) adjacent to the A hiding place during both the A and B trials. Results highlight 3 important aspects of the dogs' A-not-B error: (a) search errors are influenced to a certain extent by dogs' motivation to retrieve the toy object; (b) human communicative and noncommunicative signals have different error-inducing effects; and (c) communicative signals presented at the A hiding place during the B trials but not during the A trials play a crucial role in inducing the A-not-B error, and it can be induced even without demonstrating repeated hiding events at location A. These findings further confirm the notion that the perseverative search error, at least partially, reflects a "ready-to-obey" attitude in the dog rather than insufficient attention and/or working memory.
Errors induced by catalytic effects in premixed flame temperature measurements
NASA Astrophysics Data System (ADS)
Pita, G. P. A.; Nina, M. N. R.
The evaluation of instantaneous temperature in a premixed flame using fine-wire Pt/Pt-(13 pct)Rh thermocouples was found to be subject to significant errors due to catalytic effects. An experimental study was undertaken to assess the influence of local fuel/air ratio, thermocouple wire diameter, and gas velocity on the thermocouple reading errors induced by the catalytic surface reactions. Measurements made with both coated and uncoated thermocouples showed that the catalytic effect imposes severe limitations on the accuracy of mean and fluctuating gas temperature in the radical-rich flame zone.
Magnetic-field sensing with quantum error detection under the effect of energy relaxation
NASA Astrophysics Data System (ADS)
Matsuzaki, Yuichiro; Benjamin, Simon
2017-03-01
A solid state spin is an attractive system with which to realize an ultrasensitive magnetic field sensor. A spin superposition state will acquire a phase induced by the target field, and we can estimate the field strength from this phase. Recent studies have aimed at improving sensitivity through the use of quantum error correction (QEC) to detect and correct any bit-flip errors that may occur during the sensing period. Here we investigate the performance of a two-qubit sensor employing QEC and under the effect of energy relaxation. Surprisingly, we find that the standard QEC technique to detect and recover from an error does not improve the sensitivity compared with the single-qubit sensors. This is a consequence of the fact that the energy relaxation induces both a phase-flip and a bit-flip noise where the former noise cannot be distinguished from the relative phase induced from the target fields. However, we have found that we can improve the sensitivity if we adopt postselection to discard the state when error is detected. Even when quantum error detection is moderately noisy, and allowing for the cost of the postselection technique, we find that this two-qubit system shows an advantage in sensing over a single qubit in the same conditions.
Modeling Inborn Errors of Hepatic Metabolism Using Induced Pluripotent Stem Cells.
Pournasr, Behshad; Duncan, Stephen A
2017-11-01
Inborn errors of hepatic metabolism arise from deficiencies, commonly in a single enzyme, that result from heritable mutations in the genome. Individually such diseases are rare, but collectively they are common. Advances in genome-wide association studies and DNA sequencing have helped researchers identify the underlying genetic basis of such diseases. Unfortunately, cellular and animal models that accurately recapitulate these inborn errors of hepatic metabolism in the laboratory have been lacking. Recently, investigators have exploited molecular techniques to generate induced pluripotent stem cells from patients' somatic cells. Induced pluripotent stem cells can differentiate into a wide variety of cell types, including hepatocytes, thereby offering an innovative approach to unravel the mechanisms underlying inborn errors of hepatic metabolism. Moreover, such cell models could potentially provide a platform for the discovery of therapeutics. In this mini-review, we present a brief overview of the state-of-the-art in using pluripotent stem cells for such studies. © 2017 American Heart Association, Inc.
Dynamically corrected gates for singlet-triplet spin qubits with control-dependent errors
NASA Astrophysics Data System (ADS)
Jacobson, N. Tobias; Witzel, Wayne M.; Nielsen, Erik; Carroll, Malcolm S.
2013-03-01
Magnetic field inhomogeneity due to random polarization of quasi-static local magnetic impurities is a major source of environmentally induced error for singlet-triplet double quantum dot (DQD) spin qubits. Moreover, for singlet-triplet qubits this error may depend on the applied controls. This effect is significant when a static magnetic field gradient is applied to enable full qubit control. Through a configuration interaction analysis, we observe that the dependence of the field inhomogeneity-induced error on the DQD bias voltage can vary systematically as a function of the controls for certain experimentally relevant operating regimes. To account for this effect, we have developed a straightforward prescription for adapting dynamically corrected gate sequences that assume control-independent errors into sequences that compensate for systematic control-dependent errors. We show that accounting for such errors may lead to a substantial increase in gate fidelities. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Gating of neural error signals during motor learning
Kimpo, Rhea R; Rinaldi, Jacob M; Kim, Christina K; Payne, Hannah L; Raymond, Jennifer L
2014-01-01
Cerebellar climbing fiber activity encodes performance errors during many motor learning tasks, but the role of these error signals in learning has been controversial. We compared two motor learning paradigms that elicited equally robust putative error signals in the same climbing fibers: learned increases and decreases in the gain of the vestibulo-ocular reflex (VOR). During VOR-increase training, climbing fiber activity on one trial predicted changes in cerebellar output on the next trial, and optogenetic activation of climbing fibers to mimic their encoding of performance errors was sufficient to implant a motor memory. In contrast, during VOR-decrease training, there was no trial-by-trial correlation between climbing fiber activity and changes in cerebellar output, and climbing fiber activation did not induce VOR-decrease learning. Our data suggest that the ability of climbing fibers to induce plasticity can be dynamically gated in vivo, even under conditions where climbing fibers are robustly activated by performance errors. DOI: http://dx.doi.org/10.7554/eLife.02076.001 PMID:24755290
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
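The ABFT technique for matrix multiplication mentioned above is classically realized with row/column checksums (in the style of Huang and Abraham); the following is a minimal pure-Python sketch of that idea, not the flight code described in the abstract:

```python
def matmul(A, B):
    # Plain triple-loop product of an n x m and an m x p matrix.
    m, p = len(B), len(B[0])
    return [[sum(row[k] * B[k][j] for k in range(m)) for j in range(p)] for row in A]

def with_col_checksums(A):
    # Append a row holding the sum of each column of A.
    return [list(row) for row in A] + [[sum(col) for col in zip(*A)]]

def with_row_checksums(B):
    # Append to each row the sum of that row.
    return [list(row) + [sum(row)] for row in B]

def detect_errors(C):
    # C is the (n+1) x (p+1) product of the checksum-augmented factors.
    # A single corrupted element fails exactly one column check and one
    # row check, which locates it and allows correction.
    n, p = len(C) - 1, len(C[0]) - 1
    bad = [('col', j) for j in range(p) if sum(C[i][j] for i in range(n)) != C[n][j]]
    bad += [('row', i) for i in range(n) if sum(C[i][j] for j in range(p)) != C[i][p]]
    return bad
```

Because matrix multiplication is linear, the checksum relations survive the product, so the check runs on the output alone with no recomputation; that low overhead is what makes ABFT attractive for onboard use.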
Alpha particle-induced soft errors in microelectronic devices. I
NASA Astrophysics Data System (ADS)
Redman, D. J.; Sega, R. M.; Joseph, R.
1980-03-01
The article provides a tutorial review and trend assessment of the problem of alpha particle-induced soft errors in VLSI memories. Attention is given to an analysis of the design evolution of modern ICs, and the characteristics of alpha particles and their origin in IC packaging are reviewed. Finally, the process of an alpha particle penetrating an IC is examined.
DiGirolamo, Gregory J; Smelson, David; Guevremont, Nathan
2015-08-01
Cue-induced craving is a clinically important aspect of cocaine addiction influencing ongoing use and sobriety. However, little is known about the relationship between cue-induced craving and cognitive control toward cocaine cues. While studies suggest that cocaine users have an attentional bias toward cocaine cues, the present study extends this research by testing if cocaine use disorder patients (CDPs) can control their eye movements toward cocaine cues and whether their response varied by cue-induced craving intensity. Thirty CDPs underwent a cue exposure procedure to dichotomize them into high and low craving groups, followed by a modified antisaccade task in which subjects were asked to control their eye movements toward either a cocaine or neutral drug cue by looking away from the suddenly presented cue. The relationship between breakdowns in cognitive control (as measured by eye errors) and cue-induced craving (changes in self-reported craving following cocaine cue exposure) was investigated. CDPs overall made significantly more errors toward cocaine cues compared to neutral cues, with higher cravers making significantly more errors than lower cravers even though they did not differ significantly in addiction severity, impulsivity, anxiety, or depression levels. Cue-induced craving was the only specific and significant predictor of subsequent errors toward cocaine cues. Cue-induced craving directly and specifically relates to breakdowns of cognitive control toward cocaine cues in CDPs, with higher cravers being more susceptible. Hence, it may be useful to identify high cravers and to target treatment toward curbing craving, decreasing the likelihood of a subsequent breakdown in control. Copyright © 2015 Elsevier Ltd. All rights reserved.
Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are increasingly common as IC feature sizes continue to shrink. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot adequately balance critical-path delay, area and power penalties. This paper proposes a novel architecture, the self-recovery dual pipeline (SRDP), to provide effective, low-cost soft-error detection and recovery for general RISC structures. We focus on three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Evaluation of a prototype implementation shows that SRDP detects up to 100% of particle-induced soft errors and recovers from nearly 95% of them; the remaining 5% enter a specific trap.
Avoidance of APOBEC3B-induced mutation by error-free lesion bypass
Hoopes, James I.; Hughes, Amber L.; Hobson, Lauren A.; Cortez, Luis M.; Brown, Alexander J.
2017-01-01
APOBEC cytidine deaminases mutate cancer genomes by converting cytidines into uridines within ssDNA during replication. Although uracil DNA glycosylases limit APOBEC-induced mutation, it is unknown if subsequent base excision repair (BER) steps function on replication-associated ssDNA. Hence, we measured APOBEC3B-induced CAN1 mutation frequencies in yeast deficient in BER endonucleases or DNA damage tolerance proteins. Strains lacking Apn1, Apn2, Ntg1, Ntg2 or Rev3 displayed wild-type frequencies of APOBEC3B-induced canavanine resistance (CanR). However, strains without error-free lesion bypass proteins Ubc13, Mms2 and Mph1 displayed respective 4.9-, 2.8- and 7.8-fold higher frequencies of APOBEC3B-induced CanR. These results indicate that mutations resulting from APOBEC activity are avoided by deoxyuridine conversion to abasic sites ahead of nascent lagging strand DNA synthesis and subsequent bypass by error-free template switching. We found this mechanism also functions during telomere re-synthesis, but with a diminished requirement for Ubc13. Interestingly, reduction of G to C substitutions in Ubc13-deficient strains uncovered a previously unknown role of Ubc13 in controlling the activity of the translesion synthesis polymerase, Rev1. Our results highlight a novel mechanism for error-free bypass of deoxyuridines generated within ssDNA and suggest that the APOBEC mutation signature observed in cancer genomes may under-represent the genomic damage these enzymes induce. PMID:28334887
Errors in Aviation Decision Making: Bad Decisions or Bad Luck?
NASA Technical Reports Server (NTRS)
Orasanu, Judith; Martin, Lynne; Davison, Jeannie; Null, Cynthia H. (Technical Monitor)
1998-01-01
Despite efforts to design systems and procedures to support 'correct' and safe operations in aviation, errors in human judgment still occur and contribute to accidents. In this paper we examine how an NDM (naturalistic decision making) approach might help us to understand the role of decision processes in negative outcomes. Our strategy was to examine a collection of identified decision errors through the lens of an aviation decision process model and to search for common patterns. The second, and more difficult, task was to determine what might account for those patterns. The corpus we analyzed consisted of tactical decision errors identified by the NTSB (National Transportation Safety Board) from a set of accidents in which crew behavior contributed to the accident. A common pattern emerged: about three quarters of the errors represented plan-continuation errors, that is, a decision to continue with the original plan despite cues that suggested changing the course of action. Features in the context that might contribute to these errors were identified: (a) ambiguous dynamic conditions and (b) organizational and socially-induced goal conflicts. We hypothesize that 'errors' are mediated by underestimation of risk and failure to analyze the potential consequences of continuing with the initial plan. Stressors may further contribute to these effects. Suggestions for improving performance in these error-inducing contexts are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt
2004-08-01
It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.
Observer detection of image degradation caused by irreversible data compression processes
NASA Astrophysics Data System (ADS)
Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David
1991-05-01
Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was designed to test whether induced errors were detected. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions for which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm for compression ratios of 10:1 (1.2 bits/pixel) or higher.
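The trade-off the study measures, reconstruction error growing with compression ratio, can be illustrated with a toy 1-D version of DCT coefficient truncation. This sketch is illustrative only; it is not the full-frame 2-D algorithm used in the study:

```python
import math

def dct(x):
    # Unnormalized DCT-II.
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct(X):
    # Exact inverse of the DCT-II above (a scaled DCT-III).
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                                       for k in range(1, N))
            for n in range(N)]

def compress(x, ratio):
    # Crude "compression": keep only the lowest 1/ratio fraction of the
    # DCT coefficients and zero the rest, then reconstruct.
    X = dct(x)
    keep = max(1, len(X) // ratio)
    return idct([c if k < keep else 0.0 for k, c in enumerate(X)])

def rms_error(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)) / len(a))
```

For a smooth signal, the RMS reconstruction error rises sharply as the kept fraction shrinks, which is the qualitative behavior the observer study probes at 6:1 versus 60:1.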
McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.
2010-01-01
The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
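The size of the bias the authors report can be reproduced with a toy Monte Carlo simulation (our illustration, not the authors' estimation framework; the 5% per-survey false positive rate is exaggerated for visibility relative to the ~1% of detections quoted):

```python
import random

def naive_occupancy(n_sites=20000, n_surveys=5, psi=0.3,
                    p_detect=0.5, p_false=0.05, seed=0):
    """Fraction of sites with at least one detection: the naive
    occupancy estimate when false positives are ignored."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sites):
        occupied = rng.random() < psi          # true occupancy state
        p = p_detect if occupied else p_false  # per-survey detection prob.
        if any(rng.random() < p for _ in range(n_surveys)):
            detected += 1
    return detected / n_sites
```

With these assumed parameters the naive estimate lands near psi*(1-(1-p_detect)^K) + (1-psi)*(1-(1-p_false)^K), roughly 0.45 against a true occupancy of 0.3, showing how quickly false positives inflate apparent occupancy.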
Borycki, E M; Kushniruk, A W; Bellwood, P; Brender, J
2012-01-01
The objective of this paper is to examine the extent, range and scope of frameworks, models and theories dealing with technology-induced error that have arisen in the biomedical and life sciences literature as indexed by Medline®. To better understand the state of work in the area of technology-induced error involving frameworks, models and theories, the authors conducted a search of Medline® using selected key words identified from seminal articles in this research area. Articles were reviewed and those pertaining to frameworks, models or theories dealing with technology-induced error were further reviewed by two researchers. All articles from Medline® from its inception to April of 2011 were searched using the above outlined strategy. 239 citations were returned. Each of the abstracts for the 239 citations was reviewed by two researchers. Eleven articles met the criteria based on abstract review. These 11 articles were downloaded for further in-depth review. The majority of the articles obtained describe frameworks and models with reference to theories developed in other literatures outside of healthcare. The papers were grouped into several areas. It was found that articles drew mainly from three literatures: 1) the human factors literature (including human-computer interaction and cognition), 2) the organizational behavior/sociotechnical literature, and 3) the software engineering literature. A variety of frameworks and models were found in the biomedical and life sciences literatures. These frameworks and models drew upon and extended frameworks, models and theoretical perspectives that have emerged in other literatures. These frameworks and models are informing an emerging line of research in health and biomedical informatics involving technology-induced errors in healthcare.
Prediction Accuracy of Error Rates for MPTB Space Experiment
NASA Technical Reports Server (NTRS)
Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.
1998-01-01
This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAMs (UPD4216) and 1 Kb SRAMs (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.
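The prediction pipeline the abstract describes, folding a measured cross-section curve into an environment model, reduces to a flux-times-cross-section integral over energy. A generic sketch follows, with an assumed Weibull cross-section fit (illustrative parameters and function names of our choosing, not the MPTB devices' actual fits):

```python
import math

def weibull_sigma(E, sigma_sat, E0, W, s):
    # Common Weibull parameterization of SEU cross section (cm^2/device)
    # versus particle energy E (MeV): zero below threshold E0, rising to
    # the saturation value sigma_sat with width W and shape s.
    return 0.0 if E <= E0 else sigma_sat * (1.0 - math.exp(-(((E - E0) / W) ** s)))

def upset_rate(flux, sigma, energies):
    # Trapezoidal integral of differential flux (particles/cm^2/s/MeV)
    # times cross section (cm^2) over energy (MeV): upsets/device/second.
    rate = 0.0
    for e1, e2 in zip(energies, energies[1:]):
        rate += 0.5 * (flux(e1) * sigma(e1) + flux(e2) * sigma(e2)) * (e2 - e1)
    return rate
```

The abstract's point is that the uncertainty in the predicted rate comes from both factors of the integrand: the device cross section (measured on the ground) and the environment model (checked against the flux actually measured aboard MPTB).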
Borycki, Elizabeth; Kushniruk, Andre; Carvalho, Christopher
2013-01-01
Internationally, health information systems (HIS) safety has emerged as a significant concern for governments. Recently, research has emerged that has documented the ability of HIS to be implicated in the harm and death of patients. Researchers have attempted to develop methods that can be used to prevent or reduce technology-induced errors. Some researchers are developing methods that can be employed prior to systems release. These methods include the development of safety heuristics and clinical simulations. In this paper, we outline our methodology for developing safety heuristics specific to identifying the features or functions of a HIS user interface design that may lead to technology-induced errors. We follow this with a description of a methodological approach to validate these heuristics using clinical simulations. PMID:23606902
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Choi, G.; Iyer, R. K.
1990-01-01
A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.
Verhaart, René F; Fortunati, Valerio; Verduijn, Gerda M; van Walsum, Theo; Veenland, Jifke F; Paulides, Margarethus M
2014-04-01
Clinical trials have shown that hyperthermia, as adjuvant to radiotherapy and/or chemotherapy, improves treatment of patients with locally advanced or recurrent head and neck (H&N) carcinoma. Hyperthermia treatment planning (HTP) guided H&N hyperthermia is being investigated, which requires patient specific 3D patient models derived from Computed Tomography (CT)-images. To decide whether a recently developed automatic-segmentation algorithm can be introduced in the clinic, we compared the impact of manual- and automatic normal-tissue-segmentation variations on HTP quality. CT images of seven patients were segmented automatically and manually by four observers, to study inter-observer and intra-observer geometrical variation. To determine the impact of this variation on HTP quality, HTP was performed using the automatic and manual segmentation of each observer, for each patient. This impact was compared to other sources of patient model uncertainties, i.e. varying grid sizes and dielectric tissue properties. Despite geometrical variations, manually and automatically generated 3D patient models resulted in an equal, i.e. 1%, variation in HTP quality. This variation was minor with respect to the total of other sources of patient model uncertainties, i.e. 11.7%. Automatically generated 3D patient models can be introduced in the clinic for H&N HTP. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Intrinsic errors in transporting a single-spin qubit through a double quantum dot
NASA Astrophysics Data System (ADS)
Li, Xiao; Barnes, Edwin; Kestner, J. P.; Das Sarma, S.
2017-07-01
Coherent spatial transport or shuttling of a single electron spin through semiconductor nanostructures is an important ingredient in many spintronic and quantum computing applications. In this work we analyze the possible errors in solid-state quantum computation due to leakage in transporting a single-spin qubit through a semiconductor double quantum dot. In particular, we consider three possible sources of leakage errors associated with such transport: finite ramping times, spin-dependent tunneling rates between quantum dots induced by finite spin-orbit couplings, and the presence of multiple valley states. In each case we present quantitative estimates of the leakage errors, and discuss how they can be minimized. The emphasis of this work is on how to deal with the errors intrinsic to the ideal semiconductor structure, such as leakage due to spin-orbit couplings, rather than on errors due to defects or noise sources. In particular, we show that in order to minimize leakage errors induced by spin-dependent tunnelings, it is necessary to apply pulses to perform certain carefully designed spin rotations. We further develop a formalism that allows one to systematically derive constraints on the pulse shapes and present a few examples to highlight the advantage of such an approach.
Absence of Mutagenic Activity of Hycanthone in Serratia marcescens,
1986-05-29
repair system but is enhanced by the plasmid pKM101, which mediates the inducible error-prone repair system. Hycanthone, like proflavin, intercalates between the stacked bases... Roth (1974) have suggested that proflavin, which has a planar triple ring structure similar to hycanthone, interacts with DNA, which upon replication
A Unified Approach to Measurement Error and Missing Data: Overview and Applications
ERIC Educational Resources Information Center
Blackwell, Matthew; Honaker, James; King, Gary
2017-01-01
Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model…
Results from a Sting Whip Correction Verification Test at the Langley 16-Foot Transonic Tunnel
NASA Technical Reports Server (NTRS)
Crawford, B. L.; Finley, T. D.
2002-01-01
In recent years, great strides have been made toward correcting the largest error in inertial Angle of Attack (AoA) measurements in wind tunnel models. This error source is commonly referred to as 'sting whip' and is caused by aerodynamically induced forces imparting dynamics on sting-mounted models. These aerodynamic forces cause the model to whip through an arc section in the pitch and/or yaw planes, thus generating a centrifugal acceleration and creating a bias error in the AoA measurement. It has been shown that, under certain conditions, this induced AoA error can be greater than one third of a degree. An error of this magnitude far exceeds the target AoA goal of 0.01 deg established at NASA Langley Research Center (LaRC) and elsewhere. New sting whip correction techniques being developed at LaRC are able to measure and reduce this sting whip error by an order of magnitude. With this increase of accuracy, the 0.01 deg AoA target is achievable under all but the most severe conditions.
Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing
Lefebvre, Germain; Blakemore, Sarah-Jayne
2017-01-01
Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not the prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test if prediction error valence influences learning. We carried out two experiments: in the factual learning experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the counterfactual learning experiment, participants learned from complete feedback information (i.e., the outcomes of both the chosen and unchosen option were displayed). In the factual learning experiment, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning, we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account, relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice. PMID:28800597
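The computational model described, with separate learning rates for positive and negative prediction errors, can be sketched as a simple Rescorla-Wagner/Q-learning variant. This is an illustrative reimplementation under assumed parameters, not the authors' fitted model:

```python
import random

def learned_value(alpha_pos, alpha_neg, p_reward=0.5, n_trials=4000, seed=1):
    """Track the value estimate of one option paying 1 with probability
    p_reward, else 0. Positive prediction errors are scaled by alpha_pos,
    negative ones by alpha_neg. Returns the mean estimate over the second
    half of trials (after convergence)."""
    rng = random.Random(seed)
    q, history = 0.0, []
    for _ in range(n_trials):
        reward = 1.0 if rng.random() < p_reward else 0.0
        delta = reward - q                                 # prediction error
        q += (alpha_pos if delta > 0 else alpha_neg) * delta
        history.append(q)
    return sum(history[n_trials // 2:]) / (n_trials // 2)
```

With alpha_pos > alpha_neg the estimate settles near alpha_pos*p / (alpha_pos*p + alpha_neg*(1-p)), e.g. about 0.75 for rates 0.3/0.1 at p = 0.5, rather than the true mean 0.5: the optimistic, valence-induced bias the factual-learning experiment replicates.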
Gordon, H R; Wang, M
1992-07-20
In the algorithm for the atmospheric correction of coastal zone color scanner (CZCS) imagery, it is assumed that the sea surface is flat. Simulations are carried out to assess the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct Sun glitter (either a large solar zenith angle or the sensor tilted away from the specular image of the Sun), the following conclusions appear justified: (1) the error induced by ignoring the surface roughness is less than or similar to 1 CZCS digital count for wind speeds up to approximately 17 m/s, and therefore can be ignored for this sensor; (2) the roughness-induced error is much more strongly dependent on the wind speed than on the wave shadowing, suggesting that surface effects can be adequately dealt with without precise knowledge of the shadowing; and (3) the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness, suggesting that in refining algorithms for future sensors more effort should be placed on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro
2011-01-01
During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649
The effect of the dynamic wet troposphere on VLBI measurements
NASA Technical Reports Server (NTRS)
Treuhaft, R. N.; Lanyi, G. E.
1986-01-01
Calculations using a statistical model of water vapor fluctuations yield the effect of the dynamic wet troposphere on Very Long Baseline Interferometry (VLBI) measurements. The statistical model arises from two primary assumptions: (1) the spatial structure of refractivity fluctuations can be closely approximated by elementary (Kolmogorov) turbulence theory, and (2) temporal fluctuations are caused by spatial patterns which are moved over a site by the wind. The consequences of these assumptions are outlined for the VLBI delay and delay rate observables. For example, wet troposphere induced rms delays for Deep Space Network (DSN) VLBI at 20-deg elevation are about 3 cm of delay per observation, which is smaller, on the average, than other known error sources in the current DSN VLBI data set. At 20-deg elevation for 200-s time intervals, water vapor induces approximately 1.5 x 10^-13 s/s in the Allan standard deviation of interferometric delay, which is a measure of the delay rate observable error. In contrast to the delay error, the delay rate measurement error is dominated by water vapor fluctuations. Water vapor induced VLBI parameter errors and correlations are calculated. For the DSN, baseline length parameter errors due to water vapor fluctuations are in the range of 3 to 5 cm. The above physical assumptions also lead to a method for including the water vapor fluctuations in the parameter estimation procedure, which is used to extract baseline and source information from the VLBI observables.
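The Allan standard deviation quoted above is a standard stability measure computed directly from delay (phase) samples; a minimal overlapping-Allan-deviation routine is sketched here for reference (our sketch, not the authors' code):

```python
import math

def allan_deviation(x, m, dt):
    """Overlapping Allan deviation at averaging time tau = m*dt, computed
    from time-delay samples x (in seconds) spaced dt seconds apart, via
    second differences of the phase record."""
    tau = m * dt
    diffs = [x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(len(x) - 2 * m)]
    return math.sqrt(sum(d * d for d in diffs) / (2.0 * tau * tau * len(diffs)))
```

A constant fractional frequency offset (a purely linear delay drift) has zero second differences and hence zero Allan deviation, which is why the statistic isolates fluctuations such as the water-vapor-induced 1.5 x 10^-13 s/s figure rather than steady clock rates.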
Lee, Norman; Ward, Jessica L; Vélez, Alejandro; Micheyl, Christophe; Bee, Mark A
2017-03-06
Noise is a ubiquitous source of errors in all forms of communication [1]. Noise-induced errors in speech communication, for example, make it difficult for humans to converse in noisy social settings, a challenge aptly named the "cocktail party problem" [2]. Many nonhuman animals also communicate acoustically in noisy social groups and thus face biologically analogous problems [3]. However, we know little about how the perceptual systems of receivers are evolutionarily adapted to avoid the costs of noise-induced errors in communication. In this study of Cope's gray treefrog (Hyla chrysoscelis; Hylidae), we investigated whether receivers exploit a potential statistical regularity present in noisy acoustic scenes to reduce errors in signal recognition and discrimination. We developed an anatomical/physiological model of the peripheral auditory system to show that temporal correlation in amplitude fluctuations across the frequency spectrum ("comodulation") [4-6] is a feature of the noise generated by large breeding choruses of sexually advertising males. In four psychophysical experiments, we investigated whether females exploit comodulation in background noise to mitigate noise-induced errors in evolutionarily critical mate-choice decisions. Subjects experienced fewer errors in recognizing conspecific calls and in selecting the calls of high-quality mates in the presence of simulated chorus noise that was comodulated. These data show unequivocally, and for the first time, that exploiting statistical regularities present in noisy acoustic scenes is an important biological strategy for solving cocktail-party-like problems in nonhuman animal communication. Copyright © 2017 Elsevier Ltd. All rights reserved.
Basic Studies on High Pressure Air Plasmas
2006-08-30
... which must be added a 1.5-month salary to A. Bugayev for assistance in laser and optic techniques. Part II, Technical report: Plasma-induced phase shift ... two-wavelength heterodyne interferometry applied to atmospheric-pressure air plasma. 11.1.A. Plasma-induced phase shift - electron density ... a driver, since the error on the frequency leads to an error on the phase shift. (c) Optical elements: mirrors. Protected mirrors must be used to stand ...
NASA Technical Reports Server (NTRS)
Belcastro, C. M.
1984-01-01
Advanced composite aircraft designs include fault-tolerant computer-based digital control systems with high reliability requirements for adverse as well as optimum operating environments. Since aircraft penetrate intense electromagnetic fields during thunderstorms, onboard computer systems may be subjected to field-induced transient voltages and currents, resulting in functional error modes which are collectively referred to as digital system upset. A methodology was developed for assessing the upset susceptibility of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied via tests in which analog transients modeling lightning-induced signals were randomly injected onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The application of Markov modeling to upset susceptibility estimation is discussed and a stochastic model is developed.
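The Markov-modeling idea mentioned above can be sketched as a small state-transition chain driven by successive transient exposures. The states and transition probabilities below are hypothetical placeholders, not values measured in the 8080 tests:

```python
import numpy as np

# States: 0 = nominal, 1 = upset (functional error), 2 = recovered (absorbing)
# Per-transient transition probabilities -- assumed for illustration only.
P = np.array([
    [0.95, 0.05, 0.00],   # nominal stays nominal or is upset
    [0.00, 0.70, 0.30],   # upset persists, or is detected and recovered
    [0.00, 0.00, 1.00],   # recovered is absorbing
])

state = np.array([1.0, 0.0, 0.0])      # start in the nominal state
for _ in range(10):                    # ten lightning-induced transients
    state = state @ P
print("P(upset after 10 transients) =", state[1])
```

Propagating the state vector through the chain yields the probability of residing in the upset state after a given number of field-induced transients.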
Brain signaling and behavioral responses induced by exposure to (56)Fe-particle radiation
NASA Technical Reports Server (NTRS)
Denisova, N. A.; Shukitt-Hale, B.; Rabin, B. M.; Joseph, J. A.
2002-01-01
Previous experiments have demonstrated that exposure to 56Fe-particle irradiation (1.5 Gy, 1 GeV) produced aging-like accelerations in neuronal and behavioral deficits. Astronauts on long-term space flights will be exposed to similar heavy-particle radiations that might have similar deleterious effects on neuronal signaling and cognitive behavior. Therefore, the present study evaluated whether radiation-induced spatial learning and memory behavioral deficits are associated with region-specific brain signaling deficits by measuring signaling molecules previously found to be essential for behavior [pre-synaptic vesicle proteins, synaptobrevin and synaptophysin, and protein kinases, calcium-dependent PRKCs (also known as PKCs) and PRKA (PRKA RIIbeta)]. The results demonstrated a significant radiation-induced increase in reference memory errors. The increases in reference memory errors were significantly negatively correlated with striatal synaptobrevin and frontal cortical synaptophysin expression. Both synaptophysin and synaptobrevin are synaptic vesicle proteins that are important in cognition. Striatal PRKA, a memory signaling molecule, was also significantly negatively correlated with reference memory errors. Overall, our findings suggest that radiation-induced pre-synaptic facilitation may contribute to some previously reported radiation-induced decrease in striatal dopamine release and for the disruption of the central dopaminergic system integrity and dopamine-mediated behavior.
Li, Wenxun; Matin, Leonard
2005-03-01
Measurements were made of the accuracy of open-loop manual pointing and height-matching to a visual target whose elevation was perceptually mislocalized. Accuracy increased linearly with distance of the hand from the body, approaching complete accuracy at full extension; with the hand close to the body (within the midfrontal plane), the manual errors equaled the magnitude of the perceptual mislocalization. The visual inducing stimulus responsible for the perceptual errors was a single pitched-from-vertical line that was long (50 degrees), eccentrically-located (25 degrees horizontal), and viewed in otherwise total darkness. The line induced perceptual errors in the elevation of a small, circular visual target set to appear at eye level (VPEL), a setting that changed linearly with the line's visual pitch as previously reported (pitch: -30 degrees top-backward to 30 degrees top-forward); the elevation errors measured by VPEL settings varied systematically with pitch through an 18-degree range. In a fourth experiment, the visual inducing stimulus responsible for the perceptual errors was shown to induce separately-measured, distance-dependent errors in the manual setting of the arm to feel horizontal. The distance-dependence of the visually-induced changes in felt arm position accounts quantitatively for the distance-dependence of the manual errors in pointing/reaching and height-matching to the visual target: the near equality of the changes in felt horizontal and changes in pointing/reaching with the finger at the end of the fully extended arm is responsible for the manual accuracy of the fully-extended point; with the finger in the midfrontal plane, their large difference is responsible for the inaccuracies of the midfrontal-plane point.
The results are inconsistent with the widely-held but controversial theory that visual spatial information employed for perception and action are dissociated and different with no illusory visual influence on action. A different two-system theory, the Proximal/Distal model, employing the same signals from vision and from the body-referenced mechanism with different weights for different hand-to-body distances, accounts for both the perceptual and the manual results in the present experiments.
Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.
1995-01-01
This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows that significant bias error in the model attitude measurement can occur and that it is vibration mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.
Decay of motor memories in the absence of error
Vaswani, Pavan A.; Shadmehr, Reza
2013-01-01
When motor commands are accompanied by an unexpected outcome, the resulting error induces changes in subsequent commands. However, when errors are artificially eliminated, changes in motor commands are not sustained, but show decay. Why does the adaptation-induced change in motor output decay in the absence of error? A prominent idea is that decay reflects the stability of the memory. We show results that challenge this idea and instead suggest that motor output decays because the brain actively disengages a component of the memory. Humans adapted their reaching movements to a perturbation and were then introduced to a long period of trials in which errors were absent (error-clamp). We found that, in some subjects, motor output did not decay at the onset of the error-clamp block, but a few trials later. We manipulated the kinematics of movements in the error-clamp block and found that as movements became more similar to subjects’ natural movements in the perturbation block, the lag to decay onset became longer and eventually reached hundreds of trials. Furthermore, when there was decay in the motor output, the endpoint of decay was not zero, but a fraction of the motor memory that was last acquired. Therefore, adaptation to a perturbation installed two distinct kinds of memories: one that was disengaged when the brain detected a change in the task, and one that persisted despite it. Motor memories showed little decay in the absence of error if the brain was prevented from detecting a change in task conditions. PMID:23637163
The Investigation of Pointing Behaviors in Web Browsing
2016-09-26
... whether or not fast movements have a different error model from slow movements, and study the impact induced by the open-loop nature of fast movements. (3) The PI will compare Fitts' law results for natural browsing using two different pointing devices: physical mouse and laptop touchpad, in order to ...
Patterned wafer geometry grouping for improved overlay control
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Woo, Jaeson; Park, Junbeom; Song, Changrock; Anis, Fatima; Vukkadala, Pradeep; Jeon, Sanghuck; Choi, DongSub; Huang, Kevin; Heo, Hoyoung; Smith, Mark D.; Robinson, John C.
2017-03-01
Process-induced overlay errors from outside the litho cell have become a significant contributor to the overlay error budget including non-uniform wafer stress. Previous studies have shown the correlation between process-induced stress and overlay and the opportunity for improvement in process control, including the use of patterned wafer geometry (PWG) metrology to reduce stress-induced overlay signatures. Key challenges of volume semiconductor manufacturing are how to improve not only the magnitude of these signatures, but also the wafer to wafer variability. This work involves a novel technique of using PWG metrology to provide improved litho-control by wafer-level grouping based on incoming process induced overlay, relevant for both 3D NAND and DRAM. Examples shown in this study are from 19 nm DRAM manufacturing.
Effect of different head-neck-jaw postures on cervicocephalic kinesthetic sense.
Zafar, H; Alghadir, A H; Iqbal, Z A
2017-12-01
To investigate the effect of different induced head-neck-jaw postures on head-neck relocation error among healthy subjects. 30 healthy adult male subjects participated in this study. Cervicocephalic kinesthetic sense was measured while standing, habitual sitting, habitual sitting with clenched jaw and habitual sitting with forward head posture during right rotation, left rotation, flexion and extension using kinesthetic sensibility test. Head-neck relocation error was least while standing, followed by habitual sitting, habitual sitting with forward head posture and habitual sitting with jaw clenched. However, there was no significant difference in error between different tested postures during all the movements. To the best of our knowledge, this is the first study to see the effect of different induced head-neck-jaw postures on head-neck position sense among healthy subjects. Assuming a posture for a short duration of time doesn't affect head-neck relocation error in normal healthy subjects.
Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry
NASA Astrophysics Data System (ADS)
Feng, Shijie; Zuo, Chao; Tao, Tianyang; Hu, Yan; Zhang, Minliang; Chen, Qian; Gu, Guohua
2018-04-01
Phase-shifting profilometry (PSP) is a widely used approach to high-accuracy three-dimensional shape measurements. However, when it comes to moving objects, phase errors induced by the movement often result in severe artifacts even though a high-speed camera is in use. From our observations, there are three kinds of motion artifacts: motion ripples, motion-induced phase unwrapping errors, and motion outliers. We present a novel motion-compensated PSP to remove the artifacts for dynamic measurements of rigid objects. The phase error of motion ripples is analyzed for the N-step phase-shifting algorithm and is compensated using the statistical nature of the fringes. The phase unwrapping errors are corrected exploiting adjacent reliable pixels, and the outliers are removed by comparing the original phase map with a smoothed phase map. Compared with the three-step PSP, our method can improve the accuracy by more than 95% for objects in motion.
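For reference, the N-step phase-shifting algorithm that the motion-ripple analysis builds on recovers the wrapped phase from N equally shifted fringe images. A minimal sketch on synthetic fringes (no motion compensation, which is the paper's contribution):

```python
import numpy as np

def psp_phase(images):
    """Wrapped phase from N fringe images with shifts delta_n = 2*pi*n/N.
    For I_n = A + B*cos(phi + delta_n), the least-squares solution is
    phi = atan2(-sum I_n sin(delta_n), sum I_n cos(delta_n))."""
    n_steps = len(images)
    deltas = 2 * np.pi * np.arange(n_steps) / n_steps
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(-num, den)

# Synthetic test: recover a known phase ramp from a 4-step sequence
x = np.linspace(0.0, 1.0, 256)
phi_true = 4 * np.pi * x
imgs = [128 + 100 * np.cos(phi_true + 2 * np.pi * n / 4) for n in range(4)]
phi = psp_phase(imgs)
```

Motion between exposures violates the constant-phi assumption behind this formula, which is exactly what produces the ripple-type phase errors described above.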
Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data
Zhao, Shanshan
2014-01-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
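The regression calibration idea underlying both proposed approaches can be illustrated in a deliberately simplified linear-outcome setting (the paper's versions approximate a Cox partial likelihood, and in practice the error variance is estimated from replicates; here it is assumed known):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(0.0, 1.0, n)             # "true" mediator
w = x + rng.normal(0.0, 0.7, n)         # observed mediator with error
y = 2.0 * x + rng.normal(0.0, 0.5, n)   # outcome (linear stand-in for Cox)

naive = np.polyfit(w, y, 1)[0]          # attenuated toward zero

# Regression calibration: replace w by E[x | w] = lambda * w, where
# lambda = var(x) / (var(x) + var(error)) is the reliability ratio.
lam = 1.0 / (1.0 + 0.7**2)
corrected = np.polyfit(lam * w, y, 1)[0]
print(round(naive, 2), round(corrected, 2))
```

The naive slope is shrunk by the reliability ratio, while the calibrated fit recovers the true coefficient, which is the same attenuation-correction logic applied to the induced hazard function in the paper.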
NASA Astrophysics Data System (ADS)
Mauder, M.; Huq, S.; De Roo, F.; Foken, T.; Manhart, M.; Schmid, H. P. E.
2017-12-01
The Campbell CSAT3 sonic anemometer is one of the most widely used instruments for eddy-covariance measurement. However, conflicting estimates for the probe-induced flow distortion error of this instrument have been reported recently, and those error estimates range between 3% and 14% for the measurement of vertical velocity fluctuations. This large discrepancy between the different studies can probably be attributed to the different experimental approaches applied. In order to overcome the limitations of both field intercomparison experiments and wind tunnel experiments, we propose a new approach that relies on virtual measurements in a large-eddy simulation (LES) environment. In our experimental set-up, we generate horizontal and vertical velocity fluctuations at frequencies that typically dominate the turbulence spectra of the surface layer. The probe-induced flow distortion error of a CSAT3 is then quantified by this numerical wind tunnel approach while the statistics of the prescribed inflow signal are taken as reference or etalon. The resulting relative error is found to range from 3% to 7% and from 1% to 3% for the standard deviation of the vertical and the horizontal velocity component, respectively, depending on the orientation of the CSAT3 in the flow field. We further demonstrate that these errors are independent of the frequency of fluctuations at the inflow of the simulation. The analytical corrections proposed by Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol, 155, 371-395, 2015) are compared against our simulated results, and we find that they indeed reduce the error by up to three percentage points. However, these corrections fail to reproduce the azimuth-dependence of the error that we observe. Moreover, we investigate the general Reynolds number dependence of the flow distortion error by more detailed idealized simulations.
Salient Distractors Can Induce Saccade Adaptation
Khan, Afsheen; McFadden, Sally A.; Wallman, Josh
2014-01-01
When saccadic eye movements consistently fail to land on their intended target, saccade accuracy is maintained by gradually adapting the movement size of successive saccades. The proposed error signal for saccade adaptation has been based on the distance between where the eye lands and the visual target (retinal error). We studied whether the error signal could alternatively be based on the distance between the predicted and actual locus of attention after the saccade. Unlike conventional adaptation experiments that surreptitiously displace the target once a saccade is initiated towards it, we instead attempted to draw attention away from the target by briefly presenting salient distractor images on one side of the target after the saccade. To test whether less salient, more predictable distractors would induce less adaptation, we separately used fixed random noise distractors. We found that both types of visual attention distractors induced a small degree of downward saccade adaptation, with significantly more adaptation to the more salient distractors. As in conventional adaptation experiments, upward adaptation was less effective, and salient distractors did not significantly increase amplitudes. We conclude that the locus of attention after the saccade can act as an error signal for saccade adaptation. PMID:24876947
Chan, Tommy C Y; Cheng, George P M; Wang, Zheng; Tham, Clement C Y; Woo, Victor C P; Jhanji, Vishal
2015-08-01
To evaluate the outcomes of femtosecond-assisted arcuate keratotomy combined with cataract surgery in eyes with low to moderate corneal astigmatism. Retrospective, interventional case series. This study included patients who underwent combined femtosecond-assisted phacoemulsification and arcuate keratotomy between March 2013 and August 2013. Keratometric astigmatism was evaluated before and 2 months after the surgery. Vector analysis of the astigmatic changes was performed using the Alpins method. Overall, 54 eyes of 54 patients (18 male and 36 female; mean age, 68.8 ± 11.4 years) were included. The mean preoperative (target-induced astigmatism) and postoperative astigmatism was 1.33 ± 0.57 diopters (D) and 0.87 ± 0.56 D, respectively (P < .001). The magnitude of error (difference between surgically induced and target-induced astigmatism) (-0.13 ± 0.68 D), as well as the correction index (ratio of surgically induced and target-induced astigmatism) (0.86 ± 0.52), demonstrated slight undercorrection. The angle of error was very close to 0, indicating no significant systematic error of misaligned treatment. However, the absolute angle of error showed a less favorable range (17.5 ± 19.2 degrees), suggesting variable factors such as healing or alignment at an individual level. There were no intraoperative or postoperative complications. Combined phacoemulsification with arcuate keratotomy using femtosecond laser appears to be a relatively easy and safe means for management of low to moderate corneal astigmatism in cataract surgery candidates. Misalignment at an individual level can reduce its effectiveness. This issue remains to be elucidated in future studies. Copyright © 2015 Elsevier Inc. All rights reserved.
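The Alpins indices reported above (magnitude of error, correction index, angle of error) follow from treating each astigmatism as a vector in double-angle space. A simplified sketch with illustrative inputs, not the study's per-eye data:

```python
import numpy as np

def daxis(mag, axis_deg):
    """Astigmatism (magnitude, axis) as a double-angle vector."""
    a = np.radians(2 * axis_deg)
    return np.array([mag * np.cos(a), mag * np.sin(a)])

def alpins(tia_mag, tia_axis, sia_mag, sia_axis):
    """Alpins-style indices from target-induced (TIA) and surgically
    induced (SIA) astigmatism, each given as magnitude (D) and axis (deg)."""
    tia, sia = daxis(tia_mag, tia_axis), daxis(sia_mag, sia_axis)
    dv = tia - sia                                # difference vector
    me = sia_mag - tia_mag                        # magnitude of error
    ci = sia_mag / tia_mag                        # correction index
    ang = np.degrees(np.arctan2(sia[1], sia[0])
                     - np.arctan2(tia[1], tia[0])) / 2
    ang = (ang + 45) % 90 - 45                    # angle of error in (-45, 45]
    return {"DV": np.hypot(*dv), "ME": me, "CI": ci, "AE": ang}

print(alpins(1.33, 90, 0.87, 95))   # illustrative magnitudes and axes
```

With these conventions, ME < 0 and CI < 1 indicate undercorrection, and a non-zero AE indicates treatment applied off-axis, matching the interpretation given in the abstract.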
2010-08-31
Figure 5.9: Run 10, Schlieren image with only the laser-induced air-breakdown glow visible (M=8.77, T∞=68.7 K, P∞=0.15 kPa). Run #13: laser-induced blast wave interaction with oblique shock (M=5.95, T∞=263.7 K, P∞=5.62 kPa, Ep=196±20 J). ... the air-breakdown geometry (M=5.95, T∞=262.3 K, P∞=5.16 kPa, Ep=176±18 J). Figure 5.13: Run #16, laser-induced blast ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
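The Virtex-5QV's built-in frame ECC is device-specific, but the single-bit-correct scrubbing idea behind such software can be sketched with a generic Hamming code. This is a simplified single-error-correcting illustration, not the actual FPGA frame format:

```python
def hamming_syndrome(codeword):
    """XOR of the 1-based positions of all set bits: zero for a valid
    word, otherwise the position of a single flipped bit."""
    s = 0
    for pos, bit in enumerate(codeword, start=1):
        if bit:
            s ^= pos
    return s

def hamming_encode(data_bits):
    """Hamming(15,11): data in non-power-of-two positions, parity bits
    at positions 1, 2, 4, 8 chosen so the syndrome is zero."""
    code = [0] * 15
    data_pos = [i for i in range(1, 16) if i & (i - 1)]
    for p, b in zip(data_pos, data_bits):
        code[p - 1] = b
    s = hamming_syndrome(code)
    for k in (1, 2, 4, 8):
        if s & k:
            code[k - 1] ^= 1
    return code

def scrub(codeword):
    """One scrubber pass: re-flip the bit named by a non-zero syndrome."""
    s = hamming_syndrome(codeword)
    if s:
        codeword[s - 1] ^= 1
    return codeword

frame = hamming_encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
frame[6] ^= 1                    # radiation-induced single-bit upset
scrub(frame)
print(hamming_syndrome(frame))   # 0: frame is consistent again
```

Multi-bit errors fall outside what this single-error scheme can correct, which is why the software reloads corrupted memory frames in that case.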
Error compensation for thermally induced errors on a machine tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krulewich, D.A.
1996-11-08
Heat flow from internal and external sources and the environment create machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is how to locate the temperature sensors and to determine the number of required temperature sensors. This research develops a method to determine the number and location of temperature measurements.
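A linear deflection model of the kind described can be fit by ordinary least squares over calibration runs; near-zero coefficients then flag sensors that contribute little, which bears on the sensor-count and placement question. A sketch with synthetic data (the gains and noise level are invented):

```python
import numpy as np

# Hypothetical calibration data: 6 temperature sensors, 200 observations
rng = np.random.default_rng(1)
T = rng.normal(size=(200, 6))            # centered sensor readings (deg C)
true_gains = np.array([4.0, 0.0, -2.5, 0.0, 1.0, 0.0])   # um per deg C
deflection = T @ true_gains + rng.normal(0.0, 0.1, 200)  # measured error (um)

# Fit the linear compensation model: deflection ~ T @ c
c, *_ = np.linalg.lstsq(T, deflection, rcond=None)

# Coefficients near zero suggest sensors that could be dropped entirely,
# reducing the number of required temperature measurements.
print(np.round(c, 2))
```

The fitted coefficients can then be applied in real time as a compensation table: predicted deflection is subtracted from the commanded tool position.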
Zhang, Jiamei; Wang, Yan; Chen, Xiaoqin
2016-04-01
To evaluate and compare refractive outcomes of moderate- and high-astigmatism correction after wavefront-guided laser in situ keratomileusis (LASIK) and small-incision lenticule extraction (SMILE). This comparative study enrolled a total of 64 eyes that had undergone SMILE (42 eyes) and wavefront-guided LASIK (22 eyes). Preoperative cylindrical diopters were ≤-2.25 D in moderate- and >-2.25 D in high-astigmatism subgroups. The refractive results were analyzed based on the Alpins vector method that included target-induced astigmatism, surgically induced astigmatism, difference vector, correction index, index of success, magnitude of error, angle of error, and flattening index. All subjects completed the 3-month follow-up. No significant differences were found in the target-induced astigmatism, surgically induced astigmatism, and difference vector between SMILE and wavefront-guided LASIK. However, the average angle of error value was -1.00 ± 3.16 after wavefront-guided LASIK and 1.22 ± 3.85 after SMILE with statistical significance (P < 0.05). The absolute angle of error value was statistically correlated with difference vector and index of success after both procedures. In the moderate-astigmatism group, correction index was 1.04 ± 0.15 after wavefront-guided LASIK and 0.88 ± 0.15 after SMILE (P < 0.05). However, in the high-astigmatism group, correction index was 0.87 ± 0.13 after wavefront-guided LASIK and 0.88 ± 0.12 after SMILE (P = 0.889). Both procedures showed preferable outcomes in the correction of moderate and high astigmatism. However, high astigmatism was undercorrected after both procedures. Axial error of astigmatic correction may be one of the potential factors for the undercorrection.
The preclinical pharmacological profile of WAY-132983, a potent M1 preferring agonist.
Bartolomeo, A C; Morris, H; Buccafusco, J J; Kille, N; Rosenzweig-Lipson, S; Husbands, M G; Sabb, A L; Abou-Gharbia, M; Moyer, J A; Boast, C A
2000-02-01
Muscarinic M1 preferring agonists may improve cognitive deficits associated with Alzheimer's disease. Side effect assessment of the M1 preferring agonist WAY-132983 showed significant salivation (10 mg/kg i.p. or p.o.) and produced dose-dependent hypothermia after i.p. or p.o. administration. WAY-132983 significantly reduced scopolamine (0.3 mg/kg i.p.)-induced hyperswimming in mice. Cognitive assessment in rats used pretrained animals in a forced choice, 1-h delayed nonmatch-to-sample radial arm maze task. WAY-132983 (0.3 mg/kg i.p.) significantly reduced scopolamine (0.3 mg/kg s.c.)-induced errors. Oral WAY-132983 attenuated scopolamine-induced errors; that is, errors produced after combining scopolamine and WAY-132983 (up to 3 mg/kg p.o.) were not significantly increased compared with those of vehicle-treated control animals, whereas errors after scopolamine alone were significantly higher than those of control animals. With the use of miniosmotic pumps, 0.03 mg/kg/day (s.c.) WAY-132983 significantly reduced AF64A (3 nmol/3 microliter/lateral ventricle)-induced errors. Verification of AF64A cholinotoxicity showed significantly lower choline acetyltransferase activity in the hippocampi of AF64A-treated animals, with no significant changes in the striatum or frontal cortex. Cognitive assessment in primates involved the use of pretrained aged animals in a visual delayed match-to-sample procedure. Oral WAY-132983 significantly increased the number of correct responses during short and long delay interval testing. These effects were also apparent 24 h after administration. WAY-132983 exhibited cognitive benefit at doses lower than those producing undesirable effects; therefore, WAY-132983 is a potential candidate for improving the cognitive status of patients with Alzheimer's disease.
Analysis of the impact of error detection on computer performance
NASA Technical Reports Server (NTRS)
Shin, K. C.; Lee, Y. H.
1983-01-01
Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
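The role of error latency can be illustrated with a small Monte Carlo under stated, invented distributions: a fault induces an error with some probability, detection latency is exponential, and a result is unreliable if it is emitted while the error is still latent:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
p_err = 0.6                            # P(fault induces an error) -- assumed
latency = rng.exponential(5.0, n)      # detection latency (ms) -- assumed
t_output = rng.uniform(0.0, 20.0, n)   # time until result is emitted (ms)

induces = rng.random(n) < p_err
unreliable = induces & (latency > t_output)   # error still latent at output
print("P(unreliable result) ~", unreliable.mean())
```

Shortening the mean detection latency relative to the output time directly reduces the probability of an unreliable result, which is the effectiveness measure the model formalizes.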
Optical communication system performance with tracking error induced signal fading.
NASA Technical Reports Server (NTRS)
Tycz, M.; Fitzmaurice, M. W.; Premo, D. A.
1973-01-01
System performance is determined for an optical communication system using noncoherent detection in the presence of tracking error induced signal fading assuming (1) binary on-off modulation (OOK) with both fixed and adaptive threshold receivers, and (2) binary polarization modulation (BPM). BPM is shown to maintain its inherent 2- to 3-dB advantage over OOK when adaptive thresholding is used, and to have a substantially greater advantage when the OOK system is restricted to a fixed decision threshold.
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and an according complimentary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction look up table incorporated into a waveform phase generator.
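The complementary-distortion idea can be sketched numerically: multiply the synthesized waveform by the conjugate of the expected phase error so that the downstream droop-induced error cancels. The waveform parameters and droop model below are illustrative:

```python
import numpy as np

fs = 1.0e6                                     # sample rate (Hz), arbitrary
t = np.arange(4096) / fs
chirp = np.exp(1j * np.pi * 1.0e9 * t**2)      # ideal LFM radar waveform

# Hypothetical measured time-dependent phase error, e.g. from power droop,
# as would be tabulated in a phase-error-correction lookup table.
phase_err = 0.4 * (1 - np.exp(-t / 1.0e-3))    # radians

# Pre-distort with the complementary phase ...
predistorted = chirp * np.exp(-1j * phase_err)
# ... so that re-applying the droop downstream restores the ideal waveform.
after_amplifier = predistorted * np.exp(1j * phase_err)
residual = np.max(np.abs(np.angle(after_amplifier / chirp)))
print("max residual phase error (rad):", residual)
```

In a real system the correction table would come from a characterization measurement rather than a closed-form droop model.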
Huang, Juan; Hung, Li-Fang; Smith, Earl L.
2012-01-01
This study aimed to investigate the changes in ocular shape and relative peripheral refraction during the recovery from myopia produced by form deprivation (FD) and hyperopic defocus. FD was imposed in 6 monkeys by securing a diffuser lens over one eye; hyperopic defocus was produced in another 6 monkeys by fitting one eye with a -3 D spectacle lens. When unrestricted vision was re-established, the treated eyes recovered from the vision-induced central and peripheral refractive errors. The recovery of peripheral refractive errors was associated with corresponding changes in the shape of the posterior globe. The results suggest that vision can actively regulate ocular shape and the development of central and peripheral refractions in infant primates. PMID:23026012
Error catastrophe and phase transition in the empirical fitness landscape of HIV
NASA Astrophysics Data System (ADS)
Hart, Gregory R.; Ferguson, Andrew L.
2015-03-01
We have translated clinical sequence databases of the p6 HIV protein into an empirical fitness landscape quantifying viral replicative capacity as a function of the amino acid sequence. We show that the viral population resides close to a phase transition in sequence space corresponding to an "error catastrophe" beyond which there is lethal accumulation of mutations. Our model predicts that the phase transition may be induced by drug therapies that elevate the mutation rate, or by forcing mutations at particular amino acids. Applying immune pressure to any combination of killer T-cell targets cannot induce the transition, providing a rationale for why the viral protein can exist close to the error catastrophe without sustaining fatal fitness penalties due to adaptive immunity.
NASA Technical Reports Server (NTRS)
Marshall, Paul; Carts, Marty; Campbell, Art; Reed, Robert; Ladbury, Ray; Seidleck, Christina; Currie, Steve; Riggs, Pam; Fritz, Karl; Randall, Barb
2004-01-01
A viewgraph presentation that reviews recent SiGe bit error test data for different commercially available high speed SiGe BiCMOS chips that were subjected to various levels of heavy ion and proton radiation. Results for the tested chips at different operating speeds are displayed in line graphs.
Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi
2018-05-01
The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (Scandidos Delta4, PTW 2D-array, iRT systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used, and nine MLC error plans were generated for all nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1 mm, 2%/2 mm, and 3%/3 mm) and dose-difference (DD) criteria (1%, 2%, and 3%) were applied between the baseline and error-induced measurements. Some deviations of the MLC error sensitivity across the evaluation metrics and MLC error ranges were observed. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices showed a good linear correlation between the cumulative signal difference and the magnitude of the MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both the Delta4 and the 2D-array. However, small MLC errors for small aperture sizes, such as for lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage in detecting systematic MLC errors because of its large sensitive volume, while the other devices could not detect such errors in some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
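As a rough illustration of the global gamma metric used in studies like the one above (the dose profiles, grid, and criteria below are synthetic, not the study's data), a 1-D gamma pass rate combines a dose-difference term with a distance-to-agreement term:

```python
import numpy as np

def gamma_pass_rate(ref, ev, x, dose_crit=0.03, dta_mm=3.0):
    """1-D global gamma; dose_crit is a fraction of the reference maximum."""
    dmax = ref.max()
    gammas = []
    for xi, di in zip(x, ref):
        dd = (ev - di) / (dose_crit * dmax)   # dose-difference term
        dta = (x - xi) / dta_mm               # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dta**2).min())
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-10, 10, 201)             # positions (mm)
baseline = np.exp(-x**2 / 50.0)           # synthetic dose profile
shifted = np.exp(-(x - 0.5)**2 / 50.0)    # simulated 0.5 mm leaf-position shift
rate_33 = gamma_pass_rate(baseline, shifted, x)              # loose 3%/3 mm
rate_11 = gamma_pass_rate(baseline, shifted, x, 0.01, 1.0)   # strict 1%/1 mm
```

The loose 3%/3 mm criteria pass the shifted profile entirely, while the strict 1%/1 mm criteria flag discrepancies, mirroring the study's observation that small MLC errors can hide under loose gamma criteria.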
NASA Technical Reports Server (NTRS)
Tasca, D. M.
1981-01-01
Single event upset phenomena are discussed, taking into account cosmic ray induced errors in IIL microprocessors and logic devices, single event upsets in NMOS microprocessors, a prediction model for bipolar RAMs in a high energy ion/proton environment, the search for neutron-induced hard errors in VLSI structures, soft errors due to protons in the radiation belt, and the use of an ion microbeam to study single event upsets in microcircuits. Basic mechanisms in materials and devices are examined, giving attention to gamma induced noise in CCDs, the annealing of MOS capacitors, an analysis of photobleaching techniques for the radiation hardening of fiber optic data links, a hardened field insulator, the simulation of radiation damage in solids, and the manufacturing of radiation resistant optical fibers. Energy deposition and dosimetry is considered along with SGEMP/IEMP, radiation effects in devices, space radiation effects and spacecraft charging, EMP/SREMP, and aspects of fabrication, testing, and hardness assurance.
Model parameter-related optimal perturbations and their contributions to El Niño prediction errors
NASA Astrophysics Data System (ADS)
Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua
2018-04-01
Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found to be uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement of numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.
Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.
Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M
2003-05-13
Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. To explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive." The topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.
PCNA mono-ubiquitination and activation of translesion DNA polymerases by DNA polymerase {alpha}.
Suzuki, Motoshi; Niimi, Atsuko; Limsirichaikul, Siripan; Tomida, Shuta; Miao Huang, Qin; Izuta, Shunji; Usukura, Jiro; Itoh, Yasutomo; Hishida, Takashi; Akashi, Tomohiro; Nakagawa, Yoshiyuki; Kikuchi, Akihiko; Pavlov, Youri; Murate, Takashi; Takahashi, Takashi
2009-07-01
Translesion DNA synthesis (TLS) involves PCNA mono-ubiquitination and TLS DNA polymerases (pols). Recent evidence has shown that the mono-ubiquitination is induced not only by DNA damage but also by other factors that induce stalling of the DNA replication fork. We studied the effect of spontaneous DNA replication errors on PCNA mono-ubiquitination and TLS induction. In the pol1L868F strain, which expressed an error-prone pol alpha, PCNA was spontaneously mono-ubiquitinated. Pol alpha L868F had a rate-limiting step at the extension from mismatched primer termini. Electron microscopic observation showed the accumulation of a single-stranded region at the DNA replication fork in yeast cells. For pol alpha errors, pol zeta participated in a generation of +1 frameshifts. Furthermore, in the pol1L868F strain, UV-induced mutations were lower than in the wild-type and a pol delta mutant strain (pol3-5DV), and deletion of the RAD30 gene (pol eta) suppressed this defect. These data suggest that nucleotide misincorporation by pol alpha induces exposure of single-stranded DNA, PCNA mono-ubiquitination and activates TLS pols.
Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.
2015-01-01
Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMN). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30µM). Previous work showed that the paralytic zebrafish mutant known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and the developmental exposure window. PMID:25668718
NASA Astrophysics Data System (ADS)
Rojo, Pilar; Royo, Santiago; Caum, Jesus; Ramírez, Jorge; Madariaga, Ines
2015-02-01
Peripheral refraction, the refractive error present outside the main direction of gaze, has lately attracted interest due to its alleged relationship with the progression of myopia. The ray tracing procedures involved in its calculation need to follow an approach different from those used in conventional ophthalmic lens design, where refractive errors are compensated only in the main direction of gaze. We present a methodology for the evaluation of the peripheral refractive error in ophthalmic lenses, adapting the conventional generalized ray tracing approach to the requirements of the evaluation of peripheral refraction. The nodal point of the eye and a retinal conjugate surface are used to evaluate the three-dimensional distribution of refractive error around the fovea. The proposed approach enables us to calculate the three-dimensional peripheral refraction induced by any ophthalmic lens at any direction of gaze and to personalize the lens design to the requirements of the user. The complete evaluation process for a given user prescribed with a -5.76D ophthalmic lens for foveal vision is detailed, and comparative results are presented for cases in which the geometry of the lens is modified and in which the central refractive error is over- or undercorrected. The methodology is also applied to an emmetropic eye to show its application to refractive errors other than myopia.
Hoogeveen, Suzanne; Schjoedt, Uffe; van Elk, Michiel
2018-06-19
This study examines the effects of expected transcranial stimulation on the error(-related) negativity (Ne or ERN) and the sense of agency in participants who perform a cognitive control task. Placebo transcranial direct current stimulation was used to elicit expectations of transcranially induced cognitive improvement or impairment. The improvement/impairment manipulation affected both the Ne/ERN and the sense of agency (i.e., whether participants attributed errors to oneself or the brain stimulation device): Expected improvement increased the ERN in response to errors compared with both impairment and control conditions. Expected impairment made participants falsely attribute errors to the transcranial stimulation. This decrease in sense of agency was correlated with a reduced ERN amplitude. These results show that expectations about transcranial stimulation impact users' neural response to self-generated errors and the attribution of responsibility-especially when actions lead to negative outcomes. We discuss our findings in relation to predictive processing theory according to which the effect of prior expectations on the ERN reflects the brain's attempt to generate predictive models of incoming information. By demonstrating that induced expectations about transcranial stimulation can have effects at a neural level, that is, beyond mere demand characteristics, our findings highlight the potential for placebo brain stimulation as a promising tool for research.
Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D
Haye, R. J. La; Paz-Soldan, C.; Strait, E. J.
2015-01-23
DIII-D experiments show that fully penetrated resonant n=1 error field locked modes in Ohmic plasmas with safety factor q95 ≳ 3 grow to similar large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n=2/1) static error fields are shielded in Ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption.
Research of laser echo signal simulator
NASA Astrophysics Data System (ADS)
Xu, Rui; Shi, Rui; Wang, Xin; Li, Zhou
2015-11-01
Laser echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR. A system model and a time series model of the laser echo signal simulator are established. Factors that can induce fixed and random errors in the simulated return signals are identified, and these system insertion errors are then analyzed quantitatively. Using this theoretical model, the simulation system is investigated experimentally. The results, corrected by subtracting the fixed error, indicate that the range error of the simulated laser return signal is less than 0.25 m, and that the distance range the system can simulate is from 50 m to 20 km.
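The fixed-error correction mentioned above can be illustrated by estimating the systematic bias from repeated calibration shots against a known range and subtracting it, leaving only the random component (all values below are invented for illustration):

```python
import statistics

measured = [100.31, 100.27, 100.33, 100.29]   # simulated return ranges (m), made up
truth = 100.05                                # commanded calibration range (m)

# The fixed (systematic) error is the mean residual over calibration shots.
fixed_error = statistics.mean(measured) - truth
# Subtracting it leaves only the random, shot-to-shot component.
corrected = [m - fixed_error for m in measured]
```

After this correction the residual spread of `corrected` about `truth` reflects the random error alone, which is the quantity bounded at 0.25 m in the abstract.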
Truong, Trong-Kha; Guidon, Arnaud
2014-01-01
Purpose To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457
Effect of different head-neck-jaw postures on cervicocephalic kinesthetic sense
Zafar, Hamayun; Alghadir, Ahmad H.; Iqbal, Zaheen A.
2017-01-01
Objectives: To investigate the effect of different induced head-neck-jaw postures on head-neck relocation error among healthy subjects. Methods: 30 healthy adult male subjects participated in this study. Cervicocephalic kinesthetic sense was measured while standing, in habitual sitting, in habitual sitting with clenched jaw, and in habitual sitting with forward head posture during right rotation, left rotation, flexion and extension using the kinesthetic sensibility test. Results: Head-neck relocation error was least while standing, followed by habitual sitting, habitual sitting with forward head posture and habitual sitting with jaw clenched. However, there was no significant difference in error between the tested postures during any of the movements. Conclusions: To the best of our knowledge, this is the first study to examine the effect of different induced head-neck-jaw postures on head-neck position sense among healthy subjects. Assuming a posture for a short duration of time does not affect head-neck relocation error in normal healthy subjects. PMID:29199196
Emergence of DNA Polymerase ε Antimutators That Escape Error-Induced Extinction in Yeast
Williams, Lindsey N.; Herr, Alan J.; Preston, Bradley D.
2013-01-01
DNA polymerases (Pols) ε and δ perform the bulk of yeast leading- and lagging-strand DNA synthesis. Both Pols possess intrinsic proofreading exonucleases that edit errors during polymerization. Rare errors that elude proofreading are extended into duplex DNA and excised by the mismatch repair (MMR) system. Strains that lack Pol proofreading or MMR exhibit a 10- to 100-fold increase in spontaneous mutation rate (mutator phenotype), and inactivation of both Pol δ proofreading (pol3-01) and MMR is lethal due to replication error-induced extinction (EEX). It is unclear whether a similar synthetic lethal relationship exists between defects in Pol ε proofreading (pol2-4) and MMR. Using a plasmid-shuffling strategy in haploid Saccharomyces cerevisiae, we observed synthetic lethality of pol2-4 with alleles that completely abrogate MMR (msh2Δ, mlh1Δ, msh3Δ msh6Δ, or pms1Δ mlh3Δ) but not with partial MMR loss (msh3Δ, msh6Δ, pms1Δ, or mlh3Δ), indicating that high levels of unrepaired Pol ε errors drive extinction. However, variants that escape this error-induced extinction (eex mutants) frequently emerged. Five percent of pol2-4 msh2Δ eex mutants encoded second-site changes in Pol ε that reduced the pol2-4 mutator phenotype between 3- and 23-fold. The remaining eex alleles were extragenic to pol2-4. The locations of antimutator amino-acid changes in Pol ε and their effects on mutation spectra suggest multiple mechanisms of mutator suppression. Our data indicate that unrepaired leading- and lagging-strand polymerase errors drive extinction within a few cell divisions and suggest that there are polymerase-specific pathways of mutator suppression. The prevalence of suppressors extragenic to the Pol ε gene suggests that factors in addition to proofreading and MMR influence leading-strand DNA replication fidelity. PMID:23307893
SU-E-J-45: The Correlation Between CBCT Flat Panel Misalignment and 3D Image Guidance Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenton, O; Valdes, G; Yin, L
Purpose: To simulate the impact of CBCT flat panel misalignment on the image quality and the calculated correction vectors in 3D image guided proton therapy, and to determine if these calibration errors can be caught in our QA process. Methods: The X-ray source and detector geometrical calibration (flexmap) file of the CBCT system in the AdaPTinsight software (IBA proton therapy) was edited to induce known changes in the rotational and translational calibrations of the imaging panel. Translations of up to ±10 mm in the x, y and z directions (see supplemental) and rotational errors of up to ±3° were induced. The calibration files were then used to reconstruct the CBCT image of a pancreatic patient and a CatPhan phantom. Correction vectors were calculated for the patient using the software's auto match system and compared to baseline values. The CatPhan CBCT images were used for quantitative evaluation of image quality for each type of induced error. Results: Translations of 1 to 3 mm in the x and y calibration resulted in corresponding correction vector errors of equal magnitude. Similar 10 mm shifts were seen in the y-direction; however, in the x-direction, the image quality was too degraded for a match. These translational errors can be identified through differences in isocenter from orthogonal kV images taken during routine QA. Errors in the z-direction had no effect on the correction vector and image quality. Rotations of the imaging panel calibration resulted in corresponding correction vector rotations of the patient images. These rotations also resulted in degraded image quality, which can be identified through quantitative image quality metrics. Conclusion: Misalignment of CBCT geometry can lead to incorrect translational and rotational patient correction vectors. These errors can be identified through QA of the imaging isocenter as compared to orthogonal images, combined with monitoring of CBCT image quality.
Five-year lidar observational results and effects of El Chichon particles on Umkehr ozone data
NASA Astrophysics Data System (ADS)
Uchino, Osamu; Tabata, Isao; Kai, Kenji; Akita, Iwao
1988-08-01
Based on the values of the integrated backscattering coefficient B obtained from the ruby lidar measurements at the Meteorological Research Institute (MRI, at Tsukuba, Japan), the effect of dust particles due to two volcanic eruptions of Mt. El Chichon in 1982 on the Umkehr ozone data at the Tateno Aerological Observatory was determined. In addition, the effects of the aerosols on the Umkehr ozone data at Arosa, Switzerland were investigated using lidar data collected at Garmisch-Partenkirchen, Germany. It was found that both stratospheric and tropospheric aerosols induced a significant negative ozone error in the uppermost layers (33-47 km), caused a small and usually negative ozone error in layers between 16 and 33 km, and induced a significant positive ozone error in layers between 6 and 16 km.
NASA Astrophysics Data System (ADS)
Sinkin, Oleg V.; Grigoryan, Vladimir S.; Menyuk, Curtis R.
2006-12-01
We introduce a fully deterministic, computationally efficient method for characterizing the effect of nonlinearity in optical fiber transmission systems that utilize wavelength-division multiplexing and return-to-zero modulation. The method accurately accounts for bit-pattern-dependent nonlinear distortion due to collision-induced timing jitter and for amplifier noise. We apply this method to calculate the error probability as a function of channel spacing in a prototypical multichannel return-to-zero undersea system.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
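The 16-bit CRC mentioned above can be sketched compactly. The variant below uses the CCITT polynomial 0x1021 with an all-ones preset; treating this as the CCSDS-recommended variant is an assumption on my part, not a claim from the text:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bit-serial CRC-16 with polynomial 0x1021, MSB-first, preset 0xFFFF."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift left; on carry-out, fold in the generator polynomial.
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"       # hypothetical frame contents
checksum = crc16_ccitt(frame)
# A receiver recomputes the CRC; any single-bit flip yields a mismatch.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
```

In a decoder simulation like the one described, such a check would flag residual errors that slip past the concatenated RS/convolutional correction stage.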
Network Adjustment of Orbit Errors in SAR Interferometry
NASA Astrophysics Data System (ADS)
Bahr, Hermann; Hanssen, Ramon
2010-03-01
Orbit errors can induce significant long wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims for correcting orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular component as a linear function of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.
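The least-squares step described above can be sketched on a toy model: the unwrapped residual phase is treated as a linear ramp (from the baseline-error parameters) plus a constant atmospheric mean and noise. The numbers and the one-dimensional simplification are synthetic assumptions, not the paper's data:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100)        # normalized acquisition time
true_offset, true_rate = 2.0, -1.5    # phase offset (rad) and ramp rate (rad/unit time)

rng = np.random.default_rng(0)
# Observed unwrapped residual phase: ramp + constant-mean stochastic term.
phase = true_offset + true_rate * t + 0.05 * rng.standard_normal(t.size)

# Least-squares estimate of the two baseline-error parameters.
A = np.column_stack([np.ones_like(t), t])          # design matrix [1, t]
(offset, rate), *_ = np.linalg.lstsq(A, phase, rcond=None)
```

In the actual network adjustment, one such pair of parameters is estimated per interferogram and the overdetermined system ties them to per-acquisition orbit corrections.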
Inducible DNA-repair systems in yeast: competition for lesions.
Mitchel, R E; Morrison, D P
1987-03-01
DNA lesions may be recognized and repaired by more than one DNA-repair process. If two repair systems with different error frequencies have overlapping lesion specificity and one or both is inducible, the resulting variable competition for the lesions can change the biological consequences of these lesions. This concept was demonstrated by observing mutation in yeast cells (Saccharomyces cerevisiae) exposed to combinations of mutagens under conditions which influenced the induction of error-free recombinational repair or error-prone repair. Total mutation frequency was reduced in a manner proportional to the dose of 60Co-gamma- or 254 nm UV radiation delivered prior to or subsequent to an MNNG exposure. Suppression was greater per unit radiation dose in cells gamma-irradiated in O2 as compared to N2. A rad3 (excision-repair) mutant gave results similar to wild-type but mutation in a rad52 (rec-) mutant exposed to MNNG was not suppressed by radiation. Protein-synthesis inhibition with heat shock or cycloheximide indicated that it was the mutation due to MNNG and not that due to radiation which had changed. These results indicate that MNNG lesions are recognized by both the recombinational repair system and the inducible error-prone system, but that gamma-radiation induction of error-free recombinational repair resulted in increased competition for the lesions, thereby reducing mutation. Similarly, gamma-radiation exposure resulted in a radiation dose-dependent reduction in mutation due to MNU, EMS, ENU and 8-MOP + UVA, but no reduction in mutation due to MMS. These results suggest that the number of mutational MMS lesions recognizable by the recombinational repair system must be very small relative to those produced by the other agents. MNNG induction of the inducible error-prone systems however, did not alter mutation frequencies due to ENU or MMS exposure but, in contrast to radiation, increased the mutagenic effectiveness of EMS. 
These experiments demonstrate that in this lower eukaryote, mutagen exposure does not necessarily result in a fixed risk of mutation, but that the risk can be markedly influenced by a variety of external stimuli including heat shock or exposure to other mutagens.
Dopamine reward prediction error coding.
Schultz, Wolfram
2016-03-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
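The core quantity above, the difference between received and predicted reward, drives a simple learning rule. A schematic Rescorla-Wagner-style sketch (the learning rate and reward values are illustrative assumptions, not biological constants):

```python
def update(V, r, alpha=0.1):
    """One learning step driven by the prediction error delta = r - V."""
    delta = r - V              # positive if reward exceeds the prediction
    return V + alpha * delta, delta

V = 0.0                        # initial reward prediction
for _ in range(100):
    V, delta = update(V, r=1.0)
# As the reward becomes fully predicted, V approaches r and delta shrinks
# toward zero, mirroring baseline dopamine activity for predicted rewards.
```

Omitting the reward on a later trial would give delta = 0 - V ≈ -1, the depressed-activity (negative prediction error) case described in the abstract.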
Realtime mitigation of GPS SA errors using Loran-C
NASA Technical Reports Server (NTRS)
Braasch, Soo Y.
1994-01-01
The hybrid use of Loran-C with the Global Positioning System (GPS) was shown capable of providing a sole-means of enroute air radionavigation. By allowing pilots to fly direct to their destinations, use of this system is resulting in significant time savings and therefore fuel savings as well. However, a major error source limiting the accuracy of GPS is the intentional degradation of the GPS signal known as Selective Availability (SA). SA-induced position errors are highly correlated and far exceed all other error sources (horizontal position error: 100 meters, 95 percent). Realtime mitigation of SA errors from the position solution is highly desirable. How that can be achieved is discussed. The stability of Loran-C signals is exploited to reduce SA errors. The theory behind this technique is discussed and results using bench and flight data are given.
Investigating error structure of shuttle radar topography mission elevation data product
NASA Astrophysics Data System (ADS)
Becek, Kazimierz
2008-08-01
An attempt was made to experimentally assess the instrumental component of error of the C-band Shuttle Radar Topography Mission (SRTM) elevation data. This was achieved by comparing elevation data of 302 runways from airports all over the world with the SRTM data product. It was found that the rms of the instrumental error is about +/-1.55 m. Modeling of the remaining SRTM error sources, including terrain relief and pixel size, shows that downsampling from 30 m to 90 m (1 to 3 arc-sec pixels) worsened the SRTM vertical accuracy threefold. It is suspected that the proximity of large metallic objects is a source of large SRTM errors. The achieved error estimates allow a pixel-based accuracy assessment of the SRTM elevation data product to be constructed. Vegetation-induced errors were not considered in this work.
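The comparison above boils down to an rms of elevation differences between SRTM and reference values. A back-of-envelope sketch (the four sample values are fabricated, not the study's 302 runways):

```python
import math

srtm = [102.1, 48.7, 250.9, 33.0]       # SRTM elevations (m), invented
survey = [100.8, 50.1, 249.5, 34.2]     # surveyed runway elevations (m), invented

# The rms of the differences estimates the instrumental error component.
diffs = [a - b for a, b in zip(srtm, survey)]
rms = math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Runways suit this role because they are flat and vegetation-free, so terrain- and vegetation-induced error sources are largely excluded from the difference.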
Dopamine reward prediction error coding
Schultz, Wolfram
2016-01-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377
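The core computation described in this abstract can be written down in a few lines. The following is a minimal illustrative sketch of a prediction-error learning rule (Rescorla-Wagner style), not a model of dopamine neurons themselves; the function name and learning rate are arbitrary choices:

```python
# Minimal sketch of a reward prediction error (RPE) learning rule.
# delta = received reward - predicted reward; the prediction is then
# nudged toward the reward by a small learning rate.
def rpe_update(prediction, reward, learning_rate=0.1):
    delta = reward - prediction  # >0: better than expected; <0: worse
    new_prediction = prediction + learning_rate * delta
    return delta, new_prediction

# A fully predicted reward yields a zero prediction error,
# mirroring the baseline activity of dopamine neurons described above.
delta, _ = rpe_update(prediction=1.0, reward=1.0)
assert delta == 0.0
```

Iterating this update drives the prediction toward the delivered reward, at which point the error, and with it the learning signal, vanishes.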
Explicitly solvable complex Chebyshev approximation problems related to sine polynomials
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.
Lexical and Semantic Binding in Verbal Short-Term Memory
ERIC Educational Resources Information Center
Jefferies, Elizabeth; Frankish, Clive R.; Ralph, Matthew A. Lambon
2006-01-01
Semantic dementia patients make numerous phoneme migration errors in their immediate serial recall of poorly comprehended words. In this study, similar errors were induced in the word recall of healthy participants by presenting unpredictable mixed lists of words and nonwords. This technique revealed that lexicality, word frequency, imageability,…
Inducing Speech Errors in Dysarthria Using Tongue Twisters
ERIC Educational Resources Information Center
Kember, Heather; Connaghan, Kathryn; Patel, Rupal
2017-01-01
Although tongue twisters have been widely used to study speech production in healthy speakers, few studies have employed this methodology for individuals with speech impairment. The present study compared tongue twister errors produced by adults with dysarthria and age-matched healthy controls. Eight speakers (four female, four male; mean age =…
Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.
Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E
2011-01-01
Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis. 
Increasing the error rate during ribosomal decoding induced competence, while decreasing the error rate repressed competence. This pattern of regulation was promoted by the HtrA protease, which selectively repressed competence when translational fidelity was high but not when accuracy was low. Our findings demonstrate that this organism is able to monitor the accuracy of information used for protein biosynthesis and suggest that errors trigger a response addressing both the immediate challenge of misfolded proteins and, through genetic exchange, upstream coding errors that may underlie protein folding defects. This pathway may represent an evolutionary strategy for maintaining the coding integrity of the genome.
Measurement errors in voice-key naming latency for Hiragana.
Yamada, Jun; Tamaoka, Katsuo
2003-12-01
This study makes explicit the limitations and possibilities of voice-key naming latency research on single hiragana symbols (a Japanese syllabic script) by examining three sets of voice-key naming data against Sakuma, Fushimi, and Tatsumi's 1997 speech-analyzer voice-waveform data. Analysis showed that voice-key measurement errors can be substantial in standard procedures as they may conceal the true effects of significant variables involved in hiragana-naming behavior. While one can avoid voice-key measurement errors to some extent by applying Sakuma, et al.'s deltas and by excluding initial phonemes which induce measurement errors, such errors may be ignored when test items are words and other higher-level linguistic materials.
NASA Astrophysics Data System (ADS)
Nunez, F.; Romero, A.; Clua, J.; Mas, J.; Tomas, A.; Catalan, A.; Castellsaguer, J.
2005-08-01
MARES (Muscle Atrophy Research and Exercise System) is a computerized ergometer for neuromuscular research to be flown and installed onboard the International Space Station in 2007. The validity of the data acquired depends on controlling and reducing all significant error sources. One of them is the misalignment of the joint rotation axis with respect to the motor axis. The error induced on the measurements is proportional to the misalignment between the two axes; therefore, the restraint system's performance is critical [1]. The MARES HRS (Human Restraint System) assures alignment within an acceptable range while the exercise is performed (elbow movement: 13.94 mm ± 5.45; knee movement: 22.36 mm ± 6.06) and reproducibility of human positioning (elbow movement: 2.82 mm ± 1.56; knee movement: 7.45 mm ± 4.8). These results allow limiting the measurement errors induced by misalignment.
A radiation tolerant Data link board for the ATLAS Tile Cal upgrade
NASA Astrophysics Data System (ADS)
Åkerstedt, H.; Bohm, C.; Muschter, S.; Silverstein, S.; Valdes, E.
2016-01-01
This paper describes the latest, full-functionality revision of the high-speed data link board developed for the Phase-2 upgrade of the ATLAS hadronic Tile Calorimeter. The link board design is highly redundant, with digital functionality implemented in two Xilinx Kintex-7 FPGAs and two Molex QSFP+ electro-optic modules with uplinks running at 10 Gbps. The FPGAs are remotely configured through two radiation-hard CERN GBTx deserialisers, which also provide the LHC-synchronous system clock. The redundant design eliminates virtually all single-point error modes, and a combination of triple-mode redundancy (TMR) and internal and external scrubbing will provide adequate protection against radiation-induced errors. The small portion of the FPGA design that cannot be protected by TMR will be the dominant source of radiation-induced errors, even though that area is small.
Damage Initiation in Two-Dimensional, Woven, Carbon-Carbon Composites
1988-12-01
… biaxial stress interaction were themselves a function of the applied biaxial stress ratio, and thus the error in measuring F12 depended on F12. To find the … the supported directions. Discretizing the model will tend to induce error in the computed nodal displacements when compared to an exact continuum solution; however, for an increasing number of elements in the structural model, the net error should converge to zero (3:94). The inherent flexibility in …
NASA Astrophysics Data System (ADS)
Zhou, Yanru; Zhao, Yuxiang; Tian, Hui; Zhang, Dengwei; Huang, Tengchao; Miao, Lijun; Shu, Xiaowu; Che, Shuangliang; Liu, Cheng
2016-12-01
In an axial magnetic field (AMF), which is perpendicular to the plane of the fiber coil, a polarization-maintaining fiber optic gyro (PM-FOG) exhibits an axial magnetic error. This error is linearly related to the intensity of the AMF, the radius of the fiber coil, and the light wavelength, and is also influenced by the distribution of fiber twist. Once a PM-FOG is completely manufactured, this error depends only linearly on the AMF. A real-time compensation model is established to eliminate the error, and the experimental results show that the axial magnetic error of the PM-FOG is decreased from 5.83 to 0.09 deg/h in a 12 G AMF, an 18 dB suppression.
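Since the error is reported to depend only linearly on the axial field once the gyro is built, the compensation reduces to a single coefficient. A hedged sketch of such a linear model; the calibration numbers are invented for illustration and the function names are not from the paper:

```python
def fit_linear_coefficient(fields, errors):
    """Least-squares slope k for error = k * B (zero intercept), from
    calibration measurements of drift error vs. axial field intensity."""
    num = sum(b * e for b, e in zip(fields, errors))
    den = sum(b * b for b in fields)
    return num / den

def compensate(measured_rate, axial_field, k):
    """Subtract the field-proportional error from the gyro output."""
    return measured_rate - k * axial_field

# Hypothetical calibration: ~0.5 deg/h of drift error per gauss of axial field.
k = fit_linear_coefficient([4.0, 8.0, 12.0], [2.0, 4.0, 6.0])
```

In a real-time scheme of the kind described, the axial field would be measured (or known) during operation and the correction applied sample by sample.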
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of the numerical control (NC) machine tool are used so as to avoid installation problems. A mathematical model for estimating the cutting error was proposed, which computes the relative position of the cutting point and the tool center point (TCP) from internal sensor readings based on gear cutting theory. To verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generating grinding process. The cutting error of the gear was estimated, and the factors that induce cutting error were analyzed. The simulation and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the workpiece during the machining process.
6-Hydroxy dopamine does not affect lens-induced refractive errors but suppresses deprivation myopia.
Schaeffel, F; Hagel, G; Bartmann, M; Kohler, K; Zrenner, E
1994-01-01
Degradation of the retinal image by translucent occluders during postnatal development induces axial myopia in chickens, tree shrews and monkeys. Local visual deprivation produces myopia even in local regions of the eye, and neither accommodation nor an intact connection between the eye and the brain is necessary. Therefore, it is an important question whether a similar local-retinal pathway, translating visual information into growth or stretch signals to the underlying sclera, acts to emmetropize the growing eye. Until now it has not been known whether occluder deprivation triggers the same eye growth (or scleral stretch) mechanisms that are responsible for the visual guidance of normal refractive development. We report here that, in chickens, 6-hydroxy dopamine suppresses deprivation-induced myopia but has no effect on the magnitude of changes in axial eye elongation induced by spectacle lenses. This result suggests that, in chickens with normal accommodation, two pharmacologically distinct feedback loops may be responsible for deprivation myopia and lens-induced refractive errors.
Calculation of cosmic ray induced single event upsets: Program CRUP (Cosmic Ray Upset Program)
NASA Astrophysics Data System (ADS)
Shapiro, P.
1983-09-01
This report documents PROGRAM CRUP, COSMIC RAY UPSET PROGRAM. The computer program calculates cosmic ray induced single-event error rates in microelectronic circuits exposed to several representative cosmic-ray environments.
Ribot-Ciscar, Edith; Aimonetti, Jean-Marc; Azulay, Jean-Philippe
2017-12-15
The present study investigates whether proprioceptive training, based on kinesthetic illusions, can help re-educate the processing of muscle proprioceptive input, which is impaired in patients with Parkinson's disease (PD). The processing of proprioceptive input before and after training was evaluated by determining the error in the amplitude of a voluntary ankle dorsiflexion movement (20°) induced by applying vibration to the tendon of the gastrocnemius-soleus muscle (a vibration-induced movement error). The training consisted of the subjects focusing their attention upon a series of illusory movements of the ankle. Eleven PD patients and eleven age-matched control subjects were tested. Before training, vibration reduced dorsiflexion amplitude in controls by 4.3° (P<0.001); conversely, vibration had no significant effect on movement amplitude in PD patients (reduction of 2.1°, P=0.20). After training, vibration significantly reduced the estimated movement amplitude in PD patients by 5.3° (P=0.01). This re-emergence of a vibration-induced error leads us to conclude that proprioceptive training based on kinesthetic illusions is a simple means of re-educating the processing of muscle proprioceptive input in PD patients. Such complementary training should be included in rehabilitation programs that presently focus on improving balance and motor performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Hong, KyungPyo; Jeong, Eun-Kee; Wall, T. Scott; Drakos, Stavros G.; Kim, Daniel
2015-01-01
Purpose To develop and evaluate a wideband arrhythmia-insensitive-rapid (AIR) pulse sequence for cardiac T1 mapping without image artifacts induced by implantable-cardioverter-defibrillator (ICD). Methods We developed a wideband AIR pulse sequence by incorporating a saturation pulse with wide frequency bandwidth (8.9 kHz), in order to achieve uniform T1 weighting in the heart with ICD. We tested the performance of original and “wideband” AIR cardiac T1 mapping pulse sequences in phantom and human experiments at 1.5T. Results In 5 phantoms representing native myocardium and blood and post-contrast blood/tissue T1 values, compared with the control T1 values measured with an inversion-recovery pulse sequence without ICD, T1 values measured with original AIR with ICD were considerably lower (absolute percent error >29%), whereas T1 values measured with wideband AIR with ICD were similar (absolute percent error <5%). Similarly, in 11 human subjects, compared with the control T1 values measured with original AIR without ICD, T1 measured with original AIR with ICD was significantly lower (absolute percent error >10.1%), whereas T1 measured with wideband AIR with ICD was similar (absolute percent error <2.0%). Conclusion This study demonstrates the feasibility of a wideband pulse sequence for cardiac T1 mapping without significant image artifacts induced by ICD. PMID:25975192
NASA Astrophysics Data System (ADS)
Ikeura, Takuro; Nozaki, Takayuki; Shiota, Yoichi; Yamamoto, Tatsuya; Imamura, Hiroshi; Kubota, Hitoshi; Fukushima, Akio; Suzuki, Yoshishige; Yuasa, Shinji
2018-04-01
Using macro-spin modeling, we studied the reduction in the write error rate (WER) of voltage-induced dynamic magnetization switching by enhancing the effective thermal stability of the free layer using a voltage-controlled magnetic anisotropy change. Marked reductions in WER can be achieved by introducing reverse bias voltage pulses both before and after the write pulse. This procedure suppresses the thermal fluctuations of magnetization in the initial and final states. The proposed reverse bias method can offer a new way of improving the writing stability of voltage-driven spintronic devices.
NASA Astrophysics Data System (ADS)
Chen, R. M.; Diggins, Z. J.; Mahatme, N. N.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Zhang, H.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
The single-event sensitivity of bulk 40-nm sequential circuits is investigated as a function of temperature and supply voltage. An overall increase in SEU cross section versus temperature is observed at relatively high supply voltages. However, at low supply voltages, there is a threshold temperature beyond which the SEU cross section decreases with further increases in temperature. Single-event transient induced errors in flip-flops also increase versus temperature at relatively high supply voltages and are more sensitive to temperature variation than those caused by single-event upsets.
Using warnings to reduce categorical false memories in younger and older adults.
Carmichael, Anna M; Gutchess, Angela H
2016-07-01
Warnings about memory errors can reduce their incidence, although past work has largely focused on associative memory errors. The current study sought to explore whether warnings could be tailored to specifically reduce false recall of categorical information in both younger and older populations. Before encoding word pairs designed to induce categorical false memories, half of the younger and older participants were warned to avoid committing these types of memory errors. Older adults who received a warning committed fewer categorical memory errors, as well as other types of semantic memory errors, than those who did not receive a warning. In contrast, young adults' memory errors did not differ for the warning versus no-warning groups. Our findings provide evidence for the effectiveness of warnings at reducing categorical memory errors in older adults, perhaps by supporting source monitoring, reduction in reliance on gist traces, or through effective metacognitive strategies.
Space charge enhanced plasma gradient effects on satellite electric field measurements
NASA Technical Reports Server (NTRS)
Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.
1991-01-01
It has been recognized that plasma gradients can cause errors in magnetospheric electric field measurements made by double probes. This paper discusses space charge enhanced Plasma Gradient Induced Error (PGIE) in general terms, presents the results of a laboratory experiment designed to demonstrate this error, and derives a simple expression that quantifies it. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure that the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors under space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady-state space charge enhanced PGIE measured by two identical current-biased probes.
ERIC Educational Resources Information Center
Taylor, Matthew A.; Skourides, Andreas; Alvero, Alicia M.
2012-01-01
Interval recording procedures are used by persons who collect data through observation to estimate the cumulative occurrence and nonoccurrence of behavior/events. Although interval recording procedures can increase the efficiency of observational data collection, they can also induce error from the observer. In the present study, 50 observers were…
Quantum error correction of continuous-variable states against Gaussian noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ralph, T. C.
2011-08-15
We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.
A regret-induced status-quo bias
Nicolle, A.; Fleming, S.M.; Bach, D.R.; Driver, J.; Dolan, R. J.
2011-01-01
A suboptimal bias towards accepting the ‘status-quo’ option in decision-making is well established behaviorally, but the underlying neural mechanisms are less clear. Behavioral evidence suggests the emotion of regret is higher when errors arise from rejection rather than acceptance of a status-quo option. Such asymmetry in the genesis of regret might drive the status-quo bias on subsequent decisions, if indeed erroneous status-quo rejections have a greater neuronal impact than erroneous status-quo acceptances. To test this, we acquired human fMRI data during a difficult perceptual decision task that incorporated a trial-to-trial intrinsic status-quo option, with explicit signaling of outcomes (error or correct). Behaviorally, experienced regret was higher after an erroneous status-quo rejection compared to acceptance. Anterior insula and medial prefrontal cortex showed increased BOLD signal after such status-quo rejection errors. In line with our hypothesis, a similar pattern of signal change predicted acceptance of the status-quo on a subsequent trial. Thus, our data link a regret-induced status-quo bias to error-related activity on the preceding trial. PMID:21368043
Astigmatism following retinal detachment surgery.
Goel, R; Crewdson, J; Chignell, A H
1983-01-01
Eighty-three patients on whom successful retinal detachment surgery had been performed were studied to note astigmatic changes following surgery. In the majority of cases the errors following such surgery are of no great clinical importance. However, in some situations a high degree of astigmatism may be produced. This study showed that these sequelae are particularly likely after radial buckling procedures, and surgeons favouring these techniques should be aware that astigmatic errors can be induced. The astigmatic errors may persist for several years after surgery. PMID:6838807
ANSYS simulation of the capacitance coupling of quartz tuning fork gyroscope
NASA Astrophysics Data System (ADS)
Zhang, Qing; Feng, Lihui; Zhao, Ke; Cui, Fang; Sun, Yu-nan
2013-12-01
Coupling error is one of the main error sources of the quartz tuning fork gyroscope. The mechanism of the capacitance coupling error is analyzed in this article. The finite element method (FEM) is used to simulate the structure of the quartz tuning fork in the ANSYS software. The voltage output induced by the capacitance coupling is simulated with a harmonic analysis, and the characteristics of the electrical and mechanical parameters influenced by the capacitance coupling between the drive and sense electrodes are discussed with a transient analysis.
On the use of biomathematical models in patient-specific IMRT dose QA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhen Heming; Nelms, Benjamin E.; Tome, Wolfgang A.
2013-07-15
Purpose: To investigate the use of biomathematical models such as tumor control probability (TCP) and normal tissue complication probability (NTCP) as new quality assurance (QA) metrics. Methods: Five different types of error (MLC transmission, MLC penumbra, MLC tongue and groove, machine output, and MLC position) were intentionally induced in 40 clinical intensity modulated radiation therapy (IMRT) patient plans (20 head-and-neck cases and 20 prostate cases) to simulate both treatment planning system errors and machine delivery errors in the IMRT QA process. The changes in TCP and NTCP for eight different anatomic structures (head and neck: CTV, GTV, both parotids, spinal cord, larynx; prostate: CTV, rectal wall) were calculated as the new QA metrics to quantify the clinical impact on patients. The correlation between the change in TCP/NTCP and the change in selected DVH values was also evaluated, and the relation between the TCP/NTCP change and the characteristics of the TCP/NTCP curves is discussed. Results: ΔTCP and ΔNTCP were summarized for each type of induced error and each structure. The changes/degradations in TCP and NTCP caused by the errors vary widely depending on dose patterns unique to each plan, and are good indicators of each plan's "robustness" to that type of error. Conclusions: In this in silico QA study the authors have demonstrated the possibility of using biomathematical models not only as patient-specific QA metrics but also as objective indicators that quantify, pretreatment, a plan's robustness with respect to possible error types.
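As a hedged illustration of a TCP metric of this kind, the sketch below uses a common logistic dose-response form, not necessarily the specific model used by the authors; all parameter values are invented:

```python
def tcp_logistic(dose, d50, gamma50):
    """Logistic tumor control probability:
    TCP = 1 / (1 + (D50/D)^(4*gamma50)),
    where d50 is the dose giving 50% control and gamma50 the
    normalized slope of the dose-response curve at D50."""
    return 1.0 / (1.0 + (d50 / dose) ** (4.0 * gamma50))

# A delivery error that shifts the dose changes TCP; the size of that
# shift is one possible plan-robustness metric (illustrative numbers).
planned = tcp_logistic(dose=70.0, d50=60.0, gamma50=2.0)
perturbed = tcp_logistic(dose=68.0, d50=60.0, gamma50=2.0)
delta_tcp = planned - perturbed
```

A plan whose dose distribution sits on the steep part of the curve shows a large ΔTCP for a small dose error, i.e. low robustness, which is the intuition behind using these models as QA metrics.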
CCD image sensor induced error in PIV applications
NASA Astrophysics Data System (ADS)
Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.
2014-06-01
The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels), which is the order of magnitude that other typical PIV errors, such as peak-locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the magnitude of the CCD readout bias error. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.
Functional language shift to the right hemisphere in patients with language-eloquent brain tumors.
Krieg, Sandro M; Sollmann, Nico; Hauck, Theresa; Ille, Sebastian; Foerschler, Annette; Meyer, Bernhard; Ringel, Florian
2013-01-01
Language function is mainly located within the left hemisphere of the brain, especially in right-handed subjects. However, functional MRI (fMRI) has demonstrated changes of language organization in patients with left-sided perisylvian lesions to the right hemisphere. Because intracerebral lesions can impair fMRI, this study was designed to investigate human language plasticity with a virtual lesion model using repetitive navigated transcranial magnetic stimulation (rTMS). Fifteen patients with lesions of left-sided language-eloquent brain areas and 50 healthy and purely right-handed participants underwent bilateral rTMS language mapping via an object-naming task. All patients were proven to have left-sided language function during awake surgery. The rTMS-induced language errors were categorized into 6 different error types. The error ratio (induced errors/number of stimulations) was determined for each brain region on both hemispheres. A hemispheric dominance ratio was then defined for each region as the quotient of the error ratio (left/right) of the corresponding area of both hemispheres (ratio >1 = left dominant; ratio <1 = right dominant). Patients with language-eloquent lesions showed a statistically significantly lower ratio than healthy participants concerning "all errors" and "all errors without hesitations", which indicates a higher participation of the right hemisphere in language function. Yet, there was no cortical region with a pronounced difference in language dominance compared to the whole hemisphere. This is the first study to show, by means of an anatomically accurate virtual lesion model, that a shift of language function to the non-dominant hemisphere can occur.
NASA Astrophysics Data System (ADS)
Richter, J.; Mayer, J.; Weigand, B.
2018-02-01
Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematical error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
Niccum, Brittany A; Lee, Heewook; MohammedIsmail, Wazim; Tang, Haixu; Foster, Patricia L
2018-06-15
When the DNA polymerase that replicates the Escherichia coli chromosome, DNA Pol III, makes an error, there are two primary defenses against mutation: proofreading by the epsilon subunit of the holoenzyme and mismatch repair. In proofreading deficient strains, mismatch repair is partially saturated and the cell's response to DNA damage, the SOS response, may be partially induced. To investigate the nature of replication errors, we used mutation accumulation experiments and whole genome sequencing to determine mutation rates and mutational spectra across the entire chromosome of strains deficient in proofreading, mismatch repair, and the SOS response. We report that a proofreading-deficient strain has a mutation rate 4,000-fold greater than wild-type strains. While the SOS response may be induced in these cells, it does not contribute to the mutational load. Inactivating mismatch repair in a proofreading-deficient strain increases the mutation rate another 1.5-fold. DNA polymerase has a bias for converting G:C to A:T base pairs, but proofreading reduces the impact of these mutations, helping to maintain the genomic G:C content. These findings give an unprecedented view of how polymerase and error-correction pathways work together to maintain E. coli's low mutation rate of 1 per thousand generations. Copyright © 2018, Genetics.
MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard
2016-01-01
Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
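A whole-image polynomial correction of the kind evaluated above can be sketched as a least-squares fit of a low-order polynomial to the phase in static-tissue pixels, subtracted from the full image. The following is a first-order sketch under that assumed form, not the authors' exact implementation:

```python
import numpy as np

def polynomial_phase_correction(phase, static_mask):
    """Fit a first-order 2D polynomial (a + b*x + c*y) to the phase in
    static-tissue pixels and subtract it from the whole image, removing
    a slowly varying eddy-current-induced phase offset."""
    ny, nx = phase.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Design matrix built only from pixels flagged as static tissue.
    A = np.column_stack([np.ones(int(static_mask.sum())),
                         x[static_mask], y[static_mask]])
    coeffs, *_ = np.linalg.lstsq(A, phase[static_mask], rcond=None)
    background = coeffs[0] + coeffs[1] * x + coeffs[2] * y
    return phase - background
```

Because vessels are excluded from the fit via the static mask, their flow-encoded phase is preserved while the smooth background bias is removed.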
Diuk, Carlos; Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew; Niv, Yael
2013-03-27
Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously.
An Analysis of Ripple and Error Fields Induced by a Blanket in the CFETR
NASA Astrophysics Data System (ADS)
Yu, Guanying; Liu, Xufeng; Liu, Songlin
2016-10-01
The Chinese Fusion Engineering Tokamak Reactor (CFETR) is an important intermediate device between ITER and DEMO. The Water Cooled Ceramic Breeder (WCCB) blanket, whose structural material is mainly Reduced Activation Ferritic/Martensitic (RAFM) steel, is one of the candidate conceptual blanket designs. An analysis of the ripple and error fields induced by the RAFM steel in the WCCB was performed using static magnetic analysis in the ANSYS code. The blanket produces a significant additional magnetic field, which leads to an increased ripple field. The maximum ripple along the separatrix line reaches 0.53%, which exceeds the acceptable design value of 0.5%. In addition, when one blanket module is removed for heating purposes, the resulting error field is calculated to be well outside the requirement. Supported by the National Natural Science Foundation of China (No. 11175207) and the National Magnetic Confinement Fusion Program of China (No. 2013GB108004).
NASA Technical Reports Server (NTRS)
Green, Del L.; Walker, Eric L.; Everhart, Joel L.
2006-01-01
Minimization of uncertainty is essential to extend the usable range of the 15-psid Electronically Scanned Pressure (ESP) transducer measurements to the low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources inducing much of this uncertainty requires a well defined and controlled calibration method. Employing such a controlled calibration system, several studies were conducted that provide quantitative information detailing the required controls needed to minimize environmental and human induced error sources. Results of temperature, environmental pressure, over-pressurization, and set point randomization studies for the 15-psid transducers are presented along with a comparison of two regression methods using data acquired with both 0.36-psid and 15-psid transducers. Together these results provide insight into procedural and environmental controls required for long term high-accuracy pressure measurements near 0.01 psia in the hypersonic testing environment using 15-psid ESP transducers.
Middione, Matthew J; Thompson, Richard B; Ennis, Daniel B
2014-06-01
To investigate a novel phase-contrast MRI velocity-encoding technique for faster imaging and reduced chemical shift-induced phase errors. Velocity encoding with the slice select refocusing gradient achieves the target gradient moment by time shifting the refocusing gradient, which enables the use of the minimum in-phase echo time (TE) for faster imaging and reduced chemical shift-induced phase errors. Net forward flow was compared in 10 healthy subjects (N = 10) within the ascending aorta (aAo), main pulmonary artery (PA), and right/left pulmonary arteries (RPA/LPA) using conventional flow compensated and flow encoded (401 Hz/px and TE = 3.08 ms) and slice select refocused gradient velocity encoding (814 Hz/px and TE = 2.46 ms) at 3 T. Improved net forward flow agreement was measured across all vessels for slice select refocused gradient compared to flow compensated and flow encoded: aAo vs. PA (1.7% ± 1.9% vs. 5.8% ± 2.8%, P = 0.002), aAo vs. RPA + LPA (2.1% ± 1.7% vs. 6.0% ± 4.3%, P = 0.03), and PA vs. RPA + LPA (2.9% ± 2.1% vs. 6.1% ± 6.3%, P = 0.04), while increasing temporal resolution (35%) and signal-to-noise ratio (33%). Slice select refocused gradient phase-contrast MRI with a high receiver bandwidth and minimum in-phase TE provides more accurate and less variable flow measurements through the reduction of chemical shift-induced phase errors and a reduced TE/repetition time, which can be used to increase the temporal/spatial resolution and/or reduce breath hold durations. Copyright © 2013 Wiley Periodicals, Inc.
Gibbs, P E; Kilbey, B J; Banerjee, S K; Lawrence, C W
1993-05-01
We have compared the mutagenic properties of a T-T cyclobutane dimer in baker's yeast, Saccharomyces cerevisiae, with those in Escherichia coli by transforming each of these species with the same single-stranded shuttle vector carrying either the cis-syn or the trans-syn isomer of this UV photoproduct at a unique site. The mutagenic properties investigated were the frequency of replicational bypass of the photoproduct, the error rate of bypass, and the mutation spectrum. In SOS-induced E. coli, the cis-syn dimer was bypassed in approximately 16% of the vector molecules, and 7.6% of the bypass products had targeted mutations. In S. cerevisiae, however, bypass occurred in about 80% of these molecules, and the bypass was at least 19-fold more accurate (approximately 0.4% targeted mutations). Each of these yeast mutations was a single unique event, and none were like those in E. coli, suggesting that in fact the difference in error rate is much greater. Bypass of the trans-syn dimer occurred in about 17% of the vector molecules in both species, but with this isomer the error rate was higher in S. cerevisiae (21 to 36% targeted mutations) than in E. coli (13%). However, the spectra of mutations induced by the latter photoproduct were virtually identical in the two organisms. We conclude that bypass and error frequencies are determined both by the structure of the photoproduct-containing template and by the particular replication proteins concerned but that the types of mutations induced depend predominantly on the structure of the template. Unlike E. coli, bypass in S. cerevisiae did not require UV-induced functions.
NASA Technical Reports Server (NTRS)
1987-01-01
In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.
Reversal of photon-scattering errors in atomic qubits.
Akerman, N; Kotler, S; Glickman, Y; Ozeri, R
2012-09-07
Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.
Wavefront error budget and optical manufacturing tolerance analysis for 1.8m telescope system
NASA Astrophysics Data System (ADS)
Wei, Kai; Zhang, Xuejun; Xian, Hao; Rao, Changhui; Zhang, Yudong
2010-05-01
We present the wavefront error budget and optical manufacturing tolerance analysis for the 1.8m telescope. The error budget accounts for aberrations induced by optical design residuals, manufacturing errors, mounting effects, and misalignments. The initial error budget has been generated from the top down. There will also be an ongoing effort to track the errors from the bottom up, which will aid in identifying critical areas of concern. Resolving conflicts will involve a continual process of review and comparison of the top-down and bottom-up approaches, modifying both as needed to meet the top-level requirements in the end. The adaptive optics system will correct for some of the telescope system imperfections, but it cannot be assumed that all errors will be corrected. Therefore, two kinds of error budgets are presented: a non-AO top-down error budget and a with-AO system error budget. The main advantage of the method is that it simultaneously describes the final performance of the telescope and gives the optical manufacturer the maximum freedom to define and possibly modify its own manufacturing error budget.
Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi
2016-12-01
Ultrasound fusion imaging is an emerging tool and benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for the respiratory liver motion. The key steps included: 1) Build a subject-specific liver motion model for the current subject online and perform the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) During fusion imaging, compensate for liver motion first using the motion model, and then using an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38mm to 4.26±0.78mm by using the motion model only. The fusion error further decreased to 0.63±0.53mm by using the registration method. The registration method also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58mm to 6.12±2.90mm by using the motion model alone. Moreover, the fusion error decreased to 1.96±0.33mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error to improve the fusion image quality. This method can also reduce the error correction dependency on the initial registration of ultrasound and MR images.
Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bishop, Lauri; Khan, Moiz; Martelli, Dario; Quinn, Lori; Stein, Joel; Agrawal, Sunil
2017-10-01
Many robotic devices in rehabilitation incorporate an assist-as-needed haptic guidance paradigm to promote training. This error reduction model, while beneficial for skill acquisition, could be detrimental for long-term retention. Error augmentation (EA) models have been explored as alternatives. A robotic Tethered Pelvic Assist Device has been developed to study force application to the pelvis during gait and was used here to induce weight shift onto the paretic (error reduction) or nonparetic (error augmentation) limb during treadmill training. The purpose of these case reports is to examine the effects of training with these two paradigms to reduce load force asymmetry during gait in two individuals after stroke (>6 mos). Participants presented with baseline gait asymmetry, although both were independent community ambulators. Participants underwent 1-hr training sessions for 3 days using either the error reduction or error augmentation model. Outcomes included the Borg rating of perceived exertion scale for treatment tolerance and measures of force and stance symmetry. Both participants tolerated training. Force symmetry (measured on the treadmill) improved from pretraining to posttraining (36.58% and 14.64% gains), however with limited transfer to overground gait measures (stance symmetry gains of 9.74% and 16.21%). Training with the Tethered Pelvic Assist Device proved feasible for improving force symmetry on the treadmill irrespective of training model. Future work should consider methods to increase transfer to overground gait.
Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series
NASA Astrophysics Data System (ADS)
Sugihara, George; May, Robert M.
1990-04-01
An approach is presented for making short-term predictions about the trajectories of chaotic dynamical systems. The method is applied to data on measles, chickenpox, and marine phytoplankton populations, to show how apparent noise associated with deterministic chaos can be distinguished from sampling error and other sources of externally induced environmental noise.
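The short-term prediction idea can be sketched with a nearest-neighbour forecaster in delay-embedding space, in the spirit of the simplex-projection method; the embedding dimension, neighbour count and logistic-map test signal below are illustrative choices, not the paper's settings. For a chaotic series, short-horizon forecasts correlate strongly with the truth, while for uncorrelated noise the skill stays near zero:

```python
# Nearest-neighbour forecasting in delay-embedding space, in the spirit of
# simplex projection. Embedding dimension, neighbour count and the test
# series are illustrative choices, not the paper's settings.
import numpy as np

def simplex_forecast(series, embed_dim=3, horizon=1, n_neighbors=4):
    """Correlation between nearest-neighbour forecasts and the true future."""
    E, h = embed_dim, horizon
    idx = range(E - 1, len(series) - h)
    vecs = np.array([series[t - E + 1:t + 1] for t in idx])    # delay vectors
    targets = np.array([series[t + h] for t in idx])           # true futures
    preds = np.empty(len(vecs))
    for i, v in enumerate(vecs):
        d = np.linalg.norm(vecs - v, axis=1)
        d[i] = np.inf                                  # exclude the point itself
        nn = np.argsort(d)[:n_neighbors]
        w = np.exp(-d[nn] / max(d[nn].min(), 1e-12))   # distance-weighted mean
        preds[i] = np.sum(w * targets[nn]) / w.sum()
    return float(np.corrcoef(preds, targets)[0, 1])

# Deterministic chaos (logistic map) vs. pure noise of the same length.
x = np.empty(500); x[0] = 0.4
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
noise = np.random.default_rng(0).standard_normal(500)
print(simplex_forecast(x), simplex_forecast(noise))  # high skill vs. ~none
```

The diagnostic in the paper goes further, tracking how skill decays with the prediction horizon: decay indicates chaos, uniformly poor skill indicates measurement noise.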
ERIC Educational Resources Information Center
Smalle, Eleonore H. M.; Muylle, Merel; Szmalec, Arnaud; Duyck, Wouter
2017-01-01
Speech errors typically respect the speaker's implicit knowledge of language-wide phonotactics (e.g., /t/ cannot be a syllable onset in the English language). Previous work demonstrated that adults can learn novel experimentally induced phonotactic constraints by producing syllable strings in which the allowable position of a phoneme depends on…
Cullen, Jared; Lobo, Charlene J; Ford, Michael J; Toth, Milos
2015-09-30
Electron-beam-induced deposition (EBID) is a direct-write chemical vapor deposition technique in which an electron beam is used for precursor dissociation. Here we show that Arrhenius analysis of the deposition rates of nanostructures grown by EBID can be used to deduce the diffusion energies and corresponding preexponential factors of EBID precursor molecules. We explain the limitations of this approach, define growth conditions needed to minimize errors, and explain why the errors increase systematically as EBID parameters diverge from ideal growth conditions. Under suitable deposition conditions, EBID can be used as a localized technique for analysis of adsorption barriers and prefactors.
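Arrhenius analysis of deposition rates reduces to a straight-line fit of ln(rate) against 1/T, whose slope gives the activation (diffusion) energy and whose intercept gives the preexponential factor. A minimal sketch with synthetic rates; the chosen energy and prefactor are invented, not EBID measurements:

```python
# Straight-line Arrhenius fit: ln(rate) = ln(A) - E / (kB * T). The energy
# and prefactor used to generate the synthetic rates are invented.
import numpy as np

KB = 8.617333262e-5   # Boltzmann constant in eV/K

def arrhenius_fit(temps_K, rates):
    """Fit ln(rate) vs 1/T; return (activation_energy_eV, prefactor)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(temps_K), np.log(rates), 1)
    return -slope * KB, float(np.exp(intercept))

T = np.array([250.0, 275.0, 300.0, 325.0, 350.0])   # temperatures in K
rates = 1e6 * np.exp(-0.5 / (KB * T))               # E = 0.5 eV, A = 1e6
E_act, prefactor = arrhenius_fit(T, rates)
print(E_act, prefactor)   # recovers ~0.5 eV and ~1e6
```

The paper's point is that this recovery only stays reliable under growth conditions where the measured rate is actually diffusion-limited; outside that regime the fitted energy is systematically biased.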
Schaeffel, F; Bartmann, M; Hagel, G; Zrenner, E
1995-05-01
We have found that development of both deprivation-induced and lens-induced refractive errors in chickens implicates changes of the diurnal growth rhythms in the eye (Fig. 1). Because the major diurnal oscillator in the eye is expressed by the retinal dopamine/melatonin system, effects of drugs were studied that change retinal dopamine and/or serotonin levels. Vehicle-injected and drug-injected eyes treated with either translucent occluders or lenses were compared to focus on visual growth mechanisms. Retinal biogenic amine levels were measured at the end of each experiment by HPLC with electrochemical detection. For reserpine (which was most extensively studied) electroretinograms were recorded to test retinal function [Fig. 3 (C)] and catecholaminergic and serotonergic retinal neurons were observed by immunohistochemical labelling [Fig. 3(D)]. Deprivation myopia was readily altered by a single intravitreal injection of drugs that affected retinal dopamine or serotonin levels; reserpine which depleted both serotonin and dopamine stores blocked deprivation myopia very efficiently [Fig. 3(A)], whereas 5,7-dihydroxy-tryptamine (5,7-DHT), sulpiride, melatonin and Sch23390 could enhance deprivation myopia (Table 1, Fig. 5). In contrast to other procedures that were previously employed to block deprivation myopia (6-OHDA injections or continuous light) and which had no significant effect on lens-induced refractive errors, reserpine also affected lens-induced changes in eye growth. At lower doses, the effect was selective for negative lenses (Fig. 4). We found that the individual retinal dopamine levels were very variable among individuals but were correlated in both eyes of an animal; a similar variability was previously found with regard to deprivation myopia. 
To test a hypothesis raised by Li, Schaeffel, Kohler and Zrenner [(1992) Visual Neuroscience, 9, 483-492] that individual dopamine levels might determine the susceptibility to deprivation myopia, refractive errors were correlated with dopamine levels in occluded and untreated eyes of monocularly deprived chickens (Fig. 6). The hypothesis was rejected. Although it has been previously found that the static retinal tissue levels of dopamine are not altered by lens treatment, subtle changes in the ratio of DOPAC to dopamine were detected in the present study. The result indicates that retinal dopamine might be implicated also in lens-induced growth changes. Surprisingly, the changes were in the opposite direction for deprivation and negative lenses although both produce myopia. Currently, there is evidence that deprivation-induced and lens-induced refractive errors in chicks are produced by different mechanisms. However, findings (1), (3) and (5) suggest that there may also be common features. Although it has not yet been resolved how both mechanisms merge to produce the appropriate axial eye growth rates, we propose a scheme (Fig. 7).
A crowdsourcing workflow for extracting chemical-induced disease relations from free text
Li, Tong Shu; Bravo, Àlex; Furlong, Laura I.; Good, Benjamin M.; Su, Andrew I.
2016-01-01
Relations between chemicals and diseases are one of the most queried biomedical interactions. Although expert manual curation is the standard method for extracting these relations from the literature, it is expensive and impractical to apply to large numbers of documents, and therefore alternative methods are required. We describe here a crowdsourcing workflow for extracting chemical-induced disease relations from free text as part of the BioCreative V Chemical Disease Relation challenge. Five non-expert workers on the CrowdFlower platform were shown each potential chemical-induced disease relation highlighted in the original source text and asked to make binary judgments about whether the text supported the relation. Worker responses were aggregated through voting, and relations receiving four or more votes were predicted as true. On the official evaluation dataset of 500 PubMed abstracts, the crowd attained a 0.505 F-score (0.475 precision, 0.540 recall), with a maximum theoretical recall of 0.751 due to errors with named entity recognition. The total crowdsourcing cost was $1290.67 ($2.58 per abstract) and took a total of 7 h. A qualitative error analysis revealed that 46.66% of sampled errors were due to task limitations and gold standard errors, indicating that performance can still be improved. All code and results are publicly available at https://github.com/SuLab/crowd_cid_relex Database URL: https://github.com/SuLab/crowd_cid_relex PMID:27087308
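The vote aggregation and scoring described above can be sketched in a few lines: five binary judgments per candidate relation, a relation predicted true at four or more positive votes, and precision/recall/F-score against a gold set. The vote data and relation ids below are invented:

```python
# Sketch of the crowd aggregation and scoring described above. Votes and
# relation ids are invented for illustration.
def aggregate(votes, threshold=4):
    """True when the number of positive votes reaches the threshold."""
    return sum(votes) >= threshold

def precision_recall_f(predicted, gold):
    tp = len(predicted & gold)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

votes = {"r1": [1, 1, 1, 1, 0],   # 4/5 positive -> predicted true
         "r2": [1, 0, 1, 0, 0],   # 2/5 -> predicted false
         "r3": [1, 1, 1, 1, 1]}   # 5/5 -> predicted true
predicted = {rid for rid, v in votes.items() if aggregate(v)}
gold = {"r1", "r2"}               # hypothetical gold standard
print(predicted, precision_recall_f(predicted, gold))
```

Note the recall ceiling in the abstract: relations missed by named entity recognition never reach the workers, so they can never be voted true.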
Ruschke, Stefan; Eggers, Holger; Kooijman, Hendrik; Diefenbach, Maximilian N; Baum, Thomas; Haase, Axel; Rummeny, Ernst J; Hu, Houchun H; Karampinos, Dimitrios C
2017-09-01
To propose a phase error correction scheme for monopolar time-interleaved multi-echo gradient echo water-fat imaging that allows accurate and robust complex-based quantification of the proton density fat fraction (PDFF). A three-step phase correction scheme is proposed to address a) a phase term induced by echo misalignments that can be measured with a reference scan using reversed readout polarity, b) a phase term induced by the concomitant gradient field that can be predicted from the gradient waveforms, and c) a phase offset between time-interleaved echo trains. Simulations were carried out to characterize the concomitant gradient field-induced PDFF bias and the performance estimating the phase offset between time-interleaved echo trains. Phantom experiments and in vivo liver and thigh imaging were performed to study the relevance of each of the three phase correction steps on PDFF accuracy and robustness. The simulation, phantom, and in vivo results showed, in agreement with theory, an echo-time-dependent PDFF bias introduced by the three phase error sources. The proposed phase correction scheme was found to provide accurate PDFF estimation independent of the employed echo time combination. Complex-based time-interleaved water-fat imaging was found to give accurate and robust PDFF measurements after applying the proposed phase error correction scheme. Magn Reson Med 78:984-996, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Takahashi, Yuji K.; Langdon, Angela J.; Niv, Yael; Schoenbaum, Geoffrey
2016-01-01
Summary Dopamine neurons signal reward prediction errors. This requires accurate reward predictions. It has been suggested that the ventral striatum provides these predictions. Here we tested this hypothesis by recording from putative dopamine neurons in the VTA of rats performing a task in which prediction errors were induced by shifting reward timing or number. In controls, the neurons exhibited error signals in response to both manipulations. However, dopamine neurons in rats with ipsilateral ventral striatal lesions exhibited errors only to changes in number and failed to respond to changes in timing of reward. These results, supported by computational modeling, indicate that predictions about the temporal specificity and the number of expected rewards are dissociable, and that dopaminergic prediction-error signals rely on the ventral striatum for the former but not the latter. PMID:27292535
A new methodology for vibration error compensation of optical encoders.
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in the position accuracy as the measurement signals depart from ideal conditions. In case the encoder is working under vibrations, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter to be added to graduation, system and installation errors. Performance can be improved by different techniques that compensate for the error by processing the measurement signals. In this work a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, giving as a result a compensation procedure in which a higher accuracy of the sensor is obtained.
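A stripped-down version of the signal-correction idea: quadrature encoder signals ideally trace a circle in the Lissajous plane, and offsets or gain mismatch deform it into an ellipse; normalizing the fitted offsets and amplitudes restores the circle before the phase is computed. This sketch omits the quadrature-phase-error term and the look-up-table step of the full methodology, and the distortion values are invented:

```python
# Simplified Lissajous-based correction: remove offsets and equalize the
# amplitudes of the two quadrature signals so they lie on a unit circle.
# The quadrature-phase-error term and look-up table of the full method are
# omitted; the distortion values are invented.
import numpy as np

def correct_quadrature(u, v):
    """Map distorted quadrature signals (u, v) back onto a unit circle."""
    p, q = (u.max() + u.min()) / 2, (v.max() + v.min()) / 2   # offsets
    r, s = (u.max() - u.min()) / 2, (v.max() - v.min()) / 2   # amplitudes
    return (u - p) / r, (v - q) / s

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)        # true position
u = 0.3 + 1.2 * np.cos(theta)    # offset and gain error on one channel
v = -0.1 + 0.8 * np.sin(theta)   # different offset and gain on the other
uc, vc = correct_quadrature(u, v)
position = np.unwrap(np.arctan2(vc, uc))
print(float(np.abs(position - theta).max()))   # ~0 after correction
```

Without the correction, the phase computed from the raw (u, v) pair would carry a periodic error that the counter registers as a spurious displacement.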
Research on effects of phase error in phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Wang, Hongjun; Wang, Zhao; Zhao, Hong; Tian, Ailing; Liu, Bingcai
2007-12-01
In phase-shifting interferometry, the phase-shifting error from the phase shifter is the main factor that directly affects the measurement accuracy of the interferometer. In this paper, the sources and types of phase-shifting error are introduced, and some methods to eliminate these errors are reviewed. Based on the theory of phase-shifting interferometry, the effects of phase-shifting error are analyzed in detail. A Liquid Crystal Display (LCD) used as a phase shifter has the advantage that the phase shift can be controlled digitally, without any mechanical moving or rotating element: the phase shift in the measuring system is induced by changing the coded image displayed on the LCD. The phase-modulation characteristic of the LCD is analyzed theoretically and tested. Based on the Fourier transform, a model of the effect of the phase error arising from the LCD is established for four-step phase-shifting interferometry, and the error range is obtained. To reduce the error, a new error-compensation algorithm is put forward, in which the error is obtained by processing the interferograms; the interferograms can then be compensated, and the measurement results obtained from the four-step phase-shifting interferograms. Theoretical analysis and simulation results demonstrate the feasibility of this approach to improve measurement accuracy.
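For reference, the four-step algorithm the abstract analyzes recovers the wrapped phase from four frames shifted by pi/2 as phi = atan2(I4 - I2, I1 - I3); with ideal shifts the recovery is exact, and it is departures from the ideal pi/2 steps that produce the errors studied. The fringe parameters in this sketch are illustrative:

```python
# The standard four-step algorithm: with phase shifts of 0, pi/2, pi and
# 3*pi/2, the wrapped phase is phi = atan2(I4 - I2, I1 - I3). Fringe
# background, modulation and the true phase map here are illustrative.
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four pi/2-shifted interferogram frames."""
    return np.arctan2(I4 - I2, I1 - I3)

x = np.linspace(-1.0, 1.0, 256)
phi = 2.0 * x                      # true phase in radians, within (-pi, pi)
a, b = 1.0, 0.5                    # background and modulation
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_hat = four_step_phase(*frames)
print(float(np.abs(phi_hat - phi).max()))   # ~0: exact with ideal shifts
```

If each shift carries a calibration error, the recovered phase acquires a periodic error at twice the fringe frequency, which is what compensation algorithms like the one above target.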
Mismeasurement and the resonance of strong confounders: correlated errors.
Marshall, J R; Hastrup, J L; Ross, J S
1999-07-01
Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
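A small Monte-Carlo sketch of the structure the paper describes, with invented parameters: a strong risk factor X, a truly null but correlated factor Z, and measurement errors on the two that are themselves correlated. Regressing the outcome on the mismeasured variables attenuates the X effect and leaves a nonzero, here even sign-reversed, coefficient on the null factor:

```python
# Monte-Carlo sketch of correlated measurement errors: X is a strong risk
# factor, Z a truly null factor correlated with X, and the errors on their
# measurements are themselves correlated. All coefficients are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
X = rng.normal(size=n)                           # true strong risk factor
Z = 0.5 * X + rng.normal(scale=0.9, size=n)      # correlated but null factor
Y = X + rng.normal(size=n)                       # outcome depends on X only

e_x = rng.normal(size=n)                         # measurement errors,
e_z = 0.6 * e_x + rng.normal(scale=0.8, size=n)  # correlated with each other
Xm, Zm = X + e_x, Z + e_z

# OLS of Y on (1, Xm, Zm). With perfect measurement the Z coefficient is 0;
# with correlated errors the X effect attenuates and Z picks up a spurious
# (here sign-reversed) coefficient.
A = np.column_stack([np.ones(n), Xm, Zm])
beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(beta[1], beta[2])   # ~0.52 instead of 1.0, and a nonzero Z effect
```

This reproduces the paper's qualitative point: error correlation can add to, diminish, or even reverse the bias in the coefficient of the inconsequential factor.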
NASA Astrophysics Data System (ADS)
Werdiger, M.; Arad, B.; Moshe, E.; Eliezer, S.
1995-02-01
A simple optical method for measurements of high-irradiance (3×10^13 W cm^-2) laser-induced shock waves is described. The shock wave velocity (~13 km s^-1) was measured with an error not exceeding 5%. The laser-induced one-to-two-dimensional (1D-to-2D) shock wave transition was studied.
Chronic low-dose ultraviolet-induced mutagenesis in nucleotide excision repair-deficient cells.
Haruta, Nami; Kubota, Yoshino; Hishida, Takashi
2012-09-01
UV radiation induces two major types of DNA lesions, cyclobutane pyrimidine dimers (CPDs) and 6-4 pyrimidine-pyrimidine photoproducts, which are both primarily repaired by nucleotide excision repair (NER). Here, we investigated how chronic low-dose UV (CLUV)-induced mutagenesis occurs in rad14Δ NER-deficient yeast cells, which lack the yeast orthologue of human xeroderma pigmentosum A (XPA). The results show that rad14Δ cells have a marked increase in CLUV-induced mutations, most of which are C→T transitions in the template strand for transcription. Unexpectedly, many of the CLUV-induced C→T mutations in rad14Δ cells are dependent on translesion synthesis (TLS) DNA polymerase η, encoded by RAD30, despite its previously established role in error-free TLS. Furthermore, we demonstrate that deamination of cytosine-containing CPDs contributes to CLUV-induced mutagenesis. Taken together, these results uncover a novel role for Polη in the induction of C→T transitions through deamination of cytosine-containing CPDs in CLUV-exposed NER deficient cells. More generally, our data suggest that Polη can act as both an error-free and a mutagenic DNA polymerase, depending on whether the NER pathway is available to efficiently repair damaged templates.
Moors, Pieter
2015-01-01
In a recent functional magnetic resonance imaging study, Kok and de Lange (2014) observed that BOLD activity for a Kanizsa illusory shape stimulus, in which pacmen-like inducers elicit an illusory shape percept, was either enhanced or suppressed relative to a nonillusory control configuration depending on whether the spatial profile of BOLD activity in early visual cortex was related to the illusory shape or the inducers, respectively. The authors argued that these findings fit well with the predictive coding framework, because top-down predictions related to the illusory shape are not met with bottom-up sensory input and hence the feedforward error signal is enhanced. Conversely, for the inducing elements, there is a match between top-down predictions and input, leading to a decrease in error. Rather than invoking predictive coding as the explanatory framework, the suppressive effect related to the inducers might be caused by neural adaptation to perceptually stable input due to the trial sequence used in the experiment.
NASA Technical Reports Server (NTRS)
Mishchenko, M. I.; Lacis, A. A.; Travis, L. D.
1994-01-01
Although neglecting polarization and replacing the rigorous vector radiative transfer equation by its approximate scalar counterpart has no physical background, it is a widely used simplification when the incident light is unpolarized and only the intensity of the reflected light is to be computed. We employ accurate vector and scalar multiple-scattering calculations to perform a systematic study of the errors induced by the neglect of polarization in radiance calculations for a homogeneous, plane-parallel Rayleigh-scattering atmosphere (with and without depolarization) above a Lambertian surface. Specifically, we calculate percent errors in the reflected intensity for various directions of light incidence and reflection, optical thicknesses of the atmosphere, single-scattering albedos, depolarization factors, and surface albedos. The numerical data displayed can be used to decide whether or not the scalar approximation may be employed depending on the parameters of the problem. We show that the errors decrease with increasing depolarization factor and/or increasing surface albedo. For conservative or nearly conservative scattering and small surface albedos, the errors are maximum at optical thicknesses of about 1. The calculated errors may be too large for some practical applications, and, therefore, rigorous vector calculations should be employed whenever possible. However, if approximate scalar calculations are used, we recommend to avoid geometries involving phase angles equal or close to 0 deg and 90 deg, where the errors are especially significant. We propose a theoretical explanation of the large vector/scalar differences in the case of Rayleigh scattering. According to this explanation, the differences are caused by the particular structure of the Rayleigh scattering matrix and come from lower-order (except first-order) light scattering paths involving right scattering angles and right-angle rotations of the scattering plane.
Analysis of the PLL phase error in presence of simulated ionospheric scintillation events
NASA Astrophysics Data System (ADS)
Forte, B.
2012-01-01
The functioning of standard phase-locked loops (PLLs), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation that holds in the presence of small phase errors. Such an approximation is reasonable in most propagation channels. However, in the presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then forced to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play they experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded in a commercial multi-constellation GNSS receiver were analyzed in the presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations, although cycle slips only occurred during signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1 C/A, GPS L2C, GPS L5 and Galileo L1). L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error for GPS receivers subject to scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, lock detector thresholds, possible cycle slips in the tracking PLL and the accuracy of the observables (i.e., the error propagation onto the observables stage).
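The breakdown of the linear PLL model described in this abstract is easy to reproduce numerically. The following sketch (not the receiver or loop tested in the paper; the phase-error values are arbitrary) compares an idealized sinusoidal phase-detector output with its small-angle linearization:

```python
import numpy as np

def phase_detector(err_rad):
    """Output of an idealized sinusoidal phase detector; the linear PLL
    model replaces sin(e) with e, which holds only for small errors."""
    return np.sin(err_rad)

# Relative error of the linearization grows quickly with phase error.
for e in (0.05, 0.5, 1.5):  # radians
    rel = abs(e - phase_detector(e)) / phase_detector(e)
    print(f"phase error {e:4.2f} rad -> linearization error {100 * rel:5.1f}%")
```

At a 1.5 rad phase error the small-angle model overpredicts the detector output by about 50%, so the loop's effective gain collapses exactly when deep scintillation fades drive the phase error large.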
Errors as a Means of Reducing Impulsive Food Choice.
Sellitto, Manuela; di Pellegrino, Giuseppe
2016-06-05
Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities.
NASA Astrophysics Data System (ADS)
Wozniak, Kaitlin T.; Germer, Thomas A.; Butler, Sam C.; Brooks, Daniel R.; Huxlin, Krystel R.; Ellis, Jonathan D.
2018-02-01
We present measurements of light scatter induced by a new ultrafast laser technique being developed for laser refractive correction in transparent ophthalmic materials such as cornea, contact lenses, and/or intraocular lenses. In this new technique, called intra-tissue refractive index shaping (IRIS), a 405 nm femtosecond laser is focused and scanned below the corneal surface, inducing a spatially-varying refractive index change that corrects vision errors. In contrast with traditional laser correction techniques, such as laser in-situ keratomileusis (LASIK) or photorefractive keratectomy (PRK), IRIS does not operate via photoablation, but rather changes the refractive index of transparent materials such as cornea and hydrogels. A concern with any laser eye correction technique is additional scatter induced by the process, which can adversely affect vision, especially at night. The goal of this investigation is to identify sources of scatter induced by IRIS and to mitigate possible effects on visual performance in ophthalmic applications. Preliminary light scattering measurements on patterns written into hydrogel showed four sources of scatter, differentiated by distinct behaviors: (1) scattering from scanned lines; (2) scattering from stitching errors, resulting from adjacent scanning fields not being aligned to one another; (3) diffraction from Fresnel zone discontinuities; and (4) long-period variations in the scans that created distinct diffraction peaks, likely due to inconsistent line spacing in the writing instrument. By knowing the nature of these different scattering errors, it will now be possible to modify and optimize the design of IRIS structures to mitigate potential deficits in visual performance in human clinical trials.
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system-level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system-level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
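The qualitative benefit described above can be illustrated with a toy link-budget calculation. This is not the paper's formulation: it assumes a log-normal irradiance fading model, a standard conditional bit-error expression for on-off keying in Gaussian noise, and hypothetical SNR and scintillation-index values, with adaptive optics represented simply as a reduction of the scintillation index:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)

def ber_ook(snr):
    """Conditional bit-error rate for on-off keying in Gaussian noise."""
    return 0.5 * erfc(np.sqrt(snr) / (2.0 * np.sqrt(2.0)))

def mean_ber(snr0, sigma_I2, n=200_000):
    """Average BER over log-normal irradiance fading with scintillation
    index sigma_I2 and unit mean irradiance; AO reduces sigma_I2."""
    sigma = np.sqrt(np.log(1.0 + sigma_I2))
    irradiance = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=n)
    return ber_ook(snr0 * irradiance).mean()

uncorrected = mean_ber(snr0=100.0, sigma_I2=0.5)
corrected = mean_ber(snr0=100.0, sigma_I2=0.05)  # after AO compensation
print(f"mean BER: {uncorrected:.2e} uncompensated, {corrected:.2e} with AO")
```

Even in this crude model, shrinking the scintillation index by an order of magnitude lowers the mean BER substantially, because the fading tail of the irradiance distribution dominates the average error rate.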
Radiation Tests on 2Gb NAND Flash Memories
NASA Technical Reports Server (NTRS)
Nguyen, Duc N.; Guertin, Steven M.; Patterson, J. D.
2006-01-01
We report on SEE and TID tests of highly scaled Samsung 2 Gbit NAND flash memories. Both in-situ and biased-interval irradiations were used to characterize the response to total accumulated dose. The radiation-induced failures can be categorized as follows: single-event-upset (SEU) read errors in biased and unbiased modes, write errors, and single-event functional interrupt (SEFI) failures.
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent
2016-04-01
Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
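The spatial SIMEX procedure proposed in the paper involves the Kriging exposure model, but the core simulation-extrapolation idea can be sketched for classical measurement error in a few lines. Everything below (the data-generating model, error variance, extrapolation levels, quadratic extrapolant) is a hypothetical illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# True exposure x, outcome y = b0 + b1*x + noise; we only observe
# w = x + u, with measurement-error variance tau2 (assumed known).
n, b1, tau2 = 5000, 2.0, 1.0
x = rng.normal(0, 1, n)
y = 1.0 + b1 * x + rng.normal(0, 1, n)
w = x + rng.normal(0, np.sqrt(tau2), n)

def slope(pred, resp):
    return np.polyfit(pred, resp, 1)[0]

naive = slope(w, y)  # attenuated toward zero by measurement error

# SIMEX: add extra error at levels lam, then extrapolate back to lam = -1.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([slope(w + rng.normal(0, np.sqrt(l * tau2), n), y)
                for _ in range(50)]) for l in lams]
coef = np.polyfit(lams, est, 2)       # quadratic extrapolant
simex = np.polyval(coef, -1.0)
print(f"naive slope {naive:.2f}, SIMEX slope {simex:.2f}, truth {b1:.2f}")
```

The quadratic extrapolant recovers only part of the attenuation here (the exact extrapolation function for this model is nonlinear), which is why SIMEX variants, including the spatial version, differ mainly in how the extrapolation step is specified.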
Borycki, Elizabeth M; Kushniruk, Andre W; Kuwata, Shigeki; Kannry, Joseph
2011-01-01
Electronic health records (EHRs) promise to improve and streamline healthcare through electronic entry and retrieval of patient data. Furthermore, based on a number of studies showing their positive benefits, they promise to reduce medical error and make healthcare safer. However, a growing body of literature has clearly documented that if EHRs are not designed properly, with usability as an important design goal, their deployment has the potential to actually increase medical error rather than reduce it. In this paper we describe our approach to engineering (and re-engineering) EHRs in order to increase their beneficial potential while at the same time improving their safety. The approach involves integrating the methods of usability analysis with video analysis of end users interacting with EHR systems, and extends the evaluation of EHR usability to include assessing the impact of these systems on work practices. Using clinical simulations, we analyze human-computer interaction in real healthcare settings (in a portable, low-cost and high-fidelity manner) and include both artificial and naturalistic data collection to identify potential usability problems and sources of technology-induced error prior to widespread system release. Two case studies where the methods we have developed and refined have been applied at different levels of user-computer interaction are described.
Radiation-induced refraction artifacts in the optical CT readout of polymer gel dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Warren G.; Jirasek, Andrew, E-mail: jirasek@uvic.ca; Wells, Derek M.
2014-11-01
Purpose: The objective of this work is to demonstrate imaging artifacts that can occur during the optical computed tomography (CT) scanning of polymer gel dosimeters due to radiation-induced refractive index (RI) changes in polyacrylamide gels. Methods: A 1 L cylindrical polyacrylamide gel dosimeter was irradiated with 3 × 3 cm² square beams of 6 MV photons. A prototype fan-beam optical CT scanner was used to image the dosimeter. Investigative optical CT scans were performed to examine two types of rayline bending: (i) bending within the plane of the fan-beam and (ii) bending out of the plane of the fan-beam. To address structured errors, an iterative Savitzky–Golay (ISG) filtering routine was designed to filter 2D projections in sinogram space. For comparison, 2D projections were alternatively filtered using an adaptive-mean (AM) filter. Results: In-plane rayline bending was most notably observed in optical CT projections where rays of the fan-beam confronted a sustained dose gradient that was perpendicular to their trajectory but within the fan-beam plane. These errors caused distinct streaking artifacts in image reconstructions due to the refraction of higher intensity rays toward more opaque regions of the dosimeter. Out-of-plane rayline bending was observed in slices of the dosimeter that featured dose gradients perpendicular to the plane of the fan-beam. These errors caused widespread, severe overestimations of dose in image reconstructions due to the higher-than-actual opacity that is perceived by the scanner when light is bent off of the detector array. The ISG filtering routine outperformed AM filtering for both in-plane and out-of-plane rayline errors caused by radiation-induced RI changes. For in-plane rayline errors, streaks in an irradiated region (>7 Gy) were as high as 49% for unfiltered data, 14% for AM, and 6% for ISG.
For out-of-plane rayline errors, overestimations of dose in a low-dose region (∼50 cGy) were as high as 13 Gy for unfiltered data, 10 Gy for AM, and 3.1 Gy for ISG. The ISG routine also addressed unrelated artifacts that previously needed to be manually removed in sinogram space. However, the ISG routine blurred reconstructions, causing losses in spatial resolution of ∼5 mm in the plane of the fan-beam and ∼8 mm perpendicular to the fan-beam. Conclusions: This paper reveals a new category of imaging artifacts that can affect the optical CT readout of polyacrylamide gel dosimeters. Investigative scans show that radiation-induced RI changes can cause significant rayline errors when rays confront a prolonged dose gradient that runs perpendicular to their trajectory. In fan-beam optical CT, these errors manifested in two ways: (1) distinct streaking artifacts caused by in-plane rayline bending and (2) severe overestimations of opacity caused by rays bending out of the fan-beam plane and missing the detector array. Although the ISG filtering routine mitigated these errors better than an adaptive-mean filtering routine, it caused unacceptable losses in spatial resolution.
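The abstract describes the ISG routine only at a high level. Under the assumption that it iterates a Savitzky-Golay fit over each projection and replaces outlying samples with the fitted value, a minimal despiking sketch might look like the following (the projection data, spike, and thresholds are all hypothetical):

```python
import numpy as np
from scipy.signal import savgol_filter

def isg_filter(projection, window=11, polyorder=3, n_sigma=3.0, max_iter=10):
    """Iterative Savitzky-Golay despiking: fit a smooth SG curve, replace
    samples that deviate by more than n_sigma residual standard deviations
    with the fitted value, and repeat until no outliers remain."""
    p = projection.astype(float).copy()
    for _ in range(max_iter):
        smooth = savgol_filter(p, window, polyorder)
        resid = p - smooth
        bad = np.abs(resid) > n_sigma * resid.std()
        if not bad.any():
            break
        p[bad] = smooth[bad]
    return p

# Hypothetical 1D projection: a smooth attenuation profile with noise
# plus one spike mimicking a refraction-induced rayline error.
x = np.linspace(0, 1, 200)
clean = np.exp(-((x - 0.5) / 0.15) ** 2)
noisy = clean + 0.01 * np.random.default_rng(3).normal(size=x.size)
noisy[60] += 0.8
filtered = isg_filter(noisy)
print(f"spike residual: {abs(filtered[60] - clean[60]):.3f}")
```

Each pass shrinks the spike toward the local polynomial fit, which is consistent with the reported trade-off: structured errors are suppressed at the cost of some smoothing, i.e. lost spatial resolution.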
A Dynamic Attitude Measurement System Based on LINS
Li, Hanzhou; Pan, Quan; Wang, Xiaoxu; Zhang, Juanni; Li, Jiang; Jiang, Xiangjun
2014-01-01
A dynamic attitude measurement system (DAMS) is developed based on a laser inertial navigation system (LINS). Three sources of dynamic attitude measurement error in LINS are analyzed: dynamic error, time synchronization and phase lag. An optimal coning-error compensation algorithm is used to reduce coning errors, and two-axis wobbling verification experiments are presented. The tests indicate that attitude accuracy is improved two-fold by the algorithm. To decrease coning errors further, the attitude updating frequency is increased from 200 Hz to 2000 Hz. At the same time, a novel finite impulse response (FIR) filter with three notches is designed to filter out the dither frequency of the ring laser gyro (RLG). Comparison tests suggest that the new filter is five times more effective than the old one. The paper shows that the phase-frequency characteristics of the FIR filter and the first-order holder of the navigation computer are the main sources of phase lag in LINS. A formula to calculate the LINS attitude phase lag is introduced, and expressions for the dynamic attitude errors induced by phase lag are derived. The paper proposes a novel synchronization mechanism that simultaneously solves the problems of dynamic test synchronization and phase compensation. A single-axis turntable and a laser interferometer are applied to verify the synchronization mechanism. The experimental results show that the theoretically calculated values of phase lag, and of the attitude error induced by phase lag, both match the test data well. A block diagram of the DAMS and photographs of the hardware are presented. The final experiments demonstrate that the real-time attitude measurement accuracy of DAMS can reach 20″ (1σ) with a synchronization error of less than 0.2 ms under three-axis wobbling for 10 min. PMID:25177802
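The abstract does not give the filter coefficients. One simple way to realize an FIR filter with three exact notches is to cascade second-order FIR sections, each placing a pair of unit-circle zeros at one frequency; the sketch below uses hypothetical dither frequencies, with only the 2000 Hz rate taken from the abstract's attitude update rate:

```python
import numpy as np

def fir_notch_cascade(notch_freqs_hz, fs_hz):
    """Cascade of second-order FIR sections 1 - 2*cos(w0)*z^-1 + z^-2;
    each section nulls its frequency exactly (zeros on the unit circle)."""
    h = np.array([1.0])
    for f0 in notch_freqs_hz:
        w0 = 2.0 * np.pi * f0 / fs_hz
        h = np.convolve(h, [1.0, -2.0 * np.cos(w0), 1.0])
    return h / h.sum()  # normalize for unity gain at DC

def gain(h, f_hz, fs_hz):
    """Magnitude of the FIR frequency response at f_hz."""
    n = np.arange(len(h))
    return abs(np.sum(h * np.exp(-2j * np.pi * f_hz / fs_hz * n)))

fs = 2000.0                      # attitude update rate, Hz
notches = [400.0, 450.0, 500.0]  # hypothetical RLG dither lines
h = fir_notch_cascade(notches, fs)
for f0 in notches:
    print(f"{f0:.0f} Hz gain: {gain(h, f0, fs):.2e}")

# The symmetric (linear-phase) filter delays every component by
# (len(h) - 1) / 2 samples: a constant group delay that contributes
# to the LINS attitude phase lag analyzed in the paper.
print("group delay:", (len(h) - 1) / 2 / fs, "s")
```

Because the cascade is linear-phase, its contribution to the attitude phase lag is a pure, frequency-independent delay, which is exactly the kind of term the paper's phase-lag formula and synchronization mechanism must account for.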
A steep peripheral ring in irregular cornea topography, real or an instrument error?
Galindo-Ferreiro, Alicia; Galvez-Ruiz, Alberto; Schellini, Silvana A; Galindo-Alonso, Julio
2016-01-01
To demonstrate that the steep peripheral ring (red zone) on corneal topography after myopic laser in situ keratomileusis (LASIK) may be due to instrument error and not always to a real increase in corneal curvature. A spherical model of the corneal surface and modified topography software were used to analyze the cause of an error due to instrument design. This study involved modification of the software of a commercially available topographer. A small modification of the topography image results in a red zone on the corneal topography color map. Corneal modeling indicates that the red zone could be an artifact due to an instrument-induced error. The steep curvature change after LASIK signified by the red zone could also be an error due to the plotting algorithms of the corneal topographer, rather than a real increase in curvature.
Study on optical 3D angular deformations measurement
NASA Astrophysics Data System (ADS)
Gao, Yang; Wang, Xingshu; Huang, Zongsheng; Yang, Jinliang
2013-12-01
Three-dimensional angular deformations are inevitable when ships are at sea, owing to changes in environmental temperature and external stresses. The measurement of 3D angular deformations is one of the most critical and difficult issues in the naval and shipbuilding industries worldwide. In this paper, we propose an optical method to measure 3D ship angular deformations and discuss the measurement errors in detail. Theoretical analysis shows that the measured errors of the pitching and yawing deformations are induced by the installation errors of the image aperture, and the measured error of the rolling deformation depends on the subpixel location algorithm used in image processing. The analysis indicates that the measurement errors of the proposed optical method are on the order of arc-seconds when careful installation and precise image processing are both achieved.
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
Learning-Induced Plasticity in Medial Prefrontal Cortex Predicts Preference Malleability
Garvert, Mona M.; Moutoussis, Michael; Kurth-Nelson, Zeb; Behrens, Timothy E.J.; Dolan, Raymond J.
2015-01-01
Learning induces plasticity in neuronal networks. As neuronal populations contribute to multiple representations, we reasoned plasticity in one representation might influence others. We used human fMRI repetition suppression to show that plasticity induced by learning another individual’s values impacts upon a value representation for oneself in medial prefrontal cortex (mPFC), a plasticity also evident behaviorally in a preference shift. We show this plasticity is driven by a striatal “prediction error,” signaling the discrepancy between the other’s choice and a subject’s own preferences. Thus, our data highlight that mPFC encodes agent-independent representations of subjective value, such that prediction errors simultaneously update multiple agents’ value representations. As the resulting change in representational similarity predicts interindividual differences in the malleability of subjective preferences, our findings shed mechanistic light on complex human processes such as the powerful influence of social interaction on beliefs and preferences. PMID:25611512
Tabernero, Juan; Vazquez, Daniel; Seidemann, Anne; Uttenweiler, Dietmar; Schaeffel, Frank
2009-08-01
The recent observation that central refractive development might be controlled by the refractive errors in the periphery, also in primates, revived the interest in the peripheral optics of the eye. We optimized an eccentric photorefractor to measure the peripheral refractive error in the vertical pupil meridian over the horizontal visual field (from -45 degrees to 45 degrees ), with and without myopic spectacle correction. Furthermore, a newly designed radial refractive gradient lens (RRG lens) that induces increasing myopia in all radial directions from the center was tested. We found that for the geometry of our measurement setup conventional spectacles induced significant relative hyperopia in the periphery, although its magnitude varied greatly among different spectacle designs and subjects. In contrast, the newly designed RRG lens induced relative peripheral myopia. These results are of interest to analyze the effect that different optical corrections might have on the emmetropization process.
Development and Characterization of a Low-Pressure Calibration System for Hypersonic Wind Tunnels
NASA Technical Reports Server (NTRS)
Green, Del L.; Everhart, Joel L.; Rhode, Matthew N.
2004-01-01
Minimization of uncertainty is essential for accurate ESP measurements at the very low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources requires a well-defined and controlled calibration method. A calibration system has been constructed, and environmental-control software was developed to automate experimentation and eliminate human-induced error sources. The initial stability study of the calibration system shows a high degree of measurement accuracy and precision in temperature and pressure control. Control-manometer drift and reference-pressure instabilities introduce uncertainty into the repeatability of voltage responses measured from the PSI System 8400 between calibrations. Repeatability can be improved through software programming and further experimentation.
Reduction of the Misinformation Effect by Arousal Induced after Learning
ERIC Educational Resources Information Center
English, Shaun M.; Nielson, Kristy A.
2010-01-01
Misinformation introduced after events have already occurred causes errors in later retrieval. Based on literature showing that arousal induced after learning enhances delayed retrieval, we investigated whether post-learning arousal can reduce the misinformation effect. 251 participants viewed four short film clips, each followed by a retention…
Foley, Mary Ann; Foy, Jeffrey; Schlemmer, Emily; Belser-Ehrlich, Janna
2010-11-01
Imagery encoding effects on source-monitoring errors were explored using the Deese-Roediger-McDermott paradigm in two experiments. While viewing thematically related lists embedded in mixed picture/word presentations, participants were asked to generate images of objects or words (Experiment 1) or to simply name the items (Experiment 2). An encoding task intended to induce spontaneous images served as a control for the explicit imagery instruction conditions (Experiment 1). On the picture/word source-monitoring tests, participants were much more likely to report "seeing" a picture of an item presented as a word than the converse particularly when images were induced spontaneously. However, this picture misattribution error was reversed after generating images of words (Experiment 1) and was eliminated after simply labelling the items (Experiment 2). Thus source misattributions were sensitive to the processes giving rise to imagery experiences (spontaneous vs deliberate), the kinds of images generated (object vs word images), and the ways in which materials were presented (as pictures vs words).
An adaptive optics system for solid-state laser systems used in inertial confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salmon, J.T.; Bliss, E.S.; Byrd, J.L.
1995-09-17
Using adaptive optics the authors have obtained nearly diffraction-limited 5 kJ, 3 nsec output pulses at 1.053 µm from the Beamlet demonstration system for the National Ignition Facility (NIF). The peak Strehl ratio was improved from 0.009 to 0.50, as estimated from measured wavefront errors. They have also measured the relaxation of the thermally induced aberrations in the main beam line over a period of 4.5 hours. Peak-to-valley aberrations range from 6.8 waves at 1.053 µm within 30 minutes after a full system shot to 3.9 waves after 4.5 hours. The adaptive optics system must have enough range to correct accumulated thermal aberrations from several shots in addition to the immediate shot-induced error. Accumulated wavefront errors in the beam line will affect both the design of the adaptive optics system for NIF and the performance of that system.
Justification of Estimates for Fiscal Year 1983 Submitted to Congress.
1982-02-01
hierarchies to aid software production; completion of the components of an adaptive suspension vehicle including a storage energy unit, hydraulics, laser...and corrosion (long storage times), and radiation-induced breakdown. Solid-lubricated main engine bearings for cruise missile engines would offer...environments will cause "soft errors" (computational and memory storage errors) in advanced microelectronic circuits. Research on high-speed, low-power
Comment on "Infants' perseverative search errors are induced by pragmatic misinterpretation".
Spencer, John P; Dineva, Evelina; Smith, Linda B
2009-09-25
Topál et al. (Reports, 26 September 2008, p. 1831) proposed that infants' perseverative search errors can be explained by ostensive cues from the experimenter. We use the dynamic field theory to test the proposal that infants encode locations more weakly when social cues are present. Quantitative simulations show that this account explains infants' performance without recourse to the theory of natural pedagogy.
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)
2002-01-01
One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine repositories of human factors data to identify possible relationships between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.
Porous plug for reducing orifice induced pressure error in airfoils
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B. (Inventor); Gloss, Blair B. (Inventor); Eves, John W. (Inventor); Stack, John P. (Inventor)
1988-01-01
A porous plug is provided for the reduction or elimination of positive error caused by the orifice during static pressure measurements of airfoils. The porous plug is press fitted into the orifice, thereby preventing the error caused either by fluid flow turning into the exposed orifice or by the fluid flow stagnating at the downstream edge of the orifice. In addition, the porous plug is made flush with the outer surface of the airfoil, by filing and polishing, to provide a smooth surface which alleviates the error caused by imperfections in the orifice. The porous plug is preferably made of sintered metal, which allows air to pass through the pores, so that the static pressure measurements can be made by remote transducers.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important. We propose to develop theories and methodologies that automatically provide quantitative information about the reliability of a numerical simulation, by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that control the error and uncertainty during the simulation, so that its reliability can be improved.
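One standard way to estimate the numerical approximation error of the kind described is Richardson extrapolation across two grid levels; the sketch below is a generic illustration of that idea, not the authors' methodology.

```python
import math

def dfdx(f, x, h):
    # Second-order central difference (discretization order p = 2)
    return (f(x + h) - f(x - h)) / (2 * h)

x, p = 1.0, 2
coarse = dfdx(math.sin, x, 0.2)   # step 2h
fine = dfdx(math.sin, x, 0.1)     # step h

# Richardson extrapolation: two grid levels give an estimate of the
# discretization error of the fine solution, and a higher-order value.
error_estimate = (fine - coarse) / (2**p - 1)
extrapolated = fine + error_estimate

exact = math.cos(1.0)
print(abs(fine - exact) > abs(extrapolated - exact))
```

The same two-level comparison underlies grid-convergence studies used to verify simulation accuracy.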
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
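The averaging over negative-exponential turbulence fading can be illustrated with a minimal Monte Carlo sketch; the conditional bit-error model Q(sqrt(SNR)·I) and the idealized equal-gain combining below are simplifying assumptions, not the Letter's exact system model.

```python
import math, random

def q_func(x):
    # Gaussian Q-function via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def avg_bep(snr, n_links, trials=200_000, seed=1):
    # Monte Carlo average of the conditional bit-error probability
    # Q(sqrt(snr) * I) over negative-exponential (unit-mean) fading;
    # equal-gain combining idealized as averaging the link irradiances.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        irr = sum(rng.expovariate(1.0) for _ in range(n_links)) / n_links
        total += q_func(math.sqrt(snr) * irr)
    return total / trials

bep_siso = avg_bep(10.0, n_links=1)
bep_mimo = avg_bep(10.0, n_links=4)
print(bep_siso > bep_mimo)  # spatial diversity averages out deep fades
```

Monte Carlo averages like this are what the paper's closed-form approximation replaces.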
Cleared for the visual approach: Human factor problems in air carrier operations
NASA Technical Reports Server (NTRS)
Monan, W. P.
1983-01-01
In the study described herein, a set of 353 ASRS reports of unique aviation occurrences significantly involving visual approaches was examined to identify hazards and pitfalls embedded in the visual approach procedure and to consider operational practices that might help avoid future mishaps. Analysis of the report set identified nine aspects of the visual approach procedure that appeared to be predisposing conditions for inducing or exacerbating the effects of operational errors by flight crew members or controllers. Predisposing conditions, errors, and operational consequences of the errors are discussed. In summary, operational policies that might mitigate the problems are examined.
Liu, Yan; Wang, Yuexin; Lv, Huibin; Jiang, Xiaodan; Zhang, Mingzhou; Li, Xuemin
2017-01-01
To investigate the efficacy of the α-adrenergic agonist brimonidine, either alone or combined with pirenzepine, for inhibiting progressing myopia in a guinea pig model of lens-induced myopia. Thirty-six guinea pigs were randomly divided into six groups: Group A received 2% pirenzepine, Group B received 0.2% brimonidine, Group C received 0.1% brimonidine, Group D received 2% pirenzepine + 0.2% brimonidine, Group E received 2% pirenzepine + 0.1% brimonidine, and Group F received the medium. Myopia was induced in the right eyes of all guinea pigs using polymethyl methacrylate (PMMA) lenses for 3 weeks. Eye drops were administered accordingly. Intraocular pressure was measured every day. Refractive error and axial length measurements were performed once a week. The eyeballs were enucleated for hematoxylin and eosin (H&E) and Van Gieson (VG) staining at the end of the study. The lens-induced myopia model was established after 3 weeks. Treatment with 0.1% brimonidine alone and 0.2% brimonidine alone was capable of inhibiting progressing myopia, as shown by the better refractive error (p=0.024; p=0.006) and shorter axial length (p=0.005; p=0.0017). Treatment with 0.1% brimonidine and 0.2% brimonidine combined with 2% pirenzepine was also effective in suppressing progression of refractive error (p=0.016; p=0.0006) and axial length (p=0.017; p=0.0004). The thickness of the sclera was kept stable in all groups except Group F, in which the sclera was much thinner in the lens-induced myopic eyes compared to the control eyes. Treatment with 0.1% brimonidine alone and 0.2% brimonidine alone, as well as combined with 2% pirenzepine, was effective in inhibiting progressing myopia. The results indicate that intraocular pressure elevation is possibly a promising mechanism and potential treatment target for progressing myopia.
DNA replication error-induced extinction of diploid yeast.
Herr, Alan J; Kennedy, Scott R; Knowels, Gary M; Schultz, Eric M; Preston, Bradley D
2014-03-01
Genetic defects in DNA polymerase accuracy, proofreading, or mismatch repair (MMR) induce mutator phenotypes that accelerate adaptation of microbes and tumor cells. Certain combinations of mutator alleles synergistically increase mutation rates to levels that drive extinction of haploid cells. The maximum tolerated mutation rate of diploid cells is unknown. Here, we define the threshold for replication error-induced extinction (EEX) of diploid Saccharomyces cerevisiae. Double-mutant pol3 alleles that carry mutations for defective DNA polymerase-δ proofreading (pol3-01) and accuracy (pol3-L612M or pol3-L612G) induce strong mutator phenotypes in heterozygous diploids (POL3/pol3-01,L612M or POL3/pol3-01,L612G). Both pol3-01,L612M and pol3-01,L612G alleles are lethal in the homozygous state; cells with pol3-01,L612M divide up to 10 times before arresting at random stages in the cell cycle. Antimutator eex mutations in the pol3 alleles suppress this lethality (pol3-01,L612M,eex or pol3-01,L612G,eex). MMR defects synergize with pol3-01,L612M,eex and pol3-01,L612G,eex alleles, increasing mutation rates and impairing growth. Conversely, inactivation of the Dun1 S-phase checkpoint kinase suppresses strong pol3-01,L612M,eex and pol3-01,L612G,eex mutator phenotypes as well as the lethal pol3-01,L612M phenotype. Our results reveal that the lethal error threshold in diploids is 10 times higher than in haploids and likely determined by homozygous inactivation of essential genes. Pronounced loss of fitness occurs at mutation rates well below the lethal threshold, suggesting that mutator-driven cancers may be susceptible to drugs that exacerbate replication errors.
Nakayama, Masataka; Saito, Satoru
2015-08-01
The present study investigated principles of phonological planning, a common serial ordering mechanism for speech production and phonological short-term memory. Nakayama and Saito (2014) investigated these principles using a speech-error induction technique, in which participants were exposed to an auditory distractor word immediately before uttering a target word. They demonstrated within-word adjacent mora exchanges and serial position effects on error rates. These findings support, respectively, the temporal distance and the edge principles at a within-word level. As this previous study induced errors using word distractors created by exchanging adjacent morae in the target words, it is possible that the speech errors were expressions of lexical intrusions reflecting interactive activation of phonological and lexical/semantic representations. To eliminate this possibility, the present study used nonword distractors that had no lexical or semantic representations. This approach successfully replicated the error patterns identified in the earlier study, further confirming that the temporal distance and edge principles are organizing principles in phonological planning.
Generalized site occupancy models allowing for false positive and false negative errors
Royle, J. Andrew; Link, W.A.
2006-01-01
Site occupancy models have been developed that allow for imperfect species detection or "false negative" observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that "false positive" errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
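The two-component mixture idea can be sketched as follows; the simulation parameters and the coarse grid-search fit are illustrative assumptions, not the authors' avian analysis.

```python
import math, random
from collections import Counter

def binom_pmf(y, n, p):
    return math.comb(n, y) * p**y * (1 - p)**(n - y)

def simulate(n_sites=200, surveys=5, psi=0.5, p=0.6, f=0.05, seed=7):
    # Detections arise either from true occupancy (rate p) or from
    # false positives at unoccupied sites (rate f). Hypothetical values.
    rng = random.Random(seed)
    counts = []
    for _ in range(n_sites):
        rate = p if rng.random() < psi else f
        counts.append(sum(rng.random() < rate for _ in range(surveys)))
    return counts

def mixture_mle_psi(counts, surveys=5):
    # Two-component finite mixture likelihood for detection counts;
    # coarse grid-search MLE for the occupancy probability psi.
    hist = Counter(counts)
    grid = [i / 40 for i in range(1, 40)]
    pmf = {q: [binom_pmf(y, surveys, q) for y in range(surveys + 1)]
           for q in grid}
    best_psi, best_ll = None, -math.inf
    for psi in grid:
        for p in grid:
            for f in grid:
                if f >= p:   # identifiability: false positives rarer
                    continue
                ll = sum(n * math.log(psi * pmf[p][y] + (1 - psi) * pmf[f][y])
                         for y, n in hist.items())
                if ll > best_ll:
                    best_psi, best_ll = psi, ll
    return best_psi

counts = simulate()
naive = sum(y > 0 for y in counts) / len(counts)  # any detection = occupied
mle = mixture_mle_psi(counts)
print(naive, mle)
```

The naive estimator counts every detection as occupancy and so inherits the upward bias from false positives that the mixture model is designed to remove.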
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saeki, Hiroshi, E-mail: saeki@spring8.or.jp; Magome, Tamotsu
2014-10-06
To compensate for pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode ionization-gauge head with a correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This method improves measurement accuracy by reducing the pressure-measurement errors caused by electrons originating from the external environment and from the primary gauge filament, as influenced by the spatial conditions of the installed vacuum-gauge head. In the simulation experiment confirming the reduction of errors caused by the external environment, the pressure-measurement error using this method was less than approximately several percent in the pressure range from 10^-5 Pa to 10^-8 Pa. After this experiment, to confirm the performance in reducing the error caused by spatial conditions, an additional experiment was carried out using a sleeve and showed that the improved function was available.
A New Methodology for Vibration Error Compensation of Optical Encoders
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals depart from ideal conditions. If the encoder operates under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system, and installation errors. Performance can be improved by techniques that compensate the error through processing of the measurement signals. In this work a new “ad hoc” methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals, together with a look-up table, resulting in a compensation procedure that yields higher sensor accuracy. PMID:22666067
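The offset/amplitude part of such a Lissajous-based compensation can be sketched as follows; the distortion values are hypothetical, and a full implementation would also fit the phase error and use a look-up table as the paper describes.

```python
import math

def ang_err(est, true):
    # Angular difference wrapped to [-pi, pi)
    return abs((est - true + math.pi) % (2 * math.pi) - math.pi)

def compensate(sig_a, sig_b):
    # Fit the Lissajous figure of the quadrature pair: estimate DC offsets
    # and amplitudes from the signal extrema, then renormalize toward the
    # ideal unit circle. (A full ellipse fit would also correct phase.)
    off_a = (max(sig_a) + min(sig_a)) / 2
    off_b = (max(sig_b) + min(sig_b)) / 2
    amp_a = (max(sig_a) - min(sig_a)) / 2
    amp_b = (max(sig_b) - min(sig_b)) / 2
    return ([(a - off_a) / amp_a for a in sig_a],
            [(b - off_b) / amp_b for b in sig_b])

# Synthetic quadrature signals with hypothetical offset and gain errors
angles = [2 * math.pi * k / 1000 for k in range(1000)]
sig_a = [0.10 + 1.2 * math.cos(t) for t in angles]
sig_b = [-0.05 + 0.9 * math.sin(t) for t in angles]

ca, cb = compensate(sig_a, sig_b)
raw_err = max(ang_err(math.atan2(b, a), t)
              for a, b, t in zip(sig_a, sig_b, angles))
cmp_err = max(ang_err(math.atan2(b, a), t)
              for a, b, t in zip(ca, cb, angles))
print(cmp_err < raw_err)
```

Interpolating the angle from the corrected circle is what removes the position error that the distorted Lissajous figure would otherwise inject into the counter.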
Error-tradeoff and error-disturbance relations for incompatible quantum measurements.
Branciard, Cyril
2013-04-23
Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario.
NASA Technical Reports Server (NTRS)
Thurman, Sam W.; Estefan, Jeffrey A.
1991-01-01
Approximate analytical models are developed and used to construct an error covariance analysis for investigating the range of orbit determination accuracies that might be achieved for typical Mars approach trajectories. The sensitivity of orbit determination accuracy to beacon/orbiter position errors and to small spacecraft force modeling errors is also investigated. The results indicate that the orbit determination performance obtained from both Doppler and range data is a strong function of the inclination of the approach trajectory to the Martian equator for surface beacons and, for orbiters, of the inclination relative to the orbital plane. Large variations in performance were also observed for different approach velocity magnitudes; Doppler data in particular were found to perform poorly in determining the downtrack (along the direction of flight) component of spacecraft position. In addition, it was found that small spacecraft acceleration modeling errors can induce large errors in the Doppler-derived downtrack position estimate.
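The covariance-analysis machinery behind such studies can be illustrated with a minimal linearized least-squares sketch; the two-beacon geometry is hypothetical and only shows how measurement geometry drives which position components are poorly determined.

```python
import math

def covariance_2x2(H, sigma):
    # Error covariance of a linear least-squares estimate:
    # P = (H^T H)^-1 * sigma^2, for a 2-parameter state.
    a = sum(h[0] * h[0] for h in H)
    b = sum(h[0] * h[1] for h in H)
    d = sum(h[1] * h[1] for h in H)
    det = a * d - b * b
    s2 = sigma**2
    return [[d / det * s2, -b / det * s2],
            [-b / det * s2, a / det * s2]]

# Rows of H are unit line-of-sight vectors for range measurements
# (hypothetical geometry): ranges constrain position along the line of sight.
good_geom = [[1.0, 0.0], [0.0, 1.0]]                       # orthogonal
poor_geom = [[1.0, 0.0], [math.cos(0.1), math.sin(0.1)]]   # nearly collinear

p_good = covariance_2x2(good_geom, sigma=1.0)
p_poor = covariance_2x2(poor_geom, sigma=1.0)
# Nearly collinear lines of sight blow up the cross-line uncertainty,
# analogous to Doppler's weakness in the downtrack direction.
print(p_poor[1][1] > p_good[1][1])
```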
NASA Astrophysics Data System (ADS)
Liu, Wei; Sneeuw, Nico; Jiang, Weiping
2017-04-01
The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling by the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in gravity field retrieval carry errors, which alias directly into the recovered gravity field. The GRACE satellites fly a non-repeat orbit, which precludes alias-error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, resulting in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
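Spectral estimation on an unevenly sampled series, of the kind such a data-driven approach must handle, can be sketched with a least-squares sinusoid fit (the idea behind the Lomb-Scargle periodogram); the frequencies below are arbitrary illustrations, not tidal constituents.

```python
import math, random

def ls_power(times, values, freq):
    # Least-squares fit of a sinusoid at `freq` to an unevenly sampled
    # series: solve the 2x2 normal equations for cos/sin amplitudes.
    w = 2 * math.pi * freq
    c = [math.cos(w * t) for t in times]
    s = [math.sin(w * t) for t in times]
    cc = sum(x * x for x in c); ss = sum(x * x for x in s)
    cs = sum(x * y for x, y in zip(c, s))
    cy = sum(x * y for x, y in zip(c, values))
    sy = sum(x * y for x, y in zip(s, values))
    det = cc * ss - cs * cs
    a = (ss * cy - cs * sy) / det
    b = (cc * sy - cs * cy) / det
    return a * a + b * b   # squared amplitude at this frequency

# Unevenly sampled series containing a single periodic signal
rng = random.Random(3)
times = sorted(rng.uniform(0, 100) for _ in range(300))
values = [math.sin(2 * math.pi * 0.25 * t) for t in times]

powers = {f / 100: ls_power(times, values, f / 100) for f in range(5, 50)}
peak = max(powers, key=powers.get)
print(peak)
```

Because the fit never assumes regular sampling, the periodic component is recovered despite the gaps, which is the property needed for non-monthly, gappy gravity field series.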
García-García, Isabel; Zeighami, Yashar; Dagher, Alain
2017-06-01
Surprises are important sources of learning. Cognitive scientists often refer to surprises as "reward prediction errors," a parameter that captures discrepancies between expectations and actual outcomes. Here, we integrate neurophysiological and functional magnetic resonance imaging (fMRI) results addressing the processing of reward prediction errors and how they might be altered in drug addiction and Parkinson's disease. By increasing phasic dopamine responses, drugs might accentuate prediction error signals, causing increases in fMRI activity in mesolimbic areas in response to drugs. Chronic substance dependence, by contrast, has been linked with compromised dopaminergic function, which might be associated with blunted fMRI responses to pleasant non-drug stimuli in mesocorticolimbic areas. In Parkinson's disease, dopamine replacement therapies seem to induce impairments in learning from negative outcomes. The present review provides a holistic overview of reward prediction errors across different pathologies and might inform future clinical strategies targeting impulsive/compulsive disorders.
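The reward prediction error itself is commonly formalized as a temporal-difference delta rule; the following is a generic textbook sketch, not a model fitted to the imaging data discussed.

```python
# Reward prediction error as a temporal-difference delta rule;
# values are illustrative only.
def prediction_error(reward, value_next, value_current, gamma=0.9):
    # delta = r + gamma * V(s') - V(s): positive when the outcome
    # beats expectations, negative when it falls short.
    return reward + gamma * value_next - value_current

value = 0.0
alpha = 0.5  # learning rate
for _ in range(20):             # repeated, initially surprising rewards...
    delta = prediction_error(1.0, 0.0, value)
    value += alpha * delta      # ...shrink the surprise as V converges
print(round(value, 3))
```

As the value estimate converges, the prediction error vanishes, mirroring how phasic dopamine responses shift away from fully predicted rewards.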
Using a Commercial Ethernet PHY Device in a Radiation Environment
NASA Technical Reports Server (NTRS)
Parks, Jeremy; Arani, Michael; Arroyo, Roberto
2014-01-01
This work involved placing a commercial Ethernet PHY on its own power boundary, with limited current supply, and providing detection methods to determine when the device is not operating and when it needs either a reset or a power-cycle. The device must be radiation-tested and free of destructive latchup errors. The commercial Ethernet PHY's own power boundary must be supplied by a current-limited power regulator that has an enable (for power cycling), and its maximum power output must not exceed the PHY's input requirements, thus preventing damage to the device. A regulator with configurable output limits and short-circuit protection (such as the RHFL4913 rad-hard positive voltage regulator family) is ideal. This prevents a catastrophic failure due to radiation (such as a short between the commercial device's power and ground) from taking down the board's main power. Logic provided on the board detects errors in the PHY; an FPGA (field-programmable gate array) with an embedded Ethernet MAC (Media Access Control) works well. The error detection includes monitoring the PHY's interrupt line and the status of the Ethernet's switched power. When the PHY is determined to be non-functional, the logic device resets the PHY, which will often clear radiation-induced errors. If this doesn't work, the logic device power-cycles the PHY by toggling the regulator's enable input. This should clear almost all radiation-induced errors, provided the device is not latched up.
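The reset-then-power-cycle recovery logic can be sketched as a small state machine; the class and callbacks below are hypothetical stand-ins for the FPGA logic and the regulator-enable line.

```python
# Escalating recovery for a non-responsive device: try a soft reset first,
# then a power-cycle via the regulator enable. Hardware calls are stubbed.
class PhyWatchdog:
    def __init__(self, reset_fn, power_cycle_fn):
        self.reset_fn = reset_fn
        self.power_cycle_fn = power_cycle_fn
        self.reset_attempted = False

    def on_health_check(self, phy_ok):
        if phy_ok:
            self.reset_attempted = False   # healthy: clear escalation state
        elif not self.reset_attempted:
            self.reset_fn()                # first failure: soft reset
            self.reset_attempted = True
        else:
            self.power_cycle_fn()          # still down: toggle regulator enable
            self.reset_attempted = False

log = []
wd = PhyWatchdog(lambda: log.append("reset"),
                 lambda: log.append("power-cycle"))
for ok in [True, False, False, True]:
    wd.on_health_check(ok)
print(log)
```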
Reliability of Memories Protected by Multibit Error Correction Codes Against MBUs
NASA Astrophysics Data System (ADS)
Ming, Zhu; Yi, Xiao Li; Chang, Liu; Wei, Zhang Jian
2011-02-01
As technology scales, more and more memory cells can be placed in a die, so the probability that a single event induces multiple bit upsets (MBUs) in adjacent memory cells increases. Multibit error correction codes (MECCs) are generally effective approaches to mitigating MBUs in memories. To evaluate the robustness of protected memories, reliability models have been widely studied; instead of irradiation experiments, such models can quickly evaluate the reliability of memories early in the design. An accurate model must account for several situations. First, when MBUs are present in memories, the errors induced by several events may overlap each other, which occurs more frequently than in the single event upset (SEU) case. Furthermore, radiation experiments show that the probability of MBUs depends strongly on the angle of the radiation event. However, reliability models that consider both the overlap of multiple bit errors and the angle of the radiation event have not been proposed in the literature. In this paper, a more accurate model of memories with MECCs is presented. Both the overlap of multiple bit errors and event angles are considered, producing a more precise calculation of mean time to failure (MTTF) for memory systems under MBUs. In addition, memories with and without scrubbing are analyzed in the proposed model. Finally, we evaluate the reliability of memories under MBUs in Matlab. The simulation results verify the validity of the proposed model.
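A stripped-down version of such a reliability model, Poisson upsets accumulating between scrubs in a memory protected by a single-error-correcting code, can be sketched as follows; it omits the paper's error-overlap and angle dependence, and the rates are hypothetical.

```python
import math

def word_failure_prob(lam, interval, correctable=1):
    # Probability that more than `correctable` upsets hit one word within
    # a scrub interval (Poisson arrivals, independent bit errors).
    mean = lam * interval
    p_ok = sum(math.exp(-mean) * mean**k / math.factorial(k)
               for k in range(correctable + 1))
    return 1 - p_ok

def memory_mttf(lam, interval, n_words, correctable=1):
    # Approximate the system failure rate from independent per-interval
    # word failures; MTTF is the reciprocal of that rate.
    p = word_failure_prob(lam, interval, correctable)
    rate = n_words * p / interval
    return 1 / rate

# Hypothetical numbers: upsets per word per second, scrub intervals in s.
fast = memory_mttf(lam=1e-6, interval=3600, n_words=2**20)
slow = memory_mttf(lam=1e-6, interval=36000, n_words=2**20)
print(fast > slow)  # scrubbing more often raises MTTF sharply
```

The quadratic dependence of the two-upset probability on the scrub interval is why scrubbing appears explicitly in such models.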
Engineering evaluations and studies. Report for IUS studies
NASA Technical Reports Server (NTRS)
1981-01-01
Reviews, investigations, and analyses of the Inertial Upper Stage (IUS) Spacecraft Tracking and Data Network (STDN) transponder are summarized. Carrier lock detector performance for Tracking and Data Relay Satellite System (TDRSS) dual-mode operation is discussed, as is the problem of predicting instantaneous frequency error in the carrier loop. Costas loop performance analysis is critiqued and the static tracking phase error induced by thermal noise biases is discussed.
Adaptive Harmonic Balance Method for Unsteady, Nonlinear, One-Dimensional Periodic Flows
2002-09-01
Splitting-induced error is most prominent for high-frequency unsteady flows; an experimental analysis was performed to assess the actual effect of splitting error.
Nickla, Debora L; Sharda, Vandhana; Troilo, David
2005-04-01
In chicks, the temporal response characteristics to form deprivation and to spectacle lens wear (myopic and hyperopic defocus) show essential differences, suggesting that the emmetropization system "weights" the visual signals differently. To further explore how the eye integrates opposing visual signals, we examined the responses to myopic defocus induced by prior form deprivation vs. that induced by positive spectacle lenses, in both cases alternating with form deprivation. Three experimental paradigms were used: 1) Form deprivation was induced by monocular occluders for 7 days. Over the subsequent 7 days, the occluders were removed daily for 12 hours (n = 13), 4 hours (n = 7), 2 hours (n = 7), or 0 hours (n = 6). 2) Birds were form-deprived on day 12. Over the subsequent 7 days, occluders were replaced with a +10 D lens for 2 hours per day (n = 13). 3) Starting at day 11, a +10 D lens was placed over one eye for 2 hours (n = 13), 3 hours (n = 5), or 6 hours (n = 10) per day and were otherwise untreated. Ocular dimensions were measured with high-frequency A-scan ultrasonography; refractive errors were measured by streak retinoscopy at various intervals. In recovering eyes, 2 hours per day of myopic defocus was as effective as 12 hours at inducing refractive and axial recovery (change in refractive error: +10 D vs. +13 D, respectively). By contrast, 2 hours of lens-induced defocus (alternating with form deprivation) was not sufficient to induce refractive or axial compensation (change in refractive error: -1.7 D). When myopic defocus alternated with unrestricted vision, 6 hours per day were sufficient to induce nearly full compensation (2 hours vs. 6 hours: 4.4 D vs. 8.2 D; p < 0.0005). Choroids showed rapid increases in thickness to the daily episodes of myopic defocus; these resulted in "long-term" thickness changes in recovering eyes and eyes wearing lenses for 3 or 6 hours per day. 
The response to myopic defocus induced by prior form deprivation is more robust than the response induced by positive lenses, suggesting that the underlying mechanisms differ. Presumably, this difference is related to the size of the eye at the onset. Compensatory decreases in growth rate occur without full compensatory choroidal thickening.
Functional Language Shift to the Right Hemisphere in Patients with Language-Eloquent Brain Tumors
Krieg, Sandro M.; Sollmann, Nico; Hauck, Theresa; Ille, Sebastian; Foerschler, Annette; Meyer, Bernhard; Ringel, Florian
2013-01-01
Objectives Language function is mainly located within the left hemisphere of the brain, especially in right-handed subjects. However, functional MRI (fMRI) has demonstrated changes of language organization in patients with left-sided perisylvian lesions to the right hemisphere. Because intracerebral lesions can impair fMRI, this study was designed to investigate human language plasticity with a virtual lesion model using repetitive navigated transcranial magnetic stimulation (rTMS). Experimental design Fifteen patients with lesions of left-sided language-eloquent brain areas and 50 healthy and purely right-handed participants underwent bilateral rTMS language mapping via an object-naming task. All patients were proven to have left-sided language function during awake surgery. The rTMS-induced language errors were categorized into 6 different error types. The error ratio (induced errors/number of stimulations) was determined for each brain region on both hemispheres. A hemispheric dominance ratio was then defined for each region as the quotient of the error ratio (left/right) of the corresponding area of both hemispheres (ratio >1 = left dominant; ratio <1 = right dominant). Results Patients with language-eloquent lesions showed a statistically significantly lower ratio than healthy participants concerning “all errors” and “all errors without hesitations”, which indicates a higher participation of the right hemisphere in language function. Yet, there was no cortical region with pronounced difference in language dominance compared to the whole hemisphere. Conclusions This is the first study that shows by means of an anatomically accurate virtual lesion model that a shift of language function to the non-dominant hemisphere can occur. PMID:24069410
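The dominance-ratio computation defined above reduces to a few lines; the error and stimulation counts below are hypothetical, not the study's data.

```python
# Hemispheric dominance ratio: error ratio (induced errors / stimulations)
# of the left-hemisphere region divided by that of its right counterpart.
def error_ratio(errors, stimulations):
    return errors / stimulations

def dominance_ratio(left, right):
    # > 1: left dominant; < 1: right dominant (the study's convention)
    return error_ratio(*left) / error_ratio(*right)

# Hypothetical counts per region: (induced errors, stimulations)
healthy = dominance_ratio(left=(12, 100), right=(4, 100))
patient = dominance_ratio(left=(8, 100), right=(7, 100))
print(healthy > 1, patient < healthy)
```

A lower ratio in patients, as here, is exactly the signature the study reads as increased right-hemisphere participation.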
Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.
Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W
2017-06-22
Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.
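The spatiotemporal index used for validation can be sketched as follows, assuming the usual recipe of time- and amplitude-normalizing each repetition and summing the across-trial standard deviations; the trial data are synthetic.

```python
import random
import statistics

def time_normalize(traj, n=50):
    # Linearly resample a trajectory to n points on normalized time [0, 1]
    out = []
    for i in range(n):
        pos = i * (len(traj) - 1) / (n - 1)
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, len(traj) - 1)
        out.append(traj[lo] * (1 - frac) + traj[hi] * frac)
    return out

def spatiotemporal_index(trials, n=50):
    # STI: sum of across-trial standard deviations after each repetition
    # is amplitude-normalized (z-scored) and time-normalized.
    norm = []
    for t in trials:
        mu = statistics.mean(t)
        sd = statistics.pstdev(t)
        norm.append(time_normalize([(x - mu) / sd for x in t], n))
    return sum(statistics.pstdev([t[i] for t in norm]) for i in range(n))

# Hypothetical articulator trials: identical vs. jittered repetitions
base = [i * (10 - i) for i in range(11)]   # smooth open-close gesture
stable = [base, base[:], base[:]]
rng = random.Random(0)
jittered = [[x + rng.uniform(-3, 3) for x in base] for _ in range(3)]
print(spatiotemporal_index(stable) < spatiotemporal_index(jittered))
```

Higher kinematic variability across repetitions raises the index, which is how increased variability under delay shows up in this measure.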
Anti-retroviral therapy-induced status epilepticus in "pseudo-HIV serodeconversion".
Etgen, Thorleif; Eberl, Bernhard; Freudenberger, Thomas
2010-01-01
Diligence in the interpretation of results is essential, as information gained from a psychiatric patient's history is often limited. Nonobservance of established guidelines may lead to a wrong diagnosis, prompt inappropriate therapy, and result in life-threatening situations. Communication errors between hospitals and doctors, and uncritical acceptance of prior diagnoses, add substantially to this problem. We present a patient with alcohol-related dementia who received anti-retroviral therapy that provoked non-convulsive status epilepticus. HIV serodeconversion was considered after our laboratory result yielded an HIV-negative status. Critical review of previous diagnostic investigations revealed several errors in the diagnosis of HIV infection, leading to a "pseudo-serodeconversion." Finally, anti-retroviral therapy could be discontinued. Copyright © 2010 Elsevier Inc. All rights reserved.
A Variational Formulation of Macro-Particle Algorithms for Kinetic Plasma Simulations
NASA Astrophysics Data System (ADS)
Shadwick, B. A.
2013-10-01
Macro-particle-based simulation methods are in widespread use in plasma physics; their computational efficiency and intuitive nature are largely responsible for their longevity. In the main, these algorithms are formulated by approximating the continuous equations of motion. For systems governed by a variational principle (such as collisionless plasmas), approximating the equations of motion is known to introduce anomalous behavior, especially in system invariants. We present a variational formulation of particle algorithms for plasma simulation based on a reduction of the distribution function onto a finite collection of macro-particles. As in the usual Particle-In-Cell (PIC) formulation, these macro-particles have a definite momentum and are spatially extended. The primary advantage of this approach is the preservation of the link between symmetries and conservation laws. For example, nothing in the reduction introduces explicit time dependence to the system and, therefore, the continuous-time equations of motion exactly conserve energy; thus, these models are free of grid-heating. In addition, the variational formulation allows for constructing models of arbitrary spatial and temporal order. In contrast, the overall accuracy of the usual PIC algorithm is at most second order, owing to the nature of the force interpolation between the gridded field quantities and the (continuous) particle position. Again in contrast to the usual PIC algorithm, here the macro-particle shape is arbitrary; the spatial extent is completely decoupled from both the grid-size and the ``smoothness'' of the shape; smoother particle shapes are not necessarily larger. For simplicity, we restrict our discussion to one-dimensional, non-relativistic, un-magnetized, electrostatic plasmas. We comment on the extension to the electromagnetic case. Supported by the US DoE under contract numbers DE-FG02-08ER55000 and DE-SC0008382.
Using heuristic evaluations to assess the safety of health information systems.
Carvalho, Christopher J; Borycki, Elizabeth M; Kushniruk, Andre W
2009-01-01
Health information systems (HISs) are typically seen as a mechanism for reducing medical errors. There is, however, evidence that technology may actually be a cause of errors. As a result, it is crucial to fully test any system prior to its implementation. At present, evidence-based evaluation heuristics do not exist for assessing aspects of interface design that lead to medical errors. A three-phase study was conducted to develop evidence-based heuristics for evaluating interfaces. Phase 1 consisted of a systematic review of the literature. In Phase 2, a comprehensive list of 33 evaluation heuristics that could be used to test for potential technology-induced errors was developed based on the review. Phase 3 involved applying these healthcare-specific heuristics to evaluate a HIS.
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, several parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Through this qualitative and quantitative fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated.
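For the reliability figures named above (failure rate, unavailability, MTTF), a minimal sketch of the standard exponential-model formulas for a series system, i.e. a top-level OR gate where any component failure fails the system. The component rates and repair rate below are hypothetical, not the paper's measurements; tools like Isograph derive the same quantities from the full fault tree.

```python
def series_system_metrics(rates, repair_rate):
    """Failure rate, MTTF and steady-state unavailability for a series
    (OR-gate) system of exponentially distributed components."""
    lam = sum(rates)                   # OR gate: failure rates add
    mttf = 1.0 / lam                   # mean time to failure
    # steady-state unavailability with a common repair rate mu:
    # U = lam / (lam + mu)
    unavail = lam / (lam + repair_rate)
    return lam, mttf, unavail

# Hypothetical per-block soft-error rates (failures/hour) and repair rate
lam, mttf, u = series_system_metrics([1e-6, 5e-7, 2e-6], repair_rate=0.1)
```

The OR-gate rate addition and the U = lam/(lam+mu) steady-state form are the textbook results for independent exponential failure and repair processes.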
Atmospheric Dispersion Effects in Weak Lensing Measurements
Plazas, Andrés Alejandro; Bernstein, Gary
2012-10-01
The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and the i band for LSST, but still as much as 5 times the requirements for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for the LSST r band. But g-band effects remain large enough that it seems likely that induced systematics will dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.
Error monitoring issues for common channel signaling
NASA Astrophysics Data System (ADS)
Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.
1994-04-01
Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring, with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS), as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
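The SS7 level-2 signal-unit error-rate monitor is commonly described as a leaky bucket: the counter increments on each errored signal unit, leaks one count per 256 signal units received, and triggers a changeover at a threshold of 64 (the standard Q.703 figures). The sketch below simulates that textbook scheme against a Bernoulli error process; it is an illustration of the changeover-rate behavior the paper analyzes, not the authors' exact analysis, and the traffic model is made up.

```python
import random

def suerm_changeovers(error_prob, n_units, T=64, D=256, seed=0):
    """Leaky-bucket error monitor: +1 per errored signal unit, leak 1
    every D units; count changeovers triggered at threshold T."""
    rng = random.Random(seed)
    counter, changeovers = 0, 0
    for i in range(1, n_units + 1):
        if rng.random() < error_prob:
            counter += 1
        if i % D == 0 and counter > 0:
            counter -= 1                 # periodic leak
        if counter >= T:
            changeovers += 1
            counter = 0                  # link realigned, monitor restarts
    return changeovers

# Well below the critical rate (~1/D) the bucket leaks faster than it fills
low = suerm_changeovers(0.001, 200_000)
# Above it, the monitor trips repeatedly: the oscillation regime
high = suerm_changeovers(0.02, 200_000)
```

This reproduces the qualitative point in item (2): there is a band of error rates in which the monitor fires over and over, producing the changeover/changeback oscillations seen in the field data.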
NASA Astrophysics Data System (ADS)
Lin, Xiaomei; Chang, Penghui; Chen, Gehua; Lin, Jingjun; Liu, Ruixiang; Yang, Hao
2015-11-01
Our recent work has determined the carbon content in a melting ferroalloy by laser-induced breakdown spectroscopy (LIBS). The emission spectrum of carbon that we obtained in the laboratory is suitable for carbon content determination in a melting ferroalloy but we cannot get the expected results when this method is applied in industrial conditions: there is always an unacceptable error of around 4% between the actual value and the measured value. By comparing the measurement condition in the industrial condition with that in the laboratory, the results show that the temperature of the molten ferroalloy samples to be measured is constant under laboratory conditions while it decreases gradually under industrial conditions. However, temperature has a considerable impact on the measurement of carbon content, and this is the reason why there is always an error between the actual value and the measured value. In this paper we compare the errors of carbon content determination at different temperatures to find the optimum reference temperature range which can fit the requirements better in industrial conditions and, hence, make the measurement more accurate. The results of the comparative analyses show that the measured value of the carbon content in molten state (1620 K) is consistent with the nominal value of the solid standard sample (error within 0.7%). In fact, it is the most accurate measurement in the solid state. Based on this, we can effectively improve the accuracy of measurements in laboratory and can provide a reference standard of temperature for the measurement in industrial conditions. supported by National Natural Science Foundation of China (No. 51374040), and supported by Laser-Induced Plasma Spectroscopy Equipment Development and Application, China (No. 2014YQ120351)
Fisseni, Gregor; Pentzek, Michael; Abholz, Heinz-Harald
2008-02-01
GPs' recollections about their 'most serious errors in treatment' and about the consequences for themselves. Does it make a difference who (else) contributed to the error, or to its discovery or disclosure? Anonymous questionnaire study concerning the 'three most serious errors in your career as a GP'. The participating doctors were given an operational definition of 'serious error'. They applied a special recall technique, using patient-induced associations to bring to mind former 'serious errors'. The recall method and the semi-structured 25-item questionnaire used were developed and piloted by the authors. The items were analysed quantitatively and by qualitative content analysis. General practices in the North Rhine region in Germany: 32 GPs anonymously reported 75 'most serious errors'. In more than half of the cases analysed, other people contributed considerably to the GPs' serious errors. Most of the errors were discovered and disclosed to the patient by doctors: either by the GPs themselves or by colleagues. Many GPs suffered loss of reputation and loss of patients. However, the number of patients staying with their GP clearly exceeded the number leaving their GP, depending on who else contributed to the error, who discovered it and who disclosed it to the patient. The majority of patients still trusted their GP after a serious error, especially if the GP was not the only one who contributed to the error and if the GP played an active role in the discovery and disclosure of the error.
Error-Induced Learning as a Resource-Adaptive Process in Young and Elderly Individuals
NASA Astrophysics Data System (ADS)
Ferdinand, Nicola K.; Weiten, Anja; Mecklinger, Axel; Kray, Jutta
Thorndike described in his law of effect [44] that actions followed by positive events are more likely to be repeated in the future, whereas actions that are followed by negative outcomes are less likely to be repeated. This implies that behavior is evaluated in the light of its potential consequences, and non-reward events (i.e., errors) must be detected for reinforcement learning to take place. In short, humans have to monitor their performance in order to detect and correct errors, and this allows them to successfully adapt their behavior to changing environmental demands and acquire new behavior, i.e., to learn.
Relative peripheral hyperopic defocus alters central refractive development in infant monkeys
Smith, Earl L.; Hung, Li-Fang; Huang, Juan
2009-01-01
Understanding the role of peripheral defocus on central refractive development is critical because refractive errors can vary significantly with eccentricity and peripheral refractions have been implicated in the genesis of central refractive errors in humans. Two rearing strategies were used to determine whether peripheral hyperopia alters central refractive development in rhesus monkeys. In intact eyes, lens-induced relative peripheral hyperopia produced central axial myopia. Moreover, eliminating the fovea by laser photoablation did not prevent compensating myopic changes in response to optically imposed hyperopia. These results show that peripheral refractive errors can have a substantial impact on central refractive development in primates. PMID:19632261
MUSIC: MUlti-Scale Initial Conditions
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Abel, Tom
2013-11-01
MUSIC generates multi-scale initial conditions with multiple levels of refinements for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.
Phenomenology of the sound-induced flash illusion.
Abadi, Richard V; Murphy, Jonathan S
2014-07-01
Past studies, using pairings of auditory tones and visual flashes that were static and coincident in space but variable in time, demonstrated errors in judging the temporal patterning of the visual flashes: the sound-induced flash illusion. These errors took one of two forms: under-reporting (sound-induced fusion) or over-reporting (sound-induced fission) of the flash numbers. Our study had three objectives: to examine the robustness of both illusions and to consider the effects of stimulus set and response bias. To this end, we used an extended range of fixed-spatial-location flash-tone pairings, examined stimuli that were variable in space and time, and measured confidence in judging flash numbers. Our results indicated that the sound-induced flash illusion is a robust percept, a finding underpinned by the confidence measures. Sound-induced fusion was found to be more robust than sound-induced fission and the most likely outcome when high numbers of flashes were incorporated within an incongruent flash-tone pairing. Conversely, sound-induced fission was the most likely outcome for the flash-tone pairing which contained two flashes. Fission was also shown to be strongly driven by stimulus confounds such as categorical boundary conditions (e.g. flash-tone pairings with ≤2 flashes) and compressed response options. These findings suggest that whilst both fission and fusion are associated with 'auditory driving', the differences in the occurrence and strength of the two illusions reflect not only the separate neuronal mechanisms underlying auditory and visual signal processing, but also the test conditions that have been used to investigate the sound-induced flash illusion.
USDA-ARS's Scientific Manuscript database
Since oxygen (O2) absorption of light becomes more pronounced at higher pressure levels, even a few meters distance between the target and the sensor can strongly affect canopy leaving Solar-Induced chlorophyll Fluorescence (SIF) retrievals. This study was conducted to quantify the consequent error ...
NASA Astrophysics Data System (ADS)
Saha, Subhajit; Mondal, Anindita
2018-04-01
We would like to rectify an error regarding the validity of the first law of thermodynamics (FLT) on the apparent horizon of a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) universe for the gravitationally induced particle creation scenario with constant specific entropy and an arbitrary particle creation rate (see Sect. 3.1 of the original article).
Post-error Brain Activity Correlates With Incidental Memory for Negative Words
Senderecka, Magdalena; Ociepka, Michał; Matyjek, Magdalena; Kroczek, Bartłomiej
2018-01-01
The present study had three main objectives. First, we aimed to evaluate whether short-duration affective states induced by negative and positive words can lead to increased error-monitoring activity relative to a neutral task condition. Second, we intended to determine whether such an enhancement is limited to words of specific valence or is a general response to arousing material. Third, we wanted to assess whether post-error brain activity is associated with incidental memory for negative and/or positive words. Participants performed an emotional stop-signal task that required response inhibition to negative, positive or neutral nouns while EEG was recorded. Immediately after the completion of the task, they were instructed to recall as many of the presented words as they could in an unexpected free recall test. We observed significantly greater brain activity in the error-positivity (Pe) time window in both negative and positive trials. The error-related negativity amplitudes were comparable in both the neutral and emotional arousing trials, regardless of their valence. Regarding behavior, increased processing of emotional words was reflected in better incidental recall. Importantly, the memory performance for negative words was positively correlated with the Pe amplitude, particularly in the negative condition. The source localization analysis revealed that the subsequent memory recall for negative words was associated with widespread bilateral brain activity in the dorsal anterior cingulate cortex and in the medial frontal gyrus, which was registered in the Pe time window during negative trials. The present study has several important conclusions. First, it indicates that the emotional enhancement of error monitoring, as reflected by the Pe amplitude, may be induced by stimuli with symbolic, ontogenetically learned emotional significance. 
Second, it indicates that the emotion-related enhancement of the Pe occurs across both negative and positive conditions; thus it is preferentially driven by the arousal content of affective stimuli. Third, our findings suggest that enhanced error monitoring and facilitated recall of negative words may both reflect responsivity to negative events. More speculatively, they may also indicate that post-error activity of the medial prefrontal cortex selectively supports encoding of negative stimuli and contributes to their privileged access to memory. PMID:29867408
Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi
2015-10-01
The measured distance error caused by double peaks in a BOTDR (Brillouin optical time domain reflectometer) system is a kind of Brillouin scattering spectrum (BSS) deformation, discussed and simulated here for the first time, to the best of the authors' knowledge. The double peak, as a form of Brillouin spectrum deformation, is important for the enhancement of spatial resolution, measurement accuracy, and crack detection. Due to the variation of the peak powers of the BSS along the fiber, the measured starting point of a step-shape frequency transition region is shifted, which results in distance errors. A zero-padded short-time Fourier transform (STFT) can restore the transition-induced double peaks in the asymmetric and deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovery method, based on double-peak detection and the corresponding BSS deformation, can be applied to calculate the real starting point, which improves the distance accuracy of the STFT-based BOTDR system.
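The zero-padding trick the abstract relies on samples the spectrum on a finer frequency grid so that two nearby peaks can both be located precisely. A minimal NumPy sketch (the two-tone test signal and all frequencies are illustrative, not BOTDR data or the authors' code):

```python
import numpy as np

def zero_padded_spectrum(x, fs, pad_factor=8):
    """Windowed, zero-padded FFT of one segment: padding to
    pad_factor * len(x) interpolates the spectrum onto a finer grid,
    sharpening peak localization (it does not add true resolution)."""
    n = len(x)
    w = np.hanning(n)
    X = np.fft.rfft(x * w, n=pad_factor * n)
    freqs = np.fft.rfftfreq(pad_factor * n, d=1.0 / fs)
    return freqs, np.abs(X)

# Two-tone signal standing in for a deformed, double-peaked spectrum
fs = 1000.0
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.6 * np.sin(2 * np.pi * 130 * t)
freqs, mag = zero_padded_spectrum(x, fs)
peak = freqs[np.argmax(mag)]     # strongest component, near 100 Hz
```

Sliding this windowed transform along the trace gives the STFT; locating the dominant peak in each window is the per-position step behind the double-peak detection described above.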
Kalman filtered MR temperature imaging for laser induced thermal therapies.
Fuentes, D; Yung, J; Hazle, J D; Weinberg, J S; Stafford, R J
2012-04-01
The feasibility of using a stochastic form of the Pennes bioheat model within a 3-D finite-element-based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser-induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L(2) (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high-quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, ∆t < 10 s, until the data corruption subsides. In its present form, the KF-MRTI method fails to compensate for consecutive time periods of data loss of ∆t > 10 s.
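The mechanism the abstract evaluates, a Kalman filter coasting on its model prediction whenever a measurement frame is lost, can be sketched in scalar form. The paper uses a 3-D finite-element Pennes bioheat model; the random-walk model and all numbers below are purely illustrative.

```python
def kalman_1d(measurements, x0=0.0, p0=1.0, q=0.01, r=0.25):
    """Scalar random-walk Kalman filter. A None measurement is treated
    as data loss: the filter skips the update and coasts on its
    prediction, with the variance p growing by q each step."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                     # predict (random-walk model)
        if z is not None:             # update only when data exist
            k = p / (p + r)           # Kalman gain
            x = x + k * (z - x)
            p = (1 - k) * p
        out.append(x)
    return out

# Constant true temperature 37.0 with two lost frames (None)
zs = [37.1, 36.9, 37.0, None, None, 37.2, 37.0]
est = kalman_1d(zs, x0=37.0)
```

During the gap the state estimate is simply held (the random-walk prediction), which mirrors the abstract's finding: short dropouts are bridged well, but the uncertainty grows with every coasted step, so long gaps degrade the estimate.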
Adams, C N; Kattawar, G W
1993-08-20
We have developed a Monte Carlo program that is capable of calculating both the scalar and the Stokes vector radiances in an atmosphere-ocean system in a single computer run. The correlated sampling technique is used to compute radiance distributions for both the scalar and the Stokes vector formulations simultaneously, thus permitting a direct comparison of the errors induced. We show the effect of the volume-scattering phase function on the errors in radiance calculations when one neglects polarization effects. The model used in this study assumes a conservative Rayleigh-scattering atmosphere above a flat ocean. Within the ocean, the volume-scattering function (the first element in the Mueller matrix) is varied according to both a Henyey-Greenstein phase function, with asymmetry factors G = 0.0, 0.5, and 0.9, and also to a Rayleigh-scattering phase function. The remainder of the reduced Mueller matrix for the ocean is taken to be that for Rayleigh scattering, which is consistent with ocean water measurement.
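Drawing scattering angles from a Henyey-Greenstein phase function with a given asymmetry factor, as the ocean model above does for G = 0.0, 0.5 and 0.9, has a standard closed-form inverse-CDF recipe. The sketch below is that textbook recipe, not the authors' Monte Carlo code; a useful check is that the sample mean cosine converges to g.

```python
import random

def sample_hg_costheta(g, u):
    """Inverse-CDF sample of the Henyey-Greenstein scattering-angle
    cosine for asymmetry factor g, from a uniform deviate u in [0,1)."""
    if abs(g) < 1e-8:
        return 2.0 * u - 1.0                  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = random.Random(42)
g = 0.9
n = 200_000
mean_mu = sum(sample_hg_costheta(g, rng.random()) for _ in range(n)) / n
# mean_mu converges to g: the asymmetry factor is the mean cosine
```

For g = 0.9 the distribution is sharply forward-peaked, which is exactly the regime where the abstract finds the largest polarization-neglect errors.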
The Earth isn't flat: The (large) influence of topography on geodetic fault slip imaging.
NASA Astrophysics Data System (ADS)
Thompson, T. B.; Meade, B. J.
2017-12-01
While earthquakes both occur near and generate steep topography, most geodetic slip inversions assume that the Earth's surface is flat. We have developed a new boundary element tool, Tectosaur, with the capability to study fault and earthquake problems including complex fault system geometries, topography, material property contrasts, and millions of elements. Using Tectosaur, we study the model error induced by neglecting topography in both idealized synthetic fault models and for the cases of the MW=7.3 Landers and MW=8.0 Wenchuan earthquakes. Near the steepest topography, we find the use of flat Earth dislocation models may induce errors of more than 100% in the inferred slip magnitude and rake. In particular, neglecting topographic effects leads to an inferred shallow slip deficit. Thus, we propose that the shallow slip deficit observed in several earthquakes may be an artefact resulting from the systematic use of elastic dislocation models assuming a flat Earth. Finally, using this study as an example, we emphasize the dangerous potential for forward model errors to be amplified by an order of magnitude in inverse problems.
Revision of laser-induced damage threshold evaluation from damage probability data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas
2013-04-15
In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of the LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. The widely accepted linear fitting resulted in systematic errors when estimating the LIDT and its error bars. For the same purpose, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and a maximum-likelihood fitting technique is introduced and studied. Such an approach exhibited clear advantages over conventional linear fitting and led to a more reproducible LIDT evaluation. Furthermore, LIDT error bars with realistic values are obtained as a natural outcome of the parametric fitting. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
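The core move above, fitting a parametric damage-probability curve to binary 1-on-1 outcomes by maximum likelihood instead of linear fitting of binned frequencies, can be sketched with a simple grid-search MLE over a logistic curve. The paper's varying-kernel regression is more elaborate; the logistic form, the synthetic data and every number below are assumptions for illustration only.

```python
import math, random

def fit_lidt_mle(fluences, damaged, f_grid, w_grid):
    """Grid-search maximum-likelihood fit of a logistic damage
    probability P(F) = 1/(1+exp(-(F-f0)/w)) to binary 1-on-1 data."""
    def loglik(f0, w):
        ll = 0.0
        for f, d in zip(fluences, damaged):
            p = 1.0 / (1.0 + math.exp(-(f - f0) / w))
            p = min(max(p, 1e-12), 1 - 1e-12)   # guard log(0)
            ll += math.log(p) if d else math.log(1 - p)
        return ll
    return max(((f0, w) for f0 in f_grid for w in w_grid),
               key=lambda fw: loglik(*fw))

# Synthetic 1-on-1 test: true threshold f0 = 10, width w = 1 (made up)
rng = random.Random(1)
fl = [5 + 10 * rng.random() for _ in range(400)]
dm = [rng.random() < 1 / (1 + math.exp(-(f - 10.0))) for f in fl]
f0_hat, w_hat = fit_lidt_mle(fl, dm,
                             [8 + 0.1 * i for i in range(41)],
                             [0.5, 1.0, 1.5, 2.0])
```

Because the fit maximizes the Bernoulli likelihood of each individual shot, confidence intervals on the threshold fall out of the likelihood surface, which is the "error bars as a natural outcome" point in the abstract.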
Locked-mode avoidance and recovery without momentum input
NASA Astrophysics Data System (ADS)
Delgado-Aparicio, L.; Rice, J. E.; Wolfe, S.; Cziegler, I.; Gao, C.; Granetz, R.; Wukitch, S.; Terry, J.; Greenwald, M.; Sugiyama, L.; Hubbard, A.; Hugges, J.; Marmar, E.; Phillips, P.; Rowan, W.
2015-11-01
Error-field-induced locked modes (LMs) have been studied in Alcator C-Mod at ITER-Bϕ, without NBI fueling and momentum input. Delay of the mode onset and locked-mode recovery have been successfully obtained without external momentum input using Ion Cyclotron Resonance Heating (ICRH). The use of external heating in sync with the error-field ramp-up resulted in a successful delay of the mode onset when PICRH > 1 MW, which demonstrates the existence of a power threshold to ``unlock'' the mode; in the presence of an error field the L-mode discharge can transition into H-mode only when PICRH > 2 MW and at high densities, also avoiding the density pump-out. The effects of ion heating observed on unlocking the core plasma may be due to ICRH-induced flows in the plasma boundary, or to modifications of plasma profiles that changed the underlying turbulence. This work was performed under US DoE contracts including DE-FC02-99ER54512 and others at MIT, DE-FG03-96ER-54373 at University of Texas at Austin, and DE-AC02-09CH11466 at PPPL.
The Effects of Turbulence on the Measurements of Five-Hole Probes
NASA Astrophysics Data System (ADS)
Diebold, Jeffrey Michael
The primary goals of this research were to quantify the effects of turbulence on the measurements of five-hole pressure probes (5HP) and to develop a model capable of predicting the response of a 5HP to turbulence. The five-hole pressure probe is a commonly used device in experimental fluid dynamics and aerodynamics. By measuring the pressure at the five pressure ports located on the tip of the probe it is possible to determine the total pressure, static pressure and the three components of velocity at a point in the flow. Previous research has demonstrated that the measurements of simple pressure probes such as Pitot probes are significantly influenced by the presence of turbulence. Turbulent velocity fluctuations contaminate the measurement of pressure due to the nonlinear relationship between pressure and velocity as well as the angular response characteristics of the probe. Despite our understanding of the effects of turbulence on Pitot and static pressure probes, relatively little is known about the influence of turbulence on five-hole probes. This study attempts to fill this gap in our knowledge by using advanced experimental techniques to quantify these turbulence-induced errors and by developing a novel method of predicting the response of a five-hole probe to turbulence. A few studies have attempted to quantify turbulence-induced errors in five-hole probe measurements but they were limited by their inability to accurately measure the total and static pressure in the turbulent flow. The current research utilizes a fast-response five-hole probe (FR5HP) in order to accurately quantify the effects of turbulence on different standard five-hole probes (Std5HP). The FR5HP is capable of measuring the instantaneous flowfield and unlike the Std5HP the FR5HP measurements are not contaminated by the turbulent velocity fluctuations. 
Measurements with the FR5HP and two different Std5HPs were acquired in the highly turbulent wakes of 2D and 3D cylinders in order to quantify the turbulence-induced errors in Std5HP measurements. The primary contribution of this work is the development and validation of a simulation method to predict the measurements of a Std5HP in an arbitrary turbulent flow. This simulation utilizes a statistical approach to estimating the pressure at each port on the tip of the probe. The angular response of the probe is modeled using experimental calibration data for each five-hole probe. The simulation method is validated against the experimental measurements of the Std5HPs and then used to study how the characteristics of the turbulent flowfield influence the measurements of the Std5HPs. It is shown that the total pressure measured by a Std5HP is increased by axial velocity fluctuations but decreased by the transverse fluctuations. The static pressure was shown to be very sensitive to the transverse fluctuations, while the axial fluctuations had a negligible effect. As with Pitot probes, the turbulence-induced errors in the Std5HP measurements were dependent on both the properties of the turbulent flow and the geometry of the probe tip. It is then demonstrated that this simulation method can be used to correct the measurements of a Std5HP in a turbulent flow if the characteristics of the turbulence are known. Finally, it is demonstrated that turbulence-induced errors in Std5HP measurements can have a substantial effect on the determination of the profile and vortex-induced drag from measurements in the wake of a 3D body. The results showed that, while the calculation of both drag components was influenced by turbulence-induced errors, the largest effect was on the determination of vortex-induced drag.
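The bias mechanism underlying these errors, time-averaging of a pressure that is nonlinear in velocity, is easiest to see for an ideal Pitot probe, where E[0.5*rho*u^2] = 0.5*rho*(U^2 + u'^2). The Monte Carlo sketch below illustrates that classical Pitot result with made-up flow numbers; it is not the five-hole-probe simulation developed in the thesis, which additionally models the angular response of each port.

```python
import random

def mean_pitot_dynamic_pressure(u_mean, u_rms, rho=1.2, n=100_000, seed=0):
    """Time-averaged dynamic pressure seen by an ideal Pitot probe in
    turbulence: average 0.5*rho*u^2 over Gaussian axial fluctuations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.gauss(u_mean, u_rms)
        total += 0.5 * rho * u * u
    return total / n

q_true = 0.5 * 1.2 * 10.0 ** 2                  # 60 Pa at U = 10 m/s
q_meas = mean_pitot_dynamic_pressure(10.0, 2.0)  # 20% turbulence intensity
# analytically E[0.5*rho*u^2] = 0.5*rho*(U^2 + u_rms^2) = 62.4 Pa
```

The probe reads high by 0.5*rho*u_rms^2; the thesis extends this kind of statistical averaging to all five ports and to transverse fluctuations, where the angular response makes the sign of the bias differ between total and static pressure.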
Temporally-stable active precision mount for large optics.
Reinlein, Claudia; Damm, Christoph; Lange, Nicolas; Kamm, Andreas; Mohaupt, Matthias; Brady, Aoife; Goy, Matthias; Leonhard, Nina; Eberhardt, Ramona; Zeitner, Uwe; Tünnermann, Andreas
2016-06-13
We present a temporally-stable active mount to compensate for manufacturing-induced deformations of reflective optical components. In this paper, we introduce the design of the active mount and its evaluation results for two sample mirrors: a quarter mirror of 115 × 105 × 9 mm3 and a full mirror of 228 × 210 × 9 mm3. The quarter mirror, with 20 actuators, shows a best wavefront error rms of 10 nm. Its installation-position-dependent deformations are addressed by long-term measurements over 14 weeks, which indicate no significant dependence on orientation. Size-induced differences of the mount are studied with a full mirror carrying 80 manual actuators arranged in the same actuator pattern as the quarter mirror. This sample shows a wavefront error rms of (27±2) nm over a measurement period of 46 days. We conclude that the developed mount is suitable to compensate for manufacturing-induced deformations of large reflective optics and likely to be included in the overall system alignment procedure.
Inducible error-prone repair in B. subtilis. Final report, September 1, 1979-June 30, 1981
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yasbin, R. E.
1981-06-01
The research performed under this contract has concentrated on the relationship between inducible DNA repair systems, mutagenesis and the competent state in the gram-positive bacterium Bacillus subtilis. The following results have been obtained from this research: (1) competent Bacillus subtilis cells have been developed into a sensitive tester system for carcinogens; (2) competent B. subtilis cells have an efficient excision-repair system; however, this system will not function on bacteriophage DNA taken into the cell via the process of transfection; (3) DNA polymerase III is essential to the process of W-reactivation; (4) B. subtilis strains cured of their defective prophages have been isolated and are now being developed for gene cloning systems; (5) protoplasts of B. subtilis have been shown capable of acquiring DNA repair enzymes (i.e., enzyme therapy); and (6) a plasmid was characterized which enhanced inducible error-prone repair in a gram-positive organism.
Experimental investigation of observation error in anuran call surveys
McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.
2010-01-01
Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate for false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
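The positive bias mentioned above has a simple closed form for the naive estimator (a site is scored occupied if it yields at least one detection in k surveys): empty sites can still produce detections through false positives. The sketch below uses illustrative parameter values, not the study's estimates.

```python
def naive_occupancy(psi, p, f, k):
    """Expected fraction of sites with >= 1 detection in k surveys,
    given true occupancy psi, per-survey detection probability p and
    per-survey false-positive probability f."""
    detected_if_occupied = 1 - (1 - p) ** k
    detected_if_empty = 1 - (1 - f) ** k      # false positives only
    return psi * detected_if_occupied + (1 - psi) * detected_if_empty

truth = 0.4                                   # true occupancy (made up)
# without false positives the naive estimate is biased low (missed sites);
# a modest per-survey false-positive probability flips the bias upward
clean = naive_occupancy(truth, 0.5, 0.00, 5)
noisy = naive_occupancy(truth, 0.5, 0.05, 5)
```

With p = 0.5, f = 0.05 and k = 5, the naive rate rises from 0.3875 (below truth) to about 0.52 (well above truth), which is the direction of bias the abstract warns about for auditory surveys.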
Recent Earthquakes Mark the Onset of Induced Seismicity in Northeastern Pennsylvania
NASA Astrophysics Data System (ADS)
Martone, P.; Nikulin, A.; Pietras, J.
2017-12-01
The link between induced seismicity and injection of hydraulic fracturing wastewater has largely been accepted and corroborated through case studies in Colorado, Arkansas, Texas, and Oklahoma. To date, induced seismicity has largely impacted hydrocarbon-producing regions in the Central United States, while the seismic response in Eastern states, like Pennsylvania, has been relatively muted. In recent years, Pennsylvania exponentially increased hydrocarbon production from the Marcellus and Utica Shales and our results indicate that this activity has triggered an onset of induced seismicity in areas of the state where no previous seismic activity was reported. Three recent earthquakes in Northeastern Pennsylvania directly correlate to hydraulic fracturing activity, though USGS NEIC earthquake catalog locations have vertical errors up to 31km. We present signal analysis results of recorded waveforms of the three identified events and results of a high-precision relocation effort and improvements to the regional velocity model aimed at constraining the horizontal and vertical error in hypocenter position. We show that at least one event is positioned directly along the wellbore track of an active well and correlate its timing to the hydraulic fracturing schedule. Results show that in the absence of wastewater disposal in this area, it is possible to confidently make the connection between the hydraulic fracturing process and induced seismicity.
DNA-damage response during mitosis induces whole-chromosome missegregation.
Bakhoum, Samuel F; Kabeche, Lilian; Murnane, John P; Zaki, Bassem I; Compton, Duane A
2014-11-01
Many cancers display both structural (s-CIN) and numerical (w-CIN) chromosomal instabilities. Defective chromosome segregation during mitosis has been shown to cause DNA damage that induces structural rearrangements of chromosomes (s-CIN). In contrast, whether DNA damage can disrupt mitotic processes to generate whole chromosomal instability (w-CIN) is unknown. Here, we show that activation of the DNA-damage response (DDR) during mitosis selectively stabilizes kinetochore-microtubule (k-MT) attachments to chromosomes through Aurora-A and PLK1 kinases, thereby increasing the frequency of lagging chromosomes during anaphase. Inhibition of DDR proteins, ATM or CHK2, abolishes the effect of DNA damage on k-MTs and chromosome segregation, whereas activation of the DDR in the absence of DNA damage is sufficient to induce chromosome segregation errors. Finally, inhibiting the DDR during mitosis in cancer cells with persistent DNA damage suppresses inherent chromosome segregation defects. Thus, the DDR during mitosis inappropriately stabilizes k-MTs, creating a link between s-CIN and w-CIN. The genome-protective role of the DDR depends on its ability to delay cell division until damaged DNA can be fully repaired. Here, we show that when DNA damage is induced during mitosis, the DDR unexpectedly induces errors in the segregation of entire chromosomes, thus linking structural and numerical chromosomal instabilities. ©2014 American Association for Cancer Research.
Optical Testing of Retroreflectors for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Ohl, Raymond G.; Frey, Bradley J.; Stock, Joseph M.; McMann, Joseph C.; Zukowiski, Tmitri J.
2010-01-01
A laser tracker (LT) is an important coordinate metrology tool that uses laser interferometry to determine precise distances to objects, points, or surfaces defined by an optical reference, such as a retroreflector. A retroreflector is a precision optic consisting of three orthogonal faces that returns an incident laser beam nearly exactly parallel to the incident beam. Commercial retroreflectors are designed for operation at room temperature and are specified by the divergence, or beam deviation, of the returning laser beam, usually a few arcseconds or less. When a retroreflector goes to extreme cold (~35 K), however, it could be anticipated that the precision alignment between the three faces and the surface figure of each face would be compromised, resulting in wavefront errors and beam divergence, degrading the accuracy of the LT position determination. Controlled tests must be done beforehand to determine survivability and these LT coordinate errors. Since conventional interferometer systems and laser trackers do not operate in vacuum or at cold temperatures, measurements must be done through a vacuum window, and care must be taken to ensure window-induced errors are negligible, or can be subtracted out. Retroreflector holders must be carefully designed to minimize thermally induced stresses. Changes in the path length and refractive index of the retroreflector have to be considered. Cryogenic vacuum testing was done on commercial solid glass retroreflectors for use on cryogenic metrology tasks. The capabilities to measure wavefront errors, measure beam deviations, and acquire laser tracker coordinate data were demonstrated. Measurable but relatively small increases in beam deviation were shown, and further tests are planned to make an accurate determination of coordinate errors.
Norman, Geoffrey R; Monteiro, Sandra D; Sherbino, Jonathan; Ilgen, Jonathan S; Schmidt, Henk G; Mamede, Silvia
2017-01-01
Contemporary theories of clinical reasoning espouse a dual processing model, which consists of a rapid, intuitive component (Type 1) and a slower, logical and analytical component (Type 2). Although the general consensus is that this dual processing model is a valid representation of clinical reasoning, the causes of diagnostic errors remain unclear. Cognitive theories about human memory propose that such errors may arise from both Type 1 and Type 2 reasoning. Errors in Type 1 reasoning may be a consequence of the associative nature of memory, which can lead to cognitive biases. However, the literature indicates that, with increasing expertise (and knowledge), the likelihood of errors decreases. Errors in Type 2 reasoning may result from the limited capacity of working memory, which constrains computational processes. In this article, the authors review the medical literature to answer two substantial questions that arise from this work: (1) To what extent do diagnostic errors originate in Type 1 (intuitive) processes versus in Type 2 (analytical) processes? (2) To what extent are errors a consequence of cognitive biases versus a consequence of knowledge deficits? The literature suggests that both Type 1 and Type 2 processes contribute to errors. Although it is possible to experimentally induce cognitive biases, particularly availability bias, the extent to which these biases actually contribute to diagnostic errors is not well established. Educational strategies directed at the recognition of biases are ineffective in reducing errors; conversely, strategies focused on the reorganization of knowledge to reduce errors have small but consistent benefits.
Topographical gradients of semantics and phonology revealed by temporal lobe stimulation.
Miozzo, Michele; Williams, Alicia C; McKhann, Guy M; Hamberger, Marla J
2017-02-01
Word retrieval is a fundamental component of oral communication, and it is well established that this function is supported by left temporal cortex. Nevertheless, the specific temporal areas mediating word retrieval and the particular linguistic processes these regions support have not been well delineated. Toward this end, we analyzed over 1000 naming errors induced by left temporal cortical stimulation in epilepsy surgery patients. Errors were primarily semantic (lemon → "pear"), phonological (horn → "corn"), non-responses, and delayed responses (correct responses after a delay), and each error type appeared predominantly in a specific region: semantic errors in mid-middle temporal gyrus (TG), phonological errors and delayed responses in middle and posterior superior TG, and non-responses in anterior inferior TG. To the extent that semantic errors, phonological errors and delayed responses reflect disruptions in different processes, our results imply topographical specialization of semantic and phonological processing. Specifically, results revealed an inferior-to-superior gradient, with more superior regions associated with phonological processing. Further, errors were increasingly semantically related to targets toward posterior temporal cortex. We speculate that detailed semantic input is needed to support phonological retrieval, and thus, the specificity of semantic input increases progressively toward posterior temporal regions implicated in phonological processing. Hum Brain Mapp 38:688-703, 2017. © 2016 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Thibodeaux, J. J.
1977-01-01
The results of a simulation study performed to determine the effects of gyro verticality error on lateral autoland tracking and landing performance are presented. A first order vertical gyro error model was used to generate the measurement of the roll attitude feedback signal normally supplied by an inertial navigation system. The lateral autoland law used was an inertially smoothed control design. The effect of initial angular gyro tilt errors (2 deg, 3 deg, 4 deg, and 5 deg), introduced prior to localizer capture, were investigated by use of a small perturbation aircraft simulation. These errors represent the deviations which could occur in the conventional attitude sensor as a result of the maneuver-induced spin-axis misalignment and drift. Results showed that for a 1.05 deg per minute erection rate and a 5 deg initial tilt error, ON COURSE autoland control logic was not satisfied. Failure to attain the ON COURSE mode precluded high control loop gains and localizer beam path integration and resulted in unacceptable beam standoff at touchdown.
A preliminary estimate of geoid-induced variations in repeat orbit satellite altimeter observations
NASA Technical Reports Server (NTRS)
Brenner, Anita C.; Beckley, B. D.; Koblinsky, C. J.
1990-01-01
Altimeter satellites are often maintained in a repeating orbit to facilitate the separation of sea-height variations from the geoid. However, atmospheric drag and solar radiation pressure cause a satellite orbit to drift. For Geosat this drift causes the ground track to vary by ±1 km about the nominal repeat path. This misalignment leads to an error in the estimates of sea surface height variations because of the local slope in the geoid. This error has been estimated globally for the Geosat Exact Repeat Mission using a mean sea surface constructed from Geos 3 and Seasat altimeter data. Over most of the ocean the geoid gradient is small, and the repeat-track misalignment leads to errors of only 1 to 2 cm. However, in the vicinity of trenches, continental shelves, islands, and seamounts, errors can exceed 20 cm. The estimated error is compared with direct estimates from Geosat altimetry, and a strong correlation is found in the vicinity of the Tonga and Aleutian trenches. This correlation increases as the orbit error is reduced because of the increased signal-to-noise ratio.
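The magnitude of the repeat-track errors described above follows from a simple product of the cross-track offset and the local geoid slope. A minimal sketch (the function name and the slope values are illustrative, not taken from the paper):

```python
def repeat_track_error_cm(offset_km: float, geoid_slope_cm_per_km: float) -> float:
    """Approximate sea-surface-height error (cm) induced by a cross-track
    offset (km) over a local geoid slope (cm/km)."""
    return offset_km * geoid_slope_cm_per_km

# Open-ocean geoid gradients of ~1-2 cm/km with a 1 km offset give
# errors of only a couple of centimetres:
print(repeat_track_error_cm(1.0, 2.0))   # -> 2.0
# Near a trench, gradients can be an order of magnitude larger:
print(repeat_track_error_cm(1.0, 25.0))  # -> 25.0
```

This linearization is only a first-order picture; it breaks down where the geoid changes slope within the ±1 km excursion.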
NASA Technical Reports Server (NTRS)
Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.
2012-01-01
This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error, and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected, and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.
2016-01-01
Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
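The reference sample compensation approach recommended in the abstract above amounts to subtracting the apparent (thermally induced) displacement measured on a nominally stationary reference sample from the test-sample measurement. A minimal sketch, with illustrative array names and values:

```python
import numpy as np

def compensate_thermal_error(u_test, u_reference):
    """Subtract the apparent displacement of a stationary reference sample
    (capturing camera/CT thermal drift) from the measured test-sample
    displacement field."""
    return np.asarray(u_test) - np.asarray(u_reference)

u_test = np.array([5.03, 5.08, 5.12])  # measured displacements (um), drift included
u_ref = np.array([0.03, 0.08, 0.12])   # apparent motion of the fixed reference
print(compensate_thermal_error(u_test, u_ref))  # ~[5. 5. 5.]
```

The sketch assumes the reference sample sees the same imaging-geometry drift as the test sample, which is the premise of in-situ compensation.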
NASA Astrophysics Data System (ADS)
Li, Li; Li, Zhengqiang; Li, Kaitao; Sun, Bin; Wu, Yanke; Xu, Hua; Xie, Yisong; Goloub, Philippe; Wendisch, Manfred
2018-04-01
In this study, errors of the relative orientations of polarizers in the Cimel polarized sun-sky radiometers are measured and introduced into the Mueller matrix of the instrument. The linearly polarized light with different polarization directions from 0° to 180° (or 360°) is generated by using a rotating linear polarizer in front of an integrating sphere. Through measuring the referential linearly polarized light, the errors of the relative orientations of the polarizers are determined. The efficiencies of the polarizers are obtained simultaneously. By taking the error of relative orientation into consideration in the Mueller matrix, the accuracies of the calculated Stokes parameters, the degree of linear polarization, and the angle of polarization are remarkably improved. The method may also apply to other polarization instruments of similar types.
Lateral charge transport from heavy-ion tracks in integrated circuit chips
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Schwartz, H. R.; Nevill, L. R.
1988-01-01
A 256K DRAM has been used to study the lateral transport of charge (electron-hole pairs) induced by direct ionization from heavy-ion tracks in an IC. The qualitative charge transport has been simulated using a two-dimensional numerical code in cylindrical coordinates. The experimental bit-map data clearly show the manifestation of lateral charge transport in the creation of adjacent multiple-bit errors from a single heavy-ion track. The heavy-ion data further demonstrate the occurrence of multiple-bit errors from single ion tracks with sufficient stopping power. The qualitative numerical simulation results suggest that electric-field-funnel-aided (drift) collection accounts for the single error generated by an ion passing through a charge-collecting junction, while multiple errors from a single ion track are due to lateral diffusion of ion-generated charge.
On the use of unshielded cables in ionization chamber dosimetry for total-skin electron therapy.
Chen, Z; Agostinelli, A; Nath, R
1998-03-01
The dosimetry of total-skin electron therapy (TSET) usually requires ionization chamber measurements in a large electron beam (up to 120 cm x 200 cm). Exposing the chamber's electric cable, its connector and part of the extension cable to the large electron beam will introduce unwanted electronic signals that may lead to inaccurate dosimetry results. While the best strategy to minimize the cable-induced electronic signal is to shield the cable and its connector from the primary electrons, as has been recommended by the AAPM Task Group Report 23 on TSET, cables without additional shielding are often used in TSET dosimetry measurements for logistic reasons, for example when an automatic scanning dosimetry system is used. This paper systematically investigates the consequences and the acceptability of using an unshielded cable in ionization chamber dosimetry in a large TSET electron beam. In this paper, we separate cable-induced signals into two types. The type-I signal includes all charges induced which do not change sign upon switching the chamber polarity, and type II includes all those that do. The type-I signal is easily cancelled by the polarity averaging method. The type-II cable-induced signal is independent of the depth of the chamber in a phantom and its magnitude relative to the true signal determines the acceptability of a cable for use under unshielded conditions. Three different cables were evaluated in two different TSET beams in this investigation. For dosimetry near the depth of maximum buildup, the cable-induced dosimetry error was found to be less than 0.2% when the two-polarity averaging technique was applied. At greater depths, the relative dosimetry error was found to increase at a rate approximately equal to the inverse of the electron depth dose.
Since the application of the two-polarity averaging technique requires a constant-irradiation condition, it was demonstrated that an additional error of up to 4% could be introduced if the unshielded cable's spatial configuration were altered during the two-polarity measurements. This suggests that automatic scanning systems with unshielded cables should not be used in TSET ionization chamber dosimetry. However, the data did show that an unshielded cable may be used in TSET ionization chamber dosimetry if the size of the cable-induced error in a given TSET beam is pre-evaluated and the measurement is carefully conducted. When such an evaluation has not been performed, additional shielding should be applied to the cable being used, making measurements at multiple points difficult.
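The two-polarity averaging described in the abstract above can be sketched numerically: a type-I cable signal keeps its sign when the chamber polarity is switched, so it cancels in the average, while a type-II signal flips sign along with the true ionization signal and therefore survives. The variable names and magnitudes below are illustrative assumptions:

```python
def polarity_average(m_plus: float, m_minus: float) -> float:
    """Recover the signed signal from readings taken at + and - polarity."""
    return (m_plus - m_minus) / 2.0

true_signal = 10.0
type1_cable = 0.5    # polarity-independent contamination (cancels)
type2_cable = 0.02   # polarity-dependent contamination (survives)

m_plus = true_signal + type1_cable + type2_cable
m_minus = -true_signal + type1_cable - type2_cable

recovered = polarity_average(m_plus, m_minus)
print(recovered)  # -> 10.02: true signal plus only the type-II residual
```

This is why the paper treats the type-II magnitude relative to the true signal as the acceptability criterion for an unshielded cable.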
Somatic immunoglobulin hypermutation
Diaz, Marilyn; Casali, Paolo
2015-01-01
Immunoglobulin hypermutation provides the structural correlate for the affinity maturation of the antibody response. Characteristic modalities of this mechanism include a preponderance of point-mutations with prevalence of transitions over transversions, and the mutational hotspot RGYW sequence. Recent evidence suggests a mechanism whereby DNA-breaks induce error-prone DNA synthesis in immunoglobulin V(D)J regions by error-prone DNA polymerases. The nature of the targeting mechanism and the trans-factors effecting such breaks and their repair remain to be determined. PMID:11869898
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inaba, Kensuke; Tamaki, Kiyoshi; Igeta, Kazuhiro
2014-12-04
In this study, we propose a method for generating cluster states of atoms in an optical lattice. By utilizing the quantum properties of Wannier orbitals, we create a tunable Ising interaction between atoms without inducing spin-exchange interactions. We investigate the causes of errors that occur during entanglement generation, and then we propose an error-management scheme, which allows us to create high-fidelity cluster states in a short time.
NASA Astrophysics Data System (ADS)
Shabanov, S. V.; Gornushkin, I. B.
2018-01-01
Data processing in calibration-free laser-induced breakdown spectroscopy (LIBS) is usually based on the solution of the radiative transfer equation along a particular line of sight through a plasma plume. The LIBS data processing is generalized to the case when the spectral data are collected from large portions of the plume. It is shown that by adjusting the optical depth and width of the lines, the spectra obtained by collecting light from an entire spherical homogeneous plasma plume can be least-square fitted to a spectrum obtained by collecting the radiation just along a plume diameter with a relative error of 10^-11 or smaller (for the optical depth not exceeding 0.3), so that a mismatch of geometries of data processing and data collection cannot be detected by fitting. Despite the existence of such a perfect least-square fit, the errors in the line optical depth and width found by a data processing with an inappropriate geometry can be large. It is shown with analytic and numerical examples that the corresponding relative errors in the found elemental number densities and concentrations may be as high as 50% and 20%, respectively. Save for a few exceptions, these errors are impossible to eliminate from LIBS data processing unless a proper solution of the radiative transfer equation corresponding to the ray tracing in the spectral data collection is used.
NASA Technical Reports Server (NTRS)
Gubarev, Mikhail V.; Kilaru, Kirenmayee; Ramsey, Brian D.
2009-01-01
We are investigating differential deposition as a way of correcting small figure errors inside full-shell grazing-incidence x-ray optics. The optics in our study are fabricated using the electroformed-nickel-replication technique, and the figure errors arise from fabrication errors in the mandrel, from which the shells are replicated, as well as errors induced during the electroforming process. Combined, these give sub-micron-scale figure deviations which limit the angular resolution of the optics to approx. 10 arcsec. Sub-micron figure errors can be corrected by selectively depositing (physical vapor deposition) material inside the shell. The requirements for this filler material are that it must not degrade the ultra-smooth surface finish necessary for efficient x-ray reflection (approx. 5 Å rms), and must not be highly stressed. In addition, a technique must be found to produce well controlled and defined beams within highly constrained geometries, as some of our mirror shells are less than 3 cm in diameter.
NASA Astrophysics Data System (ADS)
Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji
2017-03-01
An effective correction model is proposed to eliminate the refraction error effect caused by an optical window of a furnace in digital image correlation (DIC) deformation measurement under high-temperature environment. First, a theoretical correction model with the corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenite stainless steel specimen is performed to verify the effectiveness of the correction model by the correlation calculation results under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results with refraction influence are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of DIC method by decreasing the refraction errors from measured full-field displacements under high-temperature environment.
NASA Astrophysics Data System (ADS)
Tedd, B. L.; Strangeways, H. J.; Jones, T. B.
1985-11-01
Systematic ionospheric tilts (SITs) at midlatitudes and the diurnal variation of bearing error for different transmission paths are examined. An explanation of the diurnal variation of bearing error based on the dependence of ionospheric tilt on solar zenith angle and plasma transport processes is presented. The effect of vertical ion drift and the momentum transfer of neutral winds is investigated. During the daytime the transmission heights are low and photochemical processes control SITs; at night, however, transmissions occur at greater heights, and spatial and temporal variations of plasma transport processes influence SITs. An HF ray-tracing technique which uses a three-dimensional ionospheric model based on predictions to simulate SIT-induced bearing errors is described; poor correlation with experimental data is observed and the causes for this are studied. A second model, based on measured vertical-sounder data, is proposed. Model two is applicable for predicting bearing error for a range of transmission paths and correlates well with experimental data.
On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A
NASA Astrophysics Data System (ADS)
Blake, J. B.; Mandel, R.
1987-02-01
The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module, used in the subsystem containing the RAMs, consists of three printed circuit cards, with each card containing eight 2K byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single bit and some double bit errors which occurred within a page of memory. A memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error or to determine the exact location of the bit error within the page, the entire contents of the memory are dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question are verified in order to determine if the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray induced latchup.
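The two-level detection scheme described above (a per-page check sum to flag an upset, then a full dump compared against the load file to locate the flipped bit) can be sketched as follows; the check-sum definition used here (byte sum mod 256) is an assumption, not taken from the paper:

```python
PAGE_SIZE = 256

def page_checksum(page) -> int:
    """Simple per-page check sum: sum of bytes modulo 256 (assumed form)."""
    return sum(page) % 256

def locate_bit_errors(memory: bytes, load_file: bytes):
    """Compare a memory dump against the load file; yield (byte_index, xor_mask)."""
    for i, (a, b) in enumerate(zip(memory, load_file)):
        if a != b:
            yield i, a ^ b

load_file = bytes(range(PAGE_SIZE))
memory = bytearray(load_file)
memory[42] ^= 0x08  # simulate a single-event upset flipping one bit

# The check-sum test flags that the page changed...
assert page_checksum(memory) != page_checksum(load_file)
# ...and the dump comparison pinpoints the flipped bit.
print(list(locate_bit_errors(bytes(memory), load_file)))  # -> [(42, 8)]
```

As the abstract notes, a simple sum detects all single-bit errors but only some double-bit errors, since two flips can cancel in the sum.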
ERIC Educational Resources Information Center
Spiro, Rand J.; And Others
This report argues that there exists a pervasive tendency for analogies to contribute to the development of entrenched misconceptions in the form of reducing complex new knowledge to the core of a source analogy. The report presents a taxonomy of ways that simple analogy induces conceptual error and an alternative approach involving integrated…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartley, R.; Kartz, M.; Behrendt, W.
1996-10-01
The laser wavefront of the NIF Beamlet demonstration system is corrected for static aberrations with a wavefront control system. The system operates closed loop with a probe beam prior to a shot and has a loop bandwidth of about 3 Hz. However, until recently the wavefront control system was disabled several minutes prior to the shot to allow time to manually reconfigure its attenuators and probe beam insertion mechanism to shot mode. Thermally-induced dynamic variations in gas density in the Beamlet main beam line produce significant wavefront error. After about 5-8 seconds, the wavefront error has increased to a new, higher level due to turbulence-induced aberrations no longer being corrected. This implies that there is a turbulence-induced aberration noise bandwidth of less than one Hertz, and that the wavefront controller could correct for the majority of turbulence-induced aberration (about one-third wave) by automating its reconfiguration to occur within one second of the shot. This modification was recently implemented on Beamlet; we call this modification the t0-1 system.
Effect of wafer geometry on lithography chucking processes
NASA Astrophysics Data System (ADS)
Turner, Kevin T.; Sinha, Jaydeep K.
2015-03-01
Wafer flatness during exposure in lithography tools is critical and is becoming more important as feature sizes in devices shrink. While chucks are used to support and flatten the wafer during exposure, it is essential that wafer geometry be controlled as well. Thickness variations of the wafer and high-frequency wafer shape components can lead to poor flatness of the chucked wafer and ultimately patterning problems, such as defocus errors. The objective of this work is to understand how process-induced wafer geometry, resulting from deposited films with non-uniform stress, can lead to high-frequency wafer shape variations that prevent complete chucking in lithography scanners. In this paper, we discuss both the acceptable limits of wafer shape that permit complete chucking to be achieved, and how non-uniform residual stresses in films, either due to patterning or process non-uniformity, can induce high spatial frequency wafer shape components that prevent chucking. This paper describes mechanics models that relate non-uniform film stress to wafer shape and presents results for two example cases. The models and results can be used as a basis for establishing control strategies for managing process-induced wafer geometry in order to avoid wafer flatness-induced errors in lithography processes.
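The link between film stress and wafer shape that the mechanics models above build on can be illustrated, for the uniform-stress case, with Stoney's formula; the material constants below are typical silicon values chosen for illustration, not taken from the paper:

```python
def stoney_curvature(stress_pa: float, t_film_m: float, t_sub_m: float,
                     e_sub_pa: float = 130e9, nu_sub: float = 0.28) -> float:
    """Wafer curvature (1/m) induced by a uniform film stress, via Stoney's
    formula: kappa = 6*sigma_f*t_f*(1-nu_s) / (E_s*t_s^2)."""
    return 6.0 * stress_pa * t_film_m * (1.0 - nu_sub) / (e_sub_pa * t_sub_m**2)

kappa = stoney_curvature(stress_pa=200e6,  # 200 MPa tensile film
                         t_film_m=500e-9,  # 500 nm film
                         t_sub_m=775e-6)   # 775 um (300 mm-class) wafer
print(1.0 / kappa)  # radius of curvature in metres (~180 m here)
```

Stoney's formula covers only the low-spatial-frequency limit of uniform stress; the high-frequency shape components that prevent chucking arise when the stress varies across the wafer, which is the regime the paper's models address.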
NASA Technical Reports Server (NTRS)
Balla, R. Jeffrey; Miller, Corey A.
2008-01-01
This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an autoregressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced by approx. 0.4% and frequency errors are reduced by a factor of approx. 20 at 303 K to approx. 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.
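An autoregressive frequency estimate of the kind favored above can be sketched for a noiseless damped sinusoid, which satisfies an exact AR(2) recursion whose characteristic roots encode the frequency. The sampling rate, frequency, and damping values below are illustrative assumptions:

```python
import numpy as np

def ar2_frequency(x, dt):
    """Estimate the frequency (Hz) of a damped sinusoid via an AR(2) fit."""
    # Least-squares fit of x[n] = a1*x[n-1] + a2*x[n-2]
    A = np.column_stack([x[1:-1], x[:-2]])
    a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]
    # Characteristic roots exp(-d*dt +/- i*w*dt) carry the angular frequency.
    root = np.roots([1.0, -a1, -a2])[0]
    return abs(np.angle(root)) / (2.0 * np.pi * dt)

dt = 1e-8                        # assumed 100 MS/s sampling
f_true, damping = 2.0e6, 1.0e5   # 2 MHz damped sinusoid (illustrative)
t = np.arange(200) * dt
x = np.exp(-damping * t) * np.cos(2.0 * np.pi * f_true * t)
print(ar2_frequency(x, dt))      # ~2e6 Hz
```

With noise, higher AR orders or Prony-type fits trade bias against variance; this sketch only shows the noiseless mechanism.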
Krawczyk, María C; Fernández, Rodrigo S; Pedreira, María E; Boccia, Mariano M
2017-07-01
Experimental psychology defines Prediction Error (PE) as a mismatch between expected and current events. It represents a unifying concept within the memory field, as it is the driving force of memory acquisition and updating. Prediction error induces updating of consolidated memories, in strength or content, through memory reconsolidation. This process has two distinct neurobiological phases, involving the destabilization (labilization) of a consolidated memory followed by its restabilization. The aim of this work is to emphasize the functional role of PE in the neurobiology of learning and memory, integrating and discussing different research areas: behavioral, neurobiological, computational and clinical psychiatry. Copyright © 2016 Elsevier Inc. All rights reserved.
Quantifying Carbon Flux Estimation Errors
NASA Astrophysics Data System (ADS)
Wesloh, D.
2017-12-01
Atmospheric Bayesian inversions have been used to estimate surface carbon dioxide (CO2) fluxes from global to sub-continental scales using atmospheric mixing ratio measurements. These inversions use an atmospheric transport model, coupled to a set of fluxes, to simulate mixing ratios that can then be compared to the observations. The comparison is then used to update the fluxes to better match the observations in a manner consistent with the uncertainties prescribed for each. However, inversion studies disagree with each other at continental scales, prompting further investigations into the causes of these differences. Inter-comparison studies have shown that the errors resulting from atmospheric transport inaccuracies are comparable to those from errors in the prior fluxes. However, less effort has gone into studying the origins of the errors induced by the transport than of those induced by the prior distribution. This study uses a mesoscale transport model to evaluate the effects of representation errors in the observations and of incorrect descriptions of the transport. To obtain realizations of these errors, we performed Observing System Simulation Experiments (OSSEs), with the transport model used for the inversion operating at two resolutions, one typical of a global inversion and the other of a mesoscale inversion, and with various prior flux distributions. Transport error covariances are inferred from an ensemble of perturbed mesoscale simulations, while flux error covariances are computed using prescribed distributions and magnitudes. We examine how these errors can be diagnosed in the inversion process using aircraft, ground-based, and satellite observations of meteorological variables and CO2.
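The update step of such a Bayesian inversion can be sketched with a toy linear-Gaussian example. All matrices below are invented for illustration; a real inversion obtains H from a transport model and constructs the covariances as the abstract describes.

```python
import numpy as np

# Toy Bayesian flux inversion: update prior fluxes x_prior using
# mixing-ratio observations y, a linear "transport" operator H, and
# prescribed prior (B) and observation (R) error covariances.

def bayesian_inversion(x_prior, B, H, y, R):
    """Return the posterior flux estimate and its error covariance."""
    # Kalman-type gain: K = B H^T (H B H^T + R)^-1
    S = H @ B @ H.T + R
    K = B @ H.T @ np.linalg.inv(S)
    x_post = x_prior + K @ (y - H @ x_prior)
    B_post = (np.eye(len(x_prior)) - K @ H) @ B
    return x_post, B_post

# Two surface fluxes observed through three mixing-ratio measurements.
x_prior = np.array([1.0, 2.0])
B = np.diag([0.5, 0.5])                              # prior flux error covariance
H = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])   # toy transport operator
y = np.array([1.4, 1.8, 1.7])                        # observed mixing ratios
R = 0.1 * np.eye(3)                                  # observation error covariance

x_post, B_post = bayesian_inversion(x_prior, B, H, y, R)
# Assimilating observations shrinks the posterior flux uncertainty.
assert np.all(np.diag(B_post) < np.diag(B))
```

Misspecified transport (a wrong H) or an understated R biases x_post, which is the error source the study isolates with its OSSEs.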
Liu, Yan; Wang, Yuexin; Lv, Huibin; Jiang, Xiaodan; Zhang, Mingzhou
2017-01-01
Purpose To investigate the efficacy of the α-adrenergic agonist brimonidine, either alone or combined with pirenzepine, for inhibiting progressing myopia in guinea pig lens-induced myopia models. Methods Thirty-six guinea pigs were randomly divided into six groups: Group A received 2% pirenzepine, Group B received 0.2% brimonidine, Group C received 0.1% brimonidine, Group D received 2% pirenzepine + 0.2% brimonidine, Group E received 2% pirenzepine + 0.1% brimonidine, and Group F received the medium. Myopia was induced in the right eyes of all guinea pigs using polymethyl methacrylate (PMMA) lenses for 3 weeks. Eye drops were administered accordingly. Intraocular pressure was measured every day. Refractive error and axial length measurements were performed once a week. The eyeballs were enucleated for hematoxylin and eosin (H&E) and Van Gieson (VG) staining at the end of the study. Results The lens-induced myopia model was established after 3 weeks. Treatment with 0.1% brimonidine alone and 0.2% brimonidine alone was capable of inhibiting progressing myopia, as shown by the better refractive error (p=0.024; p=0.006) and shorter axial length (p=0.005; p=0.0017). Treatment with 0.1% brimonidine and 0.2% brimonidine combined with 2% pirenzepine was also effective in suppressing progressing refractive error (p=0.016; p=0.0006) and axial length (p=0.017; p=0.0004). The thickness of the sclera was kept stable in all groups except group F; the sclera was much thinner in the lens-induced myopia eyes compared to the control eyes. Conclusions Treatment with 0.1% brimonidine alone and 0.2% brimonidine alone, as well as combined with 2% pirenzepine, was effective in inhibiting progressing myopia. The result indicates that intraocular pressure elevation is possibly a promising mechanism and potential treatment for progressing myopia. PMID:29204068
Nikolaitchik, Olga A.; Burdick, Ryan C.; Gorelick, Robert J.; Keele, Brandon F.; Hu, Wei-Shau; Pathak, Vinay K.
2016-01-01
Although the predominant effect of host restriction APOBEC3 proteins on HIV-1 infection is to block viral replication, they might inadvertently increase retroviral genetic variation by inducing G-to-A hypermutation. Numerous studies have disagreed on the contribution of hypermutation to viral genetic diversity and evolution. Confounding factors contributing to the debate include the extent of lethal (stop codon) and sublethal hypermutation induced by different APOBEC3 proteins, the inability to distinguish between G-to-A mutations induced by APOBEC3 proteins and error-prone viral replication, the potential impact of hypermutation on the frequency of retroviral recombination, and the extent to which viral recombination occurs in vivo, which can reassort mutations in hypermutated genomes. Here, we determined the effects of hypermutation on the HIV-1 recombination rate and its contribution to genetic variation through recombination to generate progeny genomes containing portions of hypermutated genomes without lethal mutations. We found that hypermutation did not significantly affect the rate of recombination, and recombination between hypermutated and wild-type genomes only increased the viral mutation rate by 3.9 × 10−5 mutations/bp/replication cycle in heterozygous virions, which is similar to the HIV-1 mutation rate. Since copackaging of hypermutated and wild-type genomes occurs very rarely in vivo, recombination between hypermutated and wild-type genomes does not significantly contribute to the genetic variation of replicating HIV-1. We also analyzed previously reported hypermutated sequences from infected patients and determined that the frequency of sublethal mutagenesis for A3G and A3F is negligible (4 × 10−21 and 1 × 10−11, respectively) and its contribution to viral mutations is far below mutations generated during error-prone reverse transcription.
Taken together, we conclude that the contribution of APOBEC3-induced hypermutation to HIV-1 genetic variation is substantially lower than that from mutations during error-prone replication. PMID:27186986
Signal-Induced Noise Effects in a Photon Counting System for Stratospheric Ozone Measurement
NASA Technical Reports Server (NTRS)
Harper, David B.; DeYoung, Russell J.
1998-01-01
A significant source of error in making atmospheric differential absorption lidar ozone measurements is the saturation of the photomultiplier tube by the strong, near field light return. Some time after the near field light signal is gone, the photomultiplier tube gate is opened and a noise signal, called signal-induced noise, is observed. Research reported here gives experimental results from measurement of photomultiplier signal-induced noise. Results show that signal-induced noise has several decaying exponential signals, suggesting that electrons are slowly emitted from different surfaces internal to the photomultiplier tube.
Shireen, Erum; Bint-E-Ali, Wafa; Shafaq, Sania; Majeed, Azka; Fatima, Rija; Masroor, Maria; Haleem, Darakshan J
2015-12-01
This article has been removed: please see Elsevier Policy on Article Withdrawal (https://www.elsevier.com/about/our-business/policies/article-withdrawal) This meeting abstract has been removed by the Publisher. Due to an administrative error, abstracts that were not presented at the ISDN 2014 meeting were inadvertently published in the meeting's abstract supplement. The Publisher apologizes to the authors and readers for this error. Copyright © 2015. Published by Elsevier Ltd.
How Alterations in the Cdt1 Expression Lead to Gene Amplification in Breast Cancer
2011-07-01
absence of extrinsic DNA damage. We measured the TLS activity by measuring the mutation frequency in a supF gene (in a shuttle vector) subjected to UV...induced DNA damage before its introduction into the cells. Error-prone TLS activity will mutate the supF gene, which is scored by a blue-white colony...Figure 4A). Sequencing of the mutant supF genes revealed a mutation spectrum consistent with error-prone TLS (Supplemental Table 1). Significantly
NASA Technical Reports Server (NTRS)
Zwally, H. Jay; Brenner, Anita C.; Major, Judith A.; Martin, Thomas V.; Bindschadler, Robert A.
1990-01-01
The data-processing methods and ice data products derived from Seasat radar altimeter measurements over the Greenland ice sheet and surrounding sea ice are documented. The corrections derived and applied to the Seasat radar altimeter data over ice are described in detail, including the editing and retracking algorithm to correct for height errors caused by lags in the automatic range tracking circuit. The methods for radial adjustment of the orbits and estimation of the slope-induced errors are given.
High storage capacity in the Hopfield model with auto-interactions—stability analysis
NASA Astrophysics Data System (ADS)
Rocchi, Jacopo; Saad, David; Tantari, Daniele
2017-11-01
Recent studies point to the potential storage of a large number of patterns in the celebrated Hopfield associative memory model, well beyond the limits obtained previously. We investigate the properties of new fixed points to discover that they exhibit instabilities for small perturbations and are therefore of limited value as associative memories. Moreover, a large deviations approach also shows that errors introduced to the original patterns induce additional errors and increased corruption with respect to the stored patterns.
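A minimal sketch of the Hopfield setup the abstract refers to, with the auto-interaction (diagonal) terms switchable. This is illustrative only, at a load far below capacity, and does not reproduce the paper's large-deviations analysis.

```python
import numpy as np

# Minimal Hopfield associative memory: store binary patterns with a
# Hebbian rule; `auto_interactions` keeps or zeroes the diagonal of
# the coupling matrix, the variant the abstract discusses.

rng = np.random.default_rng(0)
N, P = 200, 5                                  # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

def hebbian_weights(patterns, auto_interactions=True):
    W = patterns.T @ patterns / patterns.shape[1]   # (1/N) sum of outer products
    if not auto_interactions:
        np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    # Synchronous sign updates; sign(0) mapped to +1 for determinism.
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

W = hebbian_weights(patterns, auto_interactions=False)
noisy = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)   # corrupt 5% of the bits
noisy[flip] *= -1
recovered = recall(W, noisy)
overlap = np.mean(recovered == patterns[0])
assert overlap > 0.95                          # noisy input converges back
```

At this low load the corrupted pattern falls back into a stable stored attractor; the paper's point is that the additional fixed points appearing at much higher loads lack this stability.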
Error-free replicative bypass of (6–4) photoproducts by DNA polymerase ζ in mouse and human cells
Yoon, Jung-Hoon; Prakash, Louise; Prakash, Satya
2010-01-01
The ultraviolet (UV)-induced (6–4) pyrimidine–pyrimidone photoproduct [(6–4) PP] confers a large structural distortion in DNA. Here we examine in human cells the roles of translesion synthesis (TLS) DNA polymerases (Pols) in promoting replication through a (6–4) TT photoproduct carried on a duplex plasmid where bidirectional replication initiates from an origin of replication. We show that TLS contributes to a large fraction of lesion bypass and that it is mostly error-free. We find that, whereas Pol η and Pol ι provide alternate pathways for mutagenic TLS, surprisingly, Pol ζ functions independently of these Pols and in a predominantly error-free manner. We verify and extend these observations in mouse cells and conclude that, in human cells, TLS during replication can be markedly error-free even opposite a highly distorting DNA lesion. PMID:20080950
NASA Astrophysics Data System (ADS)
Melendez, Jordan; Wesolowski, Sarah; Furnstahl, Dick
2017-09-01
Chiral effective field theory (EFT) predictions are necessarily truncated at some order in the EFT expansion, which induces an error that must be quantified for robust statistical comparisons to experiment. A Bayesian model yields posterior probability distribution functions for these errors based on expectations of naturalness encoded in Bayesian priors and the observed order-by-order convergence pattern of the EFT. As a general example of a statistical approach to truncation errors, the model was applied to chiral EFT for neutron-proton scattering using various semi-local potentials of Epelbaum, Krebs, and Meißner (EKM). Here we discuss how our model can learn correlation information from the data and how to perform Bayesian model checking to validate that the EFT is working as advertised. Supported in part by NSF PHY-1614460 and DOE NUCLEI SciDAC DE-SC0008533.
Induced mood and selective attention.
Brand, N; Verspui, L; Oving, A
1997-04-01
Subjects (N = 60) were randomly assigned to an elated, depressed, or neutral mood-induction condition to assess the effect of mood state on cognitive functioning. In the elated condition, film fragments expressing happiness and euphoria were shown. In the depressed condition, frightening and distressing film fragments were presented. The neutral group watched no film. Mood states were measured using the Profile of Mood States, and a Stroop task assessed selective attention; both were presented by computer. The induction groups differed significantly in the expected direction on the mood subscales Anger, Tension, Depression, Vigour, and Fatigue, and also in mean scale response times, i.e., slower responses in the depressed condition and faster in the elated one. Conditions also differed in errors on the Stroop: the depressed condition produced the fewest errors and significantly longer error reaction times. Error speed was associated with self-reported fatigue.
NASA Technical Reports Server (NTRS)
Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas
2013-01-01
In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
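The near-linear propagation of systematic input biases into retrieval errors can be illustrated with a toy regularized inversion. The forward matrix and parameters below are invented stand-ins; the actual software inverts multiwavelength lidar optical data for aerosol microphysics.

```python
import numpy as np

# Invert noisy "optical data" b for "microphysical" parameters x with
# Tikhonov regularization, then perturb b by increasing systematic
# biases and watch the retrieval error respond near-linearly.

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3))          # toy forward model: 5 optical channels
x_true = np.array([0.8, 1.2, 0.5])   # "true" microphysical parameters
b = A @ x_true

def tikhonov(A, b, lam=1e-3):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

biases = [0.0, 0.05, 0.10, 0.20]
errors = []
for eps in biases:
    x_ret = tikhonov(A, b * (1 + eps))   # multiplicative systematic bias
    errors.append(np.linalg.norm(x_ret - x_true))

# The retrieval is linear in b, so a multiplicative bias scales the
# solution and the retrieval error grows nearly linearly with it.
ratios = [e / eps for e, eps in zip(errors[1:], biases[1:])]
assert (max(ratios) - min(ratios)) / max(ratios) < 0.2
```

The additivity property noted in the abstract follows from the same linearity: deviations from simultaneous biases in several inputs superpose, to first order.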
Nonlinear effects of stretch on the flame front propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halter, F.; Tahtouh, T.; Mounaim-Rousselle, C.
2010-10-15
In all experimental configurations, flames are affected by stretch (curvature and/or strain rate). To obtain the unstretched flame speed, independent of the experimental configuration, the measured flame speed needs to be corrected. Usually, a linear relationship linking the flame speed to stretch is used. However, this linear relation is the result of several assumptions, which may be incorrect. The present study aims at evaluating the error in the laminar burning speed induced by using the traditional linear methodology. Experiments were performed in a closed vessel at atmospheric pressure for two different mixtures: methane/air and iso-octane/air. The initial temperatures were 300 K and 400 K for methane and iso-octane, respectively. Both methodologies (linear and nonlinear) are applied and the results in terms of laminar speed and burned-gas Markstein length are compared. Methane and iso-octane were chosen because they present opposite evolutions of their Markstein length as the equivalence ratio is increased. The error induced by the linear methodology is evaluated, taking the nonlinear methodology as the reference. It is observed that the linear methodology starts to induce substantial errors above an equivalence ratio of 1.1 for methane/air mixtures and below an equivalence ratio of 1 for iso-octane/air mixtures. One solution to increase the accuracy of the linear methodology in these critical cases consists in reducing the number of points used in the linear fit by increasing the initial flame radius used.
Children Induce an Enhanced Attentional Blink in Child Molesters
ERIC Educational Resources Information Center
Beech, Anthony R.; Kalmus, Ellis; Tipper, Steven P.; Baudouin, Jean-Yves; Flak, Vanja; Humphreys, Glyn W.
2008-01-01
The attentional blink (AB) is a robust phenomenon that has been consistently reported in the cognitive literature. The AB is found when two target images (T1, T2) are presented within 500 ms of each other and errors are induced on the perceptual report of T2. The AB may increase when T1 has some salience to the viewer. This study examined the…
TH-AB-201-07: Filmless Treatment Localization QA for the CyberKnife System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gersh, J; Spectrum Medical Physics, LLC, Greenville, SC; Noll, M
Purpose: Accuray recommends daily evaluation of the treatment localization and delivery systems (TLS/TDS) of the CyberKnife. The vendor-provided solution is a Winston-Lutz-type test that evaluates film shadows from an orthogonal beam pair (known as AQA). Since film-based techniques are inherently inefficient and potentially inconsistent and uncertain, this study explores a method which provides a comparable test with greater efficiency, consistency, and certainty. This test uses the QAStereoChecker (QASC, Standard Imaging, Inc., Middleton, WI), a high-resolution flat-panel detector with coupled fiducial markers for automated alignment. Fiducial tracking is used to achieve high translational and rotational position accuracy. Methods: A plan is generated delivering five circular beams, with varying orientation and angular incidence. Several numeric quantities are calculated for each beam: eccentricity, centroid location, area, major-axis length, minor-axis length, and orientation angle. Baseline values were acquired and the repeatability of the baselines analyzed. Next, errors were induced in the path calibration of the CK, and the test repeated. A correlative study was performed between the induced errors and the quantities measured using the QASC. Based on vendor recommendations, this test should be able to detect a TLS/TDS offset of 0.5mm. Results: Centroid shifts correlated well with induced plane-perpendicular offsets (p < 0.01). Induced vertical shifts correlated best with the absolute average deviation of eccentricities (p < 0.05). The values of these metrics which correlated with the threshold of 0.5mm induced deviation were used as individual pass/fail criteria. These were then used to evaluate induced offsets which shifted the CK in all axes (a clinically-realistic offset), with a total offset of 0.5mm. This test provided high specificity and sensitivity.
Conclusion: From setup to analysis, this filmless TLS/TDS test requires 4 minutes, as opposed to 15–20 minutes for film-based methods. The techniques introduced can potentially isolate errors in individual joints of the CK robot. Spectrum Medical Physics, LLC of Greenville, SC has a consulting contract with Standard Imaging of Middleton, WI.
Palttala, Iida; Heinämäki, Jyrki; Honkanen, Outi; Suominen, Risto; Antikainen, Osmo; Hirvonen, Jouni; Yliruusi, Jouko
2013-03-01
To date, little is known about the applicability of different types of pharmaceutical dosage forms to an automated high-speed multi-dose dispensing process. The purpose of the present study was to identify and further investigate various process-induced and/or product-related limitations associated with the multi-dose dispensing process. The rates of product defects and dose dispensing errors in automated multi-dose dispensing were retrospectively investigated over a 6-month follow-up period. The study was based on the analysis of process data from nine automated high-speed multi-dose dispensing systems. Special attention was paid to the dependence of multi-dose dispensing errors/product defects on pharmaceutical tablet properties (such as shape, dimensions, weight, scored lines, coatings, etc.) to profile the tablet forms most suitable for automated dose dispensing systems. The relationship between the risk of dose dispensing errors and tablet characteristics was visualized by creating a principal component analysis (PCA) model for the outcome of dispensed tablets. The two most common process-induced failures identified in multi-dose dispensing are tablet defects and unexpected product transitions in the medication cassette (dose dispensing errors). The tablet defects are product-dependent failures, while the tablet transitions depend on the automated multi-dose dispensing system used. The occurrence of tablet defects is approximately twice as common as tablet transitions. The optimal tablet for high-speed multi-dose dispensing would be round, relatively small to middle-sized, film-coated, and without a scored line. Commercial tablet products can be profiled and classified based on their suitability for a high-speed multi-dose dispensing process.
Ionospheric Impacts on UHF Space Surveillance
NASA Astrophysics Data System (ADS)
Jones, J. C.
2017-12-01
Earth's atmosphere contains regions of ionized plasma caused by the interaction of highly energetic solar radiation. This region of ionization is called the ionosphere and varies significantly with altitude, latitude, local solar time, season, and solar cycle. Significant ionization begins at about 100 km (E layer) with a peak in the ionization at about 300 km (F2 layer). Above the F2 layer, the atmosphere is mostly ionized but the ion and electron densities are low due to the unavailability of neutral molecules for ionization so the density decreases exponentially with height to well over 1000 km. The gradients of these variations in the ionosphere play a significant role in radio wave propagation. These gradients induce variations in the index of refraction and cause some radio waves to refract. The amount of refraction depends on the magnitude and direction of the electron density gradient and the frequency of the radio wave. The refraction is significant at HF frequencies (3-30 MHz) with decreasing effects toward the UHF (300-3000 MHz) range. UHF is commonly used for tracking of space objects in low Earth orbit (LEO). While ionospheric refraction is small for UHF frequencies, it can cause errors in range, azimuth angle, and elevation angle estimation by ground-based radars tracking space objects. These errors can cause significant errors in precise orbit determinations. For radio waves transiting the ionosphere, it is important to understand and account for these effects. Using a sophisticated radio wave propagation tool suite and an empirical ionospheric model, we calculate the errors induced by the ionosphere in a simulation of a notional space surveillance radar tracking objects in LEO. These errors are analyzed to determine daily, monthly, annual, and solar cycle trends. Corrections to surveillance radar measurements can be adapted from our simulation capability.
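The dominant first-order range error described above can be sketched directly from the standard group-delay relation, delta_r = 40.3 * TEC / f^2 (SI units), with TEC the slant total electron content. The TEC value and the helper function below are illustrative assumptions, not part of the propagation tool suite the abstract mentions.

```python
# First-order ionospheric group-delay range error for a radar signal
# transiting the ionosphere, showing the 1/f^2 frequency dependence.

def ionospheric_range_error(tec_el_per_m2, freq_hz):
    """Excess apparent range (m) from first-order ionospheric group delay."""
    return 40.3 * tec_el_per_m2 / freq_hz**2

TEC = 1.0e17                 # 10 TECU: an assumed, moderately active slant TEC
uhf = ionospheric_range_error(TEC, 435e6)     # UHF space-surveillance band
l_band = ionospheric_range_error(TEC, 1.3e9)  # for comparison

print(f"UHF (435 MHz): {uhf:.1f} m")          # ~21 m of apparent extra range
print(f"L-band (1.3 GHz): {l_band:.2f} m")
# The 1/f^2 scaling is why refraction dominates at HF and shrinks
# toward higher bands, yet still matters for precise LEO orbits at UHF.
assert uhf > l_band
```

Even a few tens of meters of uncorrected range bias at UHF is large compared with the precision needed for orbit determination, motivating the model-based corrections the study develops.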
Error sources affecting thermocouple thermometry in RF electromagnetic fields.
Chakraborty, D P; Brezovich, I A
1982-03-01
Thermocouple thermometry errors in radiofrequency (typically 13.56 MHz) electromagnetic fields such as are encountered in hyperthermia are described. RF currents capacitively or inductively coupled into the thermocouple-detector circuit produce errors which are a combination of interference, i.e., 'pick-up' error, and genuine RF-induced temperature changes at the junction of the thermocouple. The former can be eliminated by adequate filtering and shielding; the latter is due to (a) junction current heating, in which the generally unequal resistances of the thermocouple wires cause a net current flow from the higher- to the lower-resistance wire across the junction, (b) heating in the surrounding resistive material (tissue in hyperthermia), and (c) eddy current heating of the thermocouple wires in the oscillating magnetic field. Low-frequency theories are used to estimate these errors under given operating conditions, and relevant experiments demonstrating these effects and the precautions necessary to minimize the errors are described. It is shown that at 13.56 MHz and voltage levels below 100 V rms these errors do not exceed 0.1 degrees C if the precautions are observed and thermocouples with adequate insulation (e.g., Bailey IT-18) are used. Results of this study are currently being used in our clinical work with good success.
Differential sea-state bias: A case study using TOPEX/POSEIDON data
NASA Technical Reports Server (NTRS)
Stewart, Robert H.; Devalla, B.
1994-01-01
We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.
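The dual-frequency cancellation underlying the differential ionospheric correction can be sketched as follows. The band frequencies are the nominal TOPEX Ku and C values; the range and TEC numbers are invented for illustration and no sea-state bias term is modeled.

```python
# Dual-frequency altimetry removes the first-order ionospheric delay
# because that delay scales as 1/f^2: measuring range at two
# frequencies lets the common delay term be eliminated algebraically.

F_KU, F_C = 13.6e9, 5.3e9        # Hz, nominal TOPEX band frequencies

def iono_free_range(r_ku, r_c):
    """Combine Ku- and C-band ranges to cancel the 1/f^2 ionospheric delay."""
    fku2, fc2 = F_KU**2, F_C**2
    return (fku2 * r_ku - fc2 * r_c) / (fku2 - fc2)

# Simulate a true range plus a 1/f^2 delay from an assumed TEC.
true_range = 1_340_000.0          # m, order of the TOPEX altitude
tec = 1.0e17                      # electrons/m^2, assumed
r_ku = true_range + 40.3 * tec / F_KU**2   # Ku delay: a couple of cm
r_c = true_range + 40.3 * tec / F_C**2     # C delay: several times larger

recovered = iono_free_range(r_ku, r_c)
assert abs(recovered - true_range) < 1e-6  # the common delay cancels
```

Because the correction is built from the Ku-minus-C range difference, any wave-induced (sea-state) bias that differs between the two bands leaks into it, which is exactly the differential sea-state bias the study quantifies.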
Evaluation of lens distortion errors in video-based motion analysis
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo
1993-01-01
In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the difference in distance from the known coordinates of the points to the calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoidance of the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.
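The study measured distortion empirically from a videotaped grid; how such errors scale with field position can be sketched with the standard first-order radial distortion model, r_dist = r(1 + k1 r^2). The k1 coefficients below are invented, chosen only to mimic the reported ~0.5% and ~10% edge errors.

```python
import numpy as np

# Percent position error across the field of view for a first-order
# radial distortion model, in normalized image coordinates (r = 1 at
# the edge of the field).

def radial_error_percent(r, k1):
    """Percent position error (of the full field) at normalized radius r."""
    r_dist = r * (1 + k1 * r**2)
    return np.abs(r_dist - r) * 100

k1_standard, k1_wide = -0.005, -0.10     # assumed lens coefficients
r = np.linspace(0, 1, 11)                # center to edge of the field

err_std = radial_error_percent(r, k1_standard)
err_wide = radial_error_percent(r, k1_wide)

# Distortion error grows as r**3, so it is worst at the image edges;
# this is why avoiding the outermost region of a wide-angle lens and
# placing calibration points there keeps the error manageable.
assert err_wide[-1] > err_std[-1]
assert err_wide[-1] == max(err_wide)
```

With these assumed coefficients the edge error is 0.5% for the "standard" lens and 10% for the "wide-angle" lens, consistent in magnitude with the abstract's figures.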
NASA Astrophysics Data System (ADS)
Duan, Wansuo; Zhao, Peng
2017-04-01
Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. 
This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.
NASA Astrophysics Data System (ADS)
Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.
2015-06-01
Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR) with an objective to evaluate the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures on the raw spectral data before PLSR analysis was evaluated based on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). The pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved with an increasing number of laser pulses accumulated per spectrum, as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present at hundreds of ppm were determined with a relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).
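Two quantities from this abstract can be sketched directly: a unit-norm spectral pre-treatment (our reading of the "NORM" parameter; the paper's exact definition may differ) and the standard error of prediction used to score the PLSR model:

```python
import numpy as np

def norm_pretreat(spectra):
    """Scale each LIBS spectrum to unit Euclidean norm so shot-to-shot
    intensity fluctuations cancel before PLSR (assumed form of 'NORM')."""
    spectra = np.asarray(spectra, dtype=float)
    return spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

def sep(y_true, y_pred):
    """Standard error of prediction: RMS difference between certified
    and predicted concentrations."""
    r = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(r ** 2)))

# Hypothetical two-channel 'spectra', just to exercise the functions.
X = norm_pretreat([[3.0, 4.0], [1.0, 0.0]])
```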
NASA Astrophysics Data System (ADS)
Chen, Dongju; Huo, Chen; Cui, Xianxian; Pan, Ri; Fan, Jinwei; An, Chenhui
2018-05-01
The objective of this work is to study the influence of the error induced by the gas film at the micro scale on the static and dynamic behavior of a shaft supported by aerostatic bearings. Static and dynamic balance models of the aerostatic bearing are presented using the stiffness and damping calculated at the micro scale. The static simulation shows that the deformation of the aerostatic spindle system is decreased at the micro scale. For the dynamic behavior, both the stiffness and the damping in the axial and radial directions are increased at the micro scale. Experiments measuring the stiffness and rotation error of the spindle show that the shaft deflection computed from the micro-scale parameters is very close to the measured deviation of the spindle system. The frequency content of the transient analysis is similar to that of the actual test, and both are higher than the results of the traditional model that neglects micro-scale effects. It can therefore be concluded that values computed with micro-scale effects included are closer to the actual working conditions of the aerostatic spindle system. These results provide a theoretical basis for the design and machining of machine tools.
Kalman Filtered MR Temperature Imaging for Laser Induced Thermal Therapies
Fuentes, D.; Yung, J.; Hazle, J. D.; Weinberg, J. S.; Stafford, R. J.
2013-01-01
The feasibility of using a stochastic form of the Pennes bioheat model within a 3D finite element based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L2 (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, Δt < 10sec, until the data corruption subsides. In its present form, the KF-MRTI method fails to compensate for consecutive time periods of data loss Δt > 10sec. PMID:22203706
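The idea of falling back on the model prediction when imaging data drop out can be illustrated with a scalar Kalman filter. This is a one-dimensional toy, not the paper's 3D finite element implementation, and the process/measurement variances and temperatures are made up:

```python
def kalman_step(x, P, z, Q=0.1, R=0.25):
    """One scalar Kalman step for a temperature state with a random-walk
    process model. z is None when the thermometry frame is lost, in which
    case only the prediction is applied (illustrative values throughout)."""
    # Predict: random-walk model keeps the state, inflates the variance.
    x_pred, P_pred = x, P + Q
    if z is None:
        return x_pred, P_pred
    # Update: blend prediction and measurement by the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 37.0, 1.0
for z in [37.5, 38.1, None, None, 39.0]:  # two frames of data loss
    x, P = kalman_step(x, P, z)
```

During the `None` frames the estimate coasts on the model and its uncertainty grows, mirroring the degradation the abstract reports for long data-loss intervals.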
An Elimination Method of Temperature-Induced Linear Birefringence in a Stray Current Sensor
Xu, Shaoyi; Li, Wei; Xing, Fangfang; Wang, Yuqiao; Wang, Ruilin; Wang, Xianghui
2017-01-01
In this work, an elimination method of the temperature-induced linear birefringence (TILB) in a stray current sensor is proposed using the cylindrical spiral fiber (CSF), which produces a large amount of circular birefringence to eliminate the TILB based on the geometric rotation effect. First, the differential equations that describe the polarization evolution of the CSF element are derived, and the output error model is built based on the Jones matrix calculus. Then, an accurate search method is proposed to obtain the key parameters of the CSF, including the length of the cylindrical silica rod and the number of the curve spirals. The optimized results are 302 mm and 11, respectively. Moreover, an effective factor is proposed to analyze the elimination of the TILB, which should be greater than 7.42 to achieve the output error requirement of not greater than 0.5%. Finally, temperature experiments are conducted to verify the feasibility of the elimination method. The results indicate that the output error caused by the TILB can be controlled to less than 0.43% by this elimination method within the range from −20 °C to 40 °C. PMID:28282953
Factors associated with aberrant imprint methylation and oligozoospermia
Kobayashi, Norio; Miyauchi, Naoko; Tatsuta, Nozomi; Kitamura, Akane; Okae, Hiroaki; Hiura, Hitoshi; Sato, Akiko; Utsunomiya, Takafumi; Yaegashi, Nobuo; Nakai, Kunihiko; Arima, Takahiro
2017-01-01
Disturbingly, the number of patients with oligozoospermia (low sperm count) has been gradually increasing in industrialized countries. Epigenetic alterations are believed to be involved in this condition. Recent studies have clarified that intrinsic and extrinsic factors can induce epigenetic transgenerational phenotypes through apparent reprogramming of the male germ line. Here we examined DNA methylation levels of 22 human imprinted loci in a total of 221 purified sperm samples from infertile couples and found methylation alterations in 24.8% of the patients. A structural equation model suggested that the cause of imprint methylation errors in sperm might have been environmental factors. More specifically, aberrant methylation and a particular lifestyle (current smoking, excess consumption of carbonated drinks) were associated with severe oligozoospermia, while aging probably affected this pathology indirectly through the accumulation of PCBs in the patients. Next we examined the pregnancy outcomes for patients whose sperm had abnormal imprint methylation. The live-birth rate decreased and the miscarriage rate increased with the methylation errors. Our research will be useful for the prevention of methylation errors in sperm from infertile men, and sperm with normal imprint methylation might increase the safety of assisted reproduction technology (ART) by reducing methylation-induced diseases of children conceived via ART. PMID:28186187
Vibration-Induced Errors in MEMS Tuning Fork Gyroscopes with Imbalance.
Fang, Xiang; Dong, Linxi; Zhao, Wen-Sheng; Yan, Haixia; Teh, Kwok Siong; Wang, Gaofeng
2018-05-29
This paper discusses the vibration-induced error in non-ideal MEMS tuning fork gyroscopes (TFGs). Ideal TFGs, which are thought to be immune to vibrations, do not exist, and imbalance between the two gyros of a TFG is an inevitable phenomenon. Three types of fabrication imperfections (i.e., stiffness imbalance, mass imbalance, and damping imbalance) are studied, considering different imbalance ratios. We focus on the coupling types of the two gyros of TFGs in both drive and sense directions, and the vibration sensitivities of four TFG designs with imbalance are simulated and compared. It is found that non-ideal TFGs with two gyros coupled in both drive and sense directions (type CC TFGs) are the most insensitive to vibrations with frequencies close to the TFG operating frequencies. However, sense-axis vibrations at the in-phase resonant frequencies of a coupled gyros system result in severe error outputs for TFGs with two gyros coupled in the sense direction, which is mainly attributed to the sense capacitance nonlinearity. With increasing stiffness coupling ratio of the coupled gyros system, the sensitivity to vibrations at operating frequencies is reduced, yet the sensitivity to vibrations at in-phase frequencies is amplified.
The effect of short ground vegetation on terrestrial laser scans at a local scale
NASA Astrophysics Data System (ADS)
Fan, Lei; Powrie, William; Smethurst, Joel; Atkinson, Peter M.; Einstein, Herbert
2014-09-01
Terrestrial laser scanning (TLS) can record a large amount of accurate topographical information with a high spatial accuracy over a relatively short period of time. These features suggest it is a useful tool for topographical survey and surface deformation detection. However, the use of TLS to survey a terrain surface is still challenging in the presence of dense ground vegetation. The bare ground surface may not be illuminated due to signal occlusion caused by vegetation. This paper investigates vegetation-induced elevation error in TLS surveys at a local scale and its spatial pattern. An open, relatively flat area vegetated with dense grass was surveyed repeatedly under several scan conditions. A total station was used to establish an accurate representation of the bare ground surface. Local-highest-point and local-lowest-point filters were applied to the point clouds acquired for deriving vegetation height and vegetation-induced elevation error, respectively. The effects of various factors (for example, vegetation height, edge effects, incidence angle, scan resolution and location) on the error caused by vegetation are discussed. The results are of use in the planning and interpretation of TLS surveys of vegetated areas.
Shadmehr, Reza; Ohminami, Shinya; Tsutsumi, Ryosuke; Shirota, Yuichiro; Shimizu, Takahiro; Tanaka, Nobuyuki; Terao, Yasuo; Tsuji, Shoji; Ugawa, Yoshikazu; Uchimura, Motoaki; Inoue, Masato; Kitazawa, Shigeru
2015-01-01
Cerebellar damage can profoundly impair human motor adaptation. For example, if reaching movements are perturbed abruptly, cerebellar damage impairs the ability to learn from the perturbation-induced errors. Interestingly, if the perturbation is imposed gradually over many trials, people with cerebellar damage may exhibit improved adaptation. However, this result is controversial, since the differential effects of gradual vs. abrupt protocols have not been observed in all studies. To examine this question, we recruited patients with pure cerebellar ataxia due to cerebellar cortical atrophy (n = 13) and asked them to reach to a target while viewing the scene through wedge prisms. The prisms were computer controlled, making it possible to impose the full perturbation abruptly in one trial, or build up the perturbation gradually over many trials. To control visual feedback, we employed shutter glasses that removed visual feedback during the reach, allowing us to measure trial-by-trial learning from error (termed error-sensitivity), and trial-by-trial decay of motor memory (termed forgetting). We found that the patients benefited significantly from the gradual protocol, improving their performance with respect to the abrupt protocol by exhibiting smaller errors during the exposure block, and producing larger aftereffects during the postexposure block. Trial-by-trial analysis suggested that this improvement was due to increased error-sensitivity in the gradual protocol. Therefore, cerebellar patients exhibited an improved ability to learn from error if they experienced those errors gradually. This improvement coincided with increased error-sensitivity and was present in both groups of subjects, suggesting that control of error-sensitivity may be spared despite cerebellar damage. PMID:26311179
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2018-04-15
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
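The SIMEX idea, deliberately adding extra measurement error at increasing multiples and extrapolating the naive estimate back to zero error, can be sketched for a simple linear model. The paper extends SIMEX to Cox hazard ratios; plain linear regression is used here only to keep the toy short, and all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma_u = 2000, 2.0, 0.8
x = rng.normal(size=n)                      # true covariate
w = x + sigma_u * rng.normal(size=n)        # error-prone measurement
y = beta * x + rng.normal(size=n)

def naive_slope(w_err):
    """Slope from regressing y on the mismeasured covariate."""
    return float(np.polyfit(w_err, y, 1)[0])

# SIMEX: inflate the error variance by (1 + lam), average over replicates,
# then extrapolate the slope back to lam = -1 (zero measurement error).
lams = [0.0, 0.5, 1.0, 1.5, 2.0]
slopes = []
for lam in lams:
    reps = [naive_slope(w + np.sqrt(lam) * sigma_u * rng.normal(size=n))
            for _ in range(20)]
    slopes.append(float(np.mean(reps)))
coeffs = np.polyfit(lams, slopes, 2)        # quadratic extrapolant
simex_slope = float(np.polyval(coeffs, -1.0))
```

The naive slope is attenuated toward zero by the covariate error; the quadratic extrapolation recovers much (not all) of the bias, which is exactly the behavior SIMEX trades on.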
Suppression of vapor cell temperature error for spin-exchange-relaxation-free magnetometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng
2015-08-15
This paper presents a method to reduce the vapor cell temperature error of the spin-exchange-relaxation-free (SERF) magnetometer. The fluctuation of cell temperature can induce variations of the optical rotation angle, resulting in a scale factor error of the SERF magnetometer. In order to suppress this error, we employ the variation of the probe beam absorption to offset the variation of the optical rotation angle. The theoretical discussion of our method indicates that the scale factor error introduced by the fluctuation of the cell temperature could be suppressed by setting the optical depth close to one. In our experiment, we adjust the probe frequency to obtain various optical depths and then measure the variation of scale factor with respect to the corresponding cell temperature changes. Our experimental results show a good agreement with our theoretical analysis. Under our experimental condition, the error has been reduced significantly compared with those when the probe wavelength is adjusted to maximize the probe signal. The cost of this method is the reduction of the scale factor of the magnetometer. However, according to our analysis, it only has a minor effect on the sensitivity under proper operating parameters.
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
Thyroid cancer following scalp irradiation: a reanalysis accounting for uncertainty in dosimetry.
Schafer, D W; Lubin, J H; Ron, E; Stovall, M; Carroll, R J
2001-09-01
In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
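The core point of this abstract, that even a low false-positive rate biases naive occupancy estimates upward, is easy to reproduce in a small simulation. All rates below are hypothetical, and the naive estimator is the simple "detected at least once" fraction rather than a fitted occupancy-detection model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_visits = 500, 5
psi, p_det, p_fp = 0.3, 0.6, 0.05   # occupancy, detection, false-positive rate

occupied = rng.random(n_sites) < psi
det = np.zeros((n_sites, n_visits), dtype=bool)
# True detections only at occupied sites...
det[occupied] = rng.random((occupied.sum(), n_visits)) < p_det
# ...but false positives can occur anywhere (e.g., eDNA contamination).
det |= rng.random((n_sites, n_visits)) < p_fp

# Naive occupancy estimate: fraction of sites with at least one detection.
naive_psi = float(np.mean(det.any(axis=1)))
```

With five visits, even a 5% per-visit false-positive rate pushes the naive estimate well above the true occupancy, which is why the review argues for models that explicitly accommodate false positives.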
Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.
Mehranian, Abolfazl; Zaidi, Habib
2015-04-01
Time-of-flight (TOF) PET/MR imaging is an emerging imaging technology with great capabilities offered by TOF to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF both corrected for attenuation using reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors between non-TOF and TOF MRAC reconstructions were compared with their reference CT-based attenuation correction reconstructions. The bias was locally and globally evaluated using volumes of interest (VOIs) defined on lesions and normal tissues and CT-derived tissue classes containing all voxels in a given tissue, respectively. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). 
Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and improves significantly the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasylkivska, Veronika S.; Huerta, Nicolas J.
Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog’s inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.
Hubert, G; Regis, D; Cheminet, A; Gatti, M; Lacoste, V
2014-10-01
Particles originating from primary cosmic radiation that hit the Earth's atmosphere give rise to a complex field of secondary particles. These particles include neutrons, protons, muons, pions, etc. Since the 1980s it has been known that terrestrial cosmic rays can penetrate the natural shielding of buildings, equipment and circuit packages and induce soft errors in integrated circuits. Recently, research has shown that commercial static random access memories are now so small and sufficiently sensitive that single event upsets (SEUs) may be induced by the electronic stopping of a proton. With continued advancements in process size, this downward trend in sensitivity is expected to continue, and muon-induced soft errors have been predicted for nano-electronics. This paper describes specific cases of neutron-, proton- and muon-induced SEUs observed in complementary metal-oxide semiconductor devices. The results allow investigation of technology-node sensitivity along the scaling trend. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Mutagenesis during plant responses to UVB radiation.
Holá, M; Vágnerová, R; Angelis, K J
2015-08-01
We tested the idea that induced mutagenesis due to unrepaired DNA lesions, here the UV photoproducts, underlies the impact of UVB irradiation on plant phenotype. For this purpose we used protonemal culture of the moss Physcomitrella patens with 50% of apical cells, which mimics actively growing tissue, the stage most vulnerable to the induction of mutations. We measured the UVB mutation rate of various moss lines with defects in DNA repair (pplig4, ppku70, pprad50, ppmre11), and in selected clones resistant to 2-Fluoroadenine, which were mutated in the adenosine phosphotransferase gene (APT), we analysed induced mutations by sequencing. In parallel we followed DNA break repair and removal of cyclobutane pyrimidine dimers with a half-life τ = 4 h 14 min determined by comet assay combined with UV dimer specific T4 endonuclease V. We show that UVB induces massive, sequence specific, error-prone bypass repair that is responsible for a high mutation rate owing to relatively slow, though error-free, removal of photoproducts by nucleotide excision repair (NER). Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
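The underlying phenomenon, round-off accumulating over long chains of arithmetic, can be demonstrated with a toy accumulation loop in which single precision stands in for "insufficient" working precision. Orbit propagation and least squares estimation are vastly more complex, but the mechanism is the same:

```python
import numpy as np

# Accumulate 0.1 one hundred thousand times (true sum: 10000) in both
# single and double precision. Sequential addition lets rounding error
# build up, unlike numpy's pairwise-summed np.sum.
step32, step64 = np.float32(0.1), np.float64(0.1)
s32, s64 = np.float32(0.0), np.float64(0.0)
for _ in range(100_000):
    s32 = s32 + step32        # float32 + float32 stays float32
    s64 = s64 + step64

err32 = abs(float(s32) - 10000.0)
err64 = abs(float(s64) - 10000.0)
```

The single-precision error is orders of magnitude larger, which is the same reason the abstract anticipates moving from double to double-extended or quadruple precision as instrument accuracy improves.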
NASA Technical Reports Server (NTRS)
Brown, G. S.; Curry, W. J.
1977-01-01
The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal to noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar data, and on the attitude-induced altitude bias in the waveform, is considered and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low overhead method for parameter estimation for DFBP based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) penalty achieved with this method is negligible compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
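EVM as a figure of merit is straightforward to compute from the received and reference constellations. A minimal sketch (not the authors' code; a QPSK constellation is used instead of 16-QAM purely for brevity):

```python
import numpy as np

def evm_percent(rx, ref):
    """EVM (%): RMS of the error vector divided by the RMS of the
    reference constellation. A DFBP parameter search would minimize
    this value over candidate parameter settings."""
    rx, ref = np.asarray(rx), np.asarray(ref)
    return 100.0 * float(np.sqrt(np.mean(np.abs(rx - ref) ** 2)
                                 / np.mean(np.abs(ref) ** 2)))

ref = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])  # QPSK corners
rx = ref + 0.1                                       # constant offset error
```

Because EVM needs only the demodulated symbols and the ideal constellation, it is a cheap proxy for BER when sweeping DFBP parameters, which is the low-overhead aspect the abstract emphasizes.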
Short-term adaptation of the VOR: non-retinal-slip error signals and saccade substitution
NASA Technical Reports Server (NTRS)
Eggers, Scott D. Z.; De Pennington, Nick; Walker, Mark F.; Shelhamer, Mark; Zee, David S.
2003-01-01
We studied short-term (30 min) adaptation of the vestibulo-ocular reflex (VOR) in five normal humans using a "position error" stimulus without retinal image motion. Both before and after adaptation a velocity gain (peak slow-phase eye velocity/peak head velocity) and a position gain (total eye movement during chair rotation/amplitude of chair motion) were measured in darkness using search coils. The vestibular stimulus was a brief ( approximately 700 ms), 15 degrees chair rotation in darkness (peak velocity 43 degrees /s). To elicit adaptation, a straight-ahead fixation target disappeared during chair movement and when the chair stopped the target reappeared at a new location in front of the subject for gain-decrease (x0) adaptation, or 10 degrees opposite to chair motion for gain-increase (x1.67) adaptation. This position-error stimulus was effective at inducing VOR adaptation, though for gain-increase adaptation the primary strategy was to substitute augmenting saccades during rotation while for gain-decrease adaptation both corrective saccades and a decrease in slow-phase velocity occurred. Finally, the presence of the position-error signal alone, at the end of head rotation, without any attempt to fix upon it, was not sufficient to induce adaptation. Adaptation did occur, however, if the subject did make a saccade to the target after head rotation, or even if the subject paid attention to the new location of the target without actually looking at it.
Shi, Joy; Korsiak, Jill; Roth, Daniel E
2018-03-01
We aimed to demonstrate the use of jackknife residuals to take advantage of the longitudinal nature of available growth data in assessing potential biologically implausible values and outliers. Artificial errors were induced in 5% of length, weight, and head circumference measurements, measured on 1211 participants from the Maternal Vitamin D for Infant Growth (MDIG) trial from birth to 24 months of age. Each child's sex- and age-standardized z-score or raw measurements were regressed as a function of age in child-specific models. Each error responsible for a biologically implausible decrease between a consecutive pair of measurements was identified based on the higher of the two absolute values of jackknife residuals in each pair. In further analyses, outliers were identified as those values beyond fixed cutoffs of the jackknife residuals (e.g., greater than +5 or less than -5 in primary analyses). Kappa, sensitivity, and specificity were calculated over 1000 simulations to assess the ability of the jackknife residual method to detect induced errors and to compare these methods with the use of conditional growth percentiles and conventional cross-sectional methods. Among the induced errors that resulted in a biologically implausible decrease in measurement between two consecutive values, the jackknife residual method identified the correct value in 84.3%-91.5% of these instances when applied to the sex- and age-standardized z-scores, with kappa values ranging from 0.685 to 0.795. Sensitivity and specificity of the jackknife method were higher than those of the conditional growth percentile method, but specificity was lower than for conventional cross-sectional methods. Using jackknife residuals provides a simple method to identify biologically implausible values and outliers in longitudinal child growth data sets in which each child contributes at least 4 serial measurements. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
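A simplified version of the jackknife-residual screen can be sketched as follows: each child's z-scores are regressed on age with each point left out in turn, and the left-out point is scored against the remaining fit. This omits the leverage correction of a fully studentized residual, and the hypothetical data below stand in for a real growth record; the ±5 cutoff is the one given above:

```python
import numpy as np

def jackknife_residuals(age, z):
    """Leave-one-out residuals from a per-child linear regression of
    z-score on age; |residual| > 5 flags a candidate implausible value
    (simplified: no leverage term in the denominator)."""
    age, z = np.asarray(age, dtype=float), np.asarray(z, dtype=float)
    n = len(z)
    out = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        b, a = np.polyfit(age[keep], z[keep], 1)   # fit without point i
        resid = z[keep] - (a + b * age[keep])
        s = np.sqrt(np.sum(resid ** 2) / (n - 3))  # df = (n-1) points - 2 params
        out[i] = (z[i] - (a + b * age[i])) / s
    return out

age = [0, 3, 6, 9, 12, 18, 24]                 # months, >= 4 serial measurements
z = [0.1, 0.2, 0.1, 5.0, 0.3, 0.2, 0.4]        # induced error at 9 months
r = jackknife_residuals(age, z)
```

Because the suspect point is excluded from its own fit, its residual is huge, while the residuals of the valid points stay small; this is what lets the method pinpoint which member of an implausible pair is the error.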
Investigation of advanced phase-shifting projected fringe profilometry techniques
NASA Astrophysics Data System (ADS)
Liu, Hongyu
1999-11-01
The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool in the profile measurement of rough engineering surfaces. Compared with competing techniques, it is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle, with some new approaches, three important problems that severely limit the capability and the accuracy of the PSPFP technique. Chapter 1 briefly introduces background information on the PSPFP technique, including its measurement principles, basic features, and related techniques. The objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of absolute PSPFP measurement. The mathematical formulations and basic requirements of the absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the preceding theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on the system design are given to improve measurement accuracy. Chapter 5 discusses a new technique for combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process.
The techniques for coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for future research.
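The phase retrieval at the heart of PSPFP is typically the classical four-step phase-shifting algorithm. The sketch below shows that standard formula (shifts of 0, π/2, π, 3π/2); it is generic textbook material, not code from this dissertation, and the function name is hypothetical.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Four-step phase-shifting formula for intensities
    I_k = A + B*cos(phi + k*pi/2), k = 0..3:
    phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)
```

Because the background A and modulation B cancel in the differences, the wrapped phase is recovered independently of local surface reflectivity, which is why reflectivity variations enter only through second-order effects such as detector nonlinearity.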
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe
1992-01-01
This paper presents a strategy for dynamically monitoring digital controllers in the laboratory for susceptibility to electromagnetic disturbances that compromise control integrity. The integrity of digital control systems operating in harsh electromagnetic environments can be compromised by upsets caused by induced transient electrical signals. Digital system upset is a functional error mode that involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. The motivation for this work is the need to develop tools and techniques that can be used in the laboratory to validate and/or certify critical aircraft controllers operating in electromagnetically adverse environments that result from lightning, high-intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The detection strategy presented in this paper provides dynamic monitoring of a given control computer for degraded functional integrity resulting from redundancy management errors, control calculation errors, and control correctness/effectiveness errors. In particular, this paper discusses the use of Kalman filtering, data fusion, and statistical decision theory in monitoring a given digital controller for control calculation errors.
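A common building block for the kind of Kalman-filter-based monitoring described above is a chi-square test on the filter innovation: a transient-induced control calculation error shows up as an innovation that is improbably large under the filter's own covariance. The sketch below shows that generic test, not the paper's specific detector; names are hypothetical.

```python
import numpy as np

def innovation_alarm(nu, S, threshold):
    """Chi-square screen on a Kalman innovation nu with covariance S:
    flag a potential upset-induced error when nu' S^-1 nu exceeds the
    threshold (e.g. a chi-square quantile with dim(nu) degrees of freedom)."""
    d2 = float(nu @ np.linalg.solve(S, nu))
    return d2 > threshold, d2
```

In a monitoring loop, the normalized innovation statistic `d2` would be fed, together with other channel indicators, into the data-fusion and statistical-decision stages the paper describes.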
Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hunsberger, Randolph J
This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time series simulation can be reduced roughly proportional to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control- and voltage-errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
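The temporal decomposition above amounts to cutting the simulated horizon into contiguous slices and prepending each slice with a warm-up window used only to initialize controls, whose results are then discarded. The sketch below shows that slicing logic only; it is a hypothetical helper, not the authors' OpenDSS driver code.

```python
def decompose(n_steps, n_workers, warmup):
    """Split [0, n_steps) into contiguous slices for parallel QSTS runs.
    Each slice is (warm_start, start, stop): steps in [warm_start, start)
    are simulated only to initialize controls and then discarded."""
    size = -(-n_steps // n_workers)  # ceiling division
    slices = []
    for k in range(n_workers):
        start = k * size
        if start >= n_steps:
            break
        stop = min(start + size, n_steps)
        slices.append((max(0, start - warmup), start, stop))
    return slices
```

Each tuple can then be handed to an independent worker process; lengthening `warmup` trades extra redundant computation for smaller control-state errors at the slice boundaries.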
Observability Analysis of a MEMS INS/GPS Integration System with Gyroscope G-Sensitivity Errors
Fan, Chen; Hu, Xiaoping; He, Xiaofeng; Tang, Kanghua; Luo, Bing
2014-01-01
Gyroscopes based on micro-electromechanical system (MEMS) technology suffer in high-dynamic applications due to obvious g-sensitivity errors. These errors can induce large biases in the gyroscope, which can directly affect the accuracy of attitude estimation in the integration of the inertial navigation system (INS) and the Global Positioning System (GPS). The observability determines the existence of solutions for compensating them. In this paper, we investigate the observability of the INS/GPS system with consideration of the g-sensitivity errors. We add the two types of g-sensitivity coefficient matrices as estimated states to the Kalman filter and analyze the observability of three or nine elements of the coefficient matrix, respectively. A global observable condition of the system is presented and validated. Experimental results indicate that all the estimated states, which include position, velocity, attitude, gyro and accelerometer bias, and g-sensitivity coefficients, could be made observable by maneuvering based on the conditions. Compared with the integration system without compensation for the g-sensitivity errors, the attitude accuracy is clearly improved. PMID:25171122
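The generic linear-systems test underlying an observability analysis of augmented Kalman filter states is the rank of the observability matrix O = [H; HF; HF²; …]. The sketch below shows that standard numerical check on toy matrices; it is illustrative only, not the paper's 3- or 9-element g-sensitivity model.

```python
import numpy as np

def observability_rank(F, H):
    """Rank of the observability matrix [H; HF; ...; HF^(n-1)] for a
    discrete-time linear system x_{k+1} = F x_k, y_k = H x_k.
    The augmented state is observable iff the rank equals n."""
    n = F.shape[0]
    blocks = [np.asarray(H, float)]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ F)
    return np.linalg.matrix_rank(np.vstack(blocks))
```

In practice the maneuvering conditions the abstract mentions enter through F and H becoming time-varying: maneuvers change these matrices along the trajectory and can raise the rank of the combined observability matrix to full.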
Error and uncertainty in Raman thermal conductivity measurements
Thomas Edwin Beechem; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Arrighi, Pieranna; Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno; Andre, Paolo
2016-01-01
Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigate EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and, then, scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback.
This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor behaviours. PMID:26963919
NASA Astrophysics Data System (ADS)
Chen, Shanyong; Li, Shengyi; Wang, Guilin
2014-11-01
The wavefront error of a large telescope must be measured to check the system quality and also to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. This is usually realized by a focal plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, it is challenging for meter-class telescopes due to the high cost and the technological challenges of producing the large ACF. A subaperture test with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and the astigmatism will be accumulated and enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore lateral positioning error of the subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which is contrary to intuition.
Finally, measurement noise can never be corrected, but it can be suppressed by averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with the surface error and misalignment of the ACF, and with noise suppression, which provides guidelines for the optomechanical design of the stitching test system.
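The tip-tilt correction from overlapping consistency mentioned above reduces, in its simplest form, to a least-squares fit of a piston/tip/tilt plane to the difference between two wavefront maps over their overlap points. The sketch below shows that elementary step only, with hypothetical names; the paper's self-calibrated stitching solves a larger joint problem that also separates the ACF surface error.

```python
import numpy as np

def relative_tip_tilt(x, y, w_ref, w_sub):
    """Least-squares piston/tip/tilt aligning subaperture map w_sub to the
    reference map w_ref on their overlap points (x, y).
    Returns (piston, tilt_x, tilt_y) such that
    w_sub + piston + tilt_x*x + tilt_y*y best matches w_ref."""
    A = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(A, w_ref - w_sub, rcond=None)
    return coef
```

In a full stitching algorithm this fit is solved simultaneously for all subapertures (with one held fixed as the datum), which is what enforces global overlapping consistency.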
Ka-Band Phased Array System Characterization
NASA Technical Reports Server (NTRS)
Acosta, R.; Johnson, S.; Sands, O.; Lambert, K.
2001-01-01
Phased Array Antennas (PAAs) using patch radiating elements are projected to transmit data at rates several orders of magnitude higher than currently offered by reflector-based systems. However, there are a number of potential sources of degradation in the Bit Error Rate (BER) performance of the communications link that are unique to PAA-based links: short element spacing can induce mutual coupling between radiating elements, long spacing can induce grating lobes, modulo-2π phase errors can add to Inter Symbol Interference (ISI), and the phase shifters and power-divider network introduce losses into the system. This paper describes efforts underway to test and evaluate the effects of these performance-degrading features of phased-array antennas when used in a high-data-rate modulation link. The tests and evaluations described here uncover the interaction between the electrical characteristics of a PAA and the BER performance of a communication link.
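The element-spacing trade-off noted above is governed by the classical grating-lobe criterion from array theory: for maximum scan angle θ₀, no grating lobe enters visible space when d/λ < 1/(1 + |sin θ₀|). The sketch below encodes that standard textbook condition; it is not taken from the paper.

```python
import math

def grating_lobe_free(d_over_lambda, max_scan_deg):
    """Classical criterion: a uniformly spaced array scanned to
    max_scan_deg stays free of grating lobes when
    d/lambda < 1 / (1 + |sin(theta_scan)|)."""
    return d_over_lambda < 1.0 / (1.0 + abs(math.sin(math.radians(max_scan_deg))))
```

For a ±60° scan the bound is d/λ < 1/(1 + sin 60°) ≈ 0.536, which is why spacings near half a wavelength are common despite the mutual coupling they invite.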
NASA Technical Reports Server (NTRS)
Pallix, Joan B.; Copeland, Richard A.; Arnold, James O. (Technical Monitor)
1995-01-01
Advanced laser-based diagnostics have been developed to examine catalytic effects and atom/surface interactions on thermal protection materials. This study establishes the feasibility of using laser-induced fluorescence for detection of O and N atom loss in a diffusion tube to measure surface catalytic activity. The experimental apparatus is versatile in that it allows fluorescence detection to be used for measuring species selective recombination coefficients as well as diffusion tube and microwave discharge diagnostics. Many of the potential sources of error in measuring atom recombination coefficients by this method have been identified and taken into account. These include scattered light, detector saturation, sample surface cleanliness, reactor design, gas pressure and composition, and selectivity of the laser probe. Recombination coefficients and their associated errors are reported for N and O atoms on a quartz surface at room temperature.
Gray, Rob; Orn, Anders; Woodman, Tim
2017-02-01
Are pressure-induced performance errors in experts associated with novice-like skill execution (as predicted by reinvestment/conscious processing theories) or expert execution toward a result that the performer typically intends to avoid (as predicted by ironic processes theory)? The present study directly compared these predictions using a baseball pitching task with two groups of experienced pitchers. One group was shown only their target, while the other group was shown the target and an ironic (avoid) zone. Both groups demonstrated significantly fewer target hits under pressure. For the target-only group, this was accompanied by significant changes in expertise-related kinematic variables. In the ironic group, the number of pitches thrown in the ironic zone was significantly higher under pressure, and there were no significant changes in kinematics. These results suggest that information about an opponent can influence the mechanisms underlying pressure-induced performance errors.
Kam, Winnie W Y; Lake, Vanessa; Banos, Connie; Davies, Justin; Banati, Richard
2013-05-30
Quantitative polymerase chain reaction (qPCR) has been widely used to quantify changes in gene copy numbers after radiation exposure. Here, we show that gamma irradiation ranging from 10 to 100 Gy of cells and cell-free DNA samples significantly affects the measured qPCR yield, due to radiation-induced fragmentation of the DNA template and, therefore, introduces errors into the estimation of gene copy numbers. The radiation-induced DNA fragmentation and, thus, measured qPCR yield varies with temperature not only in living cells, but also in isolated DNA irradiated under cell-free conditions. In summary, the variability in measured qPCR yield from irradiated samples introduces a significant error into the estimation of both mitochondrial and nuclear gene copy numbers and may give spurious evidence for polyploidization.
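The copy-number estimates affected by the fragmentation reported above are commonly computed with the standard 2^-ΔΔCt relative quantification. The sketch below shows that textbook formula (not the authors' analysis pipeline) to make the error mechanism concrete: any radiation-induced delay in the target's Ct propagates exponentially into the copy-number estimate.

```python
def relative_copy_number(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Standard 2^-ddCt estimate of relative gene copy number:
    ddCt = (Ct_target - Ct_ref)_sample - (Ct_target - Ct_ref)_control."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)
```

A fragmentation-induced delay of just one cycle in the target amplicon halves the apparent copy number, which is the kind of spurious "polyploidization" signal the abstract warns about.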
Qiao, Jie; Papa, J.; Liu, X.
2015-09-24
Monolithic large-scale diffraction gratings are desired to improve the performance of high-energy laser systems and scale them to higher energy, but the surface deformation of these diffraction gratings induces spatio-temporal coupling that is detrimental to the focusability and compressibility of the output pulse. A new deformable-grating-based pulse compressor architecture with optimized actuator positions has been designed to correct the spatial and temporal aberrations induced by grating wavefront errors. An integrated optical model has been built to analyze the effect of grating wavefront errors on the spatio-temporal performance of a compressor based on four deformable gratings. Moreover, a 1.5-meter deformable grating has been optimized using an integrated finite-element-analysis and genetic-optimization model, leading to spatio-temporal performance similar to the baseline design with ideal gratings.
Nimodipine alters acquisition of a visual discrimination task in chicks.
Deyo, R; Panksepp, J; Conner, R L
1990-03-01
Chicks 5 days old received intraperitoneal injections of nimodipine 30 min before training on either a visual discrimination task (0, 0.5, 1.0, or 5.0 mg/kg) or a test of separation-induced distress vocalizations (0, 0.5, or 2.5 mg/kg). Chicks receiving 1.0 mg/kg nimodipine made significantly fewer visual discrimination errors than vehicle controls by trials 41-60, but did not differ from controls 24 h later. Chicks in the 5 mg/kg group made significantly more errors when compared to controls both during acquisition of the task and during retention. Nimodipine did not alter separation-induced distress vocalizations at any of the doses tested, suggesting that nimodipine's effects on learning cannot be attributed to a reduction in separation distress. These data indicate that nimodipine's facilitation of learning in young subjects is dose dependent, but nimodipine failed to enhance retention.
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
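The Berkson/classical distinction that drives the dissertation above has a simple empirical signature: classical error (observed dose = true dose + noise) attenuates the estimated dose-response slope, while Berkson error (true dose = assigned dose + noise) leaves it unbiased. The quick Monte Carlo below illustrates only that contrast; it is a toy, not the instrumental-variable mixed measurement error likelihood the work develops.

```python
import numpy as np

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

rng = np.random.default_rng(1)
n, beta, err_sd = 20000, 1.0, 1.0

# Classical error: we observe the true dose plus noise -> slope attenuated
# by a factor ~ var(x)/(var(x)+err_sd^2) = 0.5 here.
x_true = rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, err_sd, n)
y = beta * x_true + rng.normal(0.0, 0.1, n)
b_classical = slope(x_obs, y)

# Berkson error: the true dose scatters around the assigned dose ->
# regressing on the assigned dose stays unbiased.
x_assigned = rng.normal(0.0, 1.0, n)
x_true_b = x_assigned + rng.normal(0.0, err_sd, n)
y_b = beta * x_true_b + rng.normal(0.0, 0.1, n)
b_berkson = slope(x_assigned, y_b)
```

The attenuation under classical error is what creates the indeterminacy the abstract mentions: without an external estimate of the error variance (or an instrument such as a biodosimeter), the bias cannot be corrected from the dose data alone.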
Target Uncertainty Mediates Sensorimotor Error Correction
Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M.
2017-01-01
Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects’ scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one’s response. By suggesting that subjects’ decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated. PMID:28129323
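The partial correction reported above falls out of a quadratic loss with an effort term, as the authors propose. The sketch below is a toy one-dimensional version of that idea (all names and the specific parameterization are illustrative, not the paper's fitted model): minimizing precision-weighted error cost plus an effort cost yields a closed-form correction that shrinks as target uncertainty grows.

```python
def optimal_correction(error, target_precision, effort_cost):
    """Minimize target_precision*(error - c)**2 + effort_cost*c**2 over the
    correction c. Setting the derivative to zero gives
    c = error * target_precision / (target_precision + effort_cost)."""
    return error * target_precision / (target_precision + effort_cost)
```

When the target estimate is precise the correction approaches the full error; as uncertainty rises (precision falls), the optimal response corrects only partially, consistent with the minimal intervention principle.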
Preparation-induced errors in EPR dosimetry of enamel: pre- and post-crushing sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haskell, E.H.; Hayes, R.B.; Kenner, G.H.
1996-01-01
Errors in dose estimation as a function of grain size for tooth enamel have previously been shown for beta irradiation after crushing. We tested the effect of gamma radiation applied to specimens before and after crushing. Extending the previous work, we found that post-crushing irradiation altered the slope of the dose-response curve of the hydroxyapatite signal and produced a grain-size-dependent offset. No changes in the slope of the dose-response curve were seen in enamel caps irradiated before crushing.
SIMulation of Medication Error induced by Clinical Trial drug labeling: the SIMME-CT study.
Dollinger, Cecile; Schwiertz, Vérane; Sarfati, Laura; Gourc-Berthod, Chloé; Guédat, Marie-Gabrielle; Alloux, Céline; Vantard, Nicolas; Gauthier, Noémie; He, Sophie; Kiouris, Elena; Caffin, Anne-Gaelle; Bernard, Delphine; Ranchon, Florence; Rioufol, Catherine
2016-06-01
To assess the impact of investigational drug labels on the risk of medication error in drug dispensing. A simulation-based learning program focusing on investigational drug dispensing was conducted. The study was undertaken in an Investigational Drugs Dispensing Unit of a University Hospital of Lyon, France. Sixty-three pharmacy workers (pharmacists, residents, technicians or students) were enrolled. Ten risk factors were selected concerning label information or the risk of confusion with another clinical trial. Each risk factor was scored independently out of 5: the higher the score, the greater the risk of error. From 400 labels analyzed, two groups were selected for the dispensing simulation: 27 labels with high risk (score ≥3) and 27 with low risk (score ≤2). Each question in the learning program was displayed as a simulated clinical trial prescription. Medication error was defined as at least one erroneous answer (i.e. error in drug dispensing). For each question, response times were collected. High-risk investigational drug labels correlated with medication error and slower response time. Error rates were significantly higher (5.5-fold) for the high-risk series. Error frequency was not significantly affected by occupational category or experience in clinical trials. SIMME-CT is the first simulation-based learning tool to focus on investigational drug labels as a risk factor for medication error. SIMME-CT was also used as a training tool for staff involved in clinical research, to develop medication error risk awareness and to validate competence in continuing medical education. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In particular, e.g. in climate models, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
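The embedding idea above can be illustrated in miniature: instead of adding a statistical error term to the model output, a stochastic perturbation is embedded in a chosen model parameter and propagated through the model by Monte Carlo, so predictions stay on the model manifold. The sketch below is a minimal illustration under that reading, not the UQ Toolkit implementation; all names are hypothetical.

```python
import numpy as np

def predict_with_embedded_error(model, params, embed_idx, sigma, x,
                                n_samples=1000, seed=0):
    """Embed a Gaussian model-error term in one parameter (rather than
    correcting the output) and propagate it by Monte Carlo.
    Returns the predictive mean and standard deviation over x."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_samples):
        p = np.array(params, float)
        p[embed_idx] += rng.normal(0.0, sigma)  # embedded error term
        out.append(model(p, x))
    out = np.asarray(out)
    return out.mean(axis=0), out.std(axis=0)
```

Because every sample is a full model evaluation, the spread automatically respects whatever physical constraints the model enforces, which is the key advantage over additive output corrections.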
Punishment sensitivity modulates the processing of negative feedback but not error-induced learning.
Unger, Kerstin; Heintz, Sonja; Kray, Jutta
2012-01-01
Accumulating evidence suggests that individual differences in punishment and reward sensitivity are associated with functional alterations in neural systems underlying error and feedback processing. In particular, individuals highly sensitive to punishment have been found to be characterized by larger mediofrontal error signals as reflected in the error negativity/error-related negativity (Ne/ERN) and the feedback-related negativity (FRN). By contrast, reward sensitivity has been shown to relate to the error positivity (Pe). Given that Ne/ERN, FRN, and Pe have been functionally linked to flexible behavioral adaptation, the aim of the present research was to examine how these electrophysiological reflections of error and feedback processing vary as a function of punishment and reward sensitivity during reinforcement learning. We applied a probabilistic learning task that involved three different conditions of feedback validity (100%, 80%, and 50%). In contrast to prior studies using response competition tasks, we did not find reliable correlations between punishment sensitivity and the Ne/ERN. Instead, higher punishment sensitivity predicted larger FRN amplitudes, irrespective of feedback validity. Moreover, higher reward sensitivity was associated with a larger Pe. However, only reward sensitivity was related to better overall learning performance and higher post-error accuracy, whereas highly punishment sensitive participants showed impaired learning performance, suggesting that larger negative feedback-related error signals were not beneficial for learning or even reflected maladaptive information processing in these individuals. Thus, although our findings indicate that individual differences in reward and punishment sensitivity are related to electrophysiological correlates of error and feedback processing, we found less evidence for influences of these personality characteristics on the relation between performance monitoring and feedback-based learning.
Patterson, Mark E; Pace, Heather A; Fincham, Jack E
2013-09-01
Although error-reporting systems enable hospitals to accurately track safety climate through the identification of adverse events, these systems may be underused within a work climate of poor communication. The objective of this analysis is to identify the extent to which perceived communication climate among hospital pharmacists impacts medical error reporting rates. This cross-sectional study used survey responses from more than 5000 pharmacists responding to the 2010 Hospital Survey on Patient Safety Culture (HSOPSC). Two composite scores were constructed for "communication openness" and "feedback and communication about error," respectively. Error reporting frequency was defined from the survey question, "In the past 12 months, how many event reports have you filled out and submitted?" Multivariable logistic regressions were used to estimate the likelihood of medical error reporting conditional upon communication openness or feedback levels, controlling for pharmacist years of experience, hospital geographic region, and ownership status. Pharmacists with higher communication openness scores compared with lower scores were 40% more likely to have filed or submitted a medical error report in the past 12 months (OR, 1.4; 95% CI, 1.1-1.7; P = 0.004). In contrast, pharmacists with higher communication feedback scores were no more likely than those with lower scores to have filed or submitted a medical error report in the past 12 months (OR, 1.0; 95% CI, 0.8-1.3; P = 0.97). Hospital work climates that encourage pharmacists to freely communicate about problems related to patient safety are conducive to medical error reporting. The presence of feedback infrastructures about error may not be sufficient to induce error-reporting behavior.
Rotational wind indicator enhances control of rotated displays
NASA Technical Reports Server (NTRS)
Cunningham, H. A.; Pavel, Misha
1991-01-01
Rotation by 108 deg of the spatial mapping between a visual display and a manual input device produces large spatial errors in a discrete aiming task. These errors are not easily corrected by voluntary mental effort, but the central nervous system does adapt gradually to the new mapping. Bernotat (1970) showed that adding true hand position to a 90-deg rotated display improved performance of a compensatory tracking task, but tracking error rose again upon removal of the explicit cue. This suggests that the explicit error signal did not induce changes in the neural mapping, but rather allowed the operator to reduce tracking error using a higher mental strategy. In this report, we describe an explicit visual display enhancement applied to a 108-deg rotated discrete aiming task. A 'wind indicator' corresponding to the effect of the mapping rotation is displayed on the operator-controlled cursor. The human operator is instructed to oppose the virtual force represented by the indicator, as one would do if flying an airplane in a crosswind. This enhancement reduces spatial aiming error in the first 10 minutes of practice by an average of 70 percent compared with a no-enhancement control condition. Moreover, it produces an adaptation aftereffect, which is evidence of learning by neural adaptation rather than by mental strategy. Finally, aiming error does not rise upon removal of the explicit cue.
Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.
Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D
2018-04-07
We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. This requires both accurate delineation of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within a 10 mm diameter change and 9.1% within a 5 mm shift, respectively. Modest errors in assumed transducer separation produced the largest SOS error among the miscalibrations (57.3% within a 5 mm shift); still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chaves-Montero, Jonás; Angulo, Raúl E.; Hernández-Monteagudo, Carlos
2018-07-01
In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of redshift uncertainties on cosmological observables. In this paper we explore the effect of sub-percent photometric redshift errors (photo-z errors) on galaxy clustering and baryonic acoustic oscillations (BAOs). Using analytic expressions and results from 1000 N-body simulations, we show how photo-z errors modify the amplitude of moments of the 2D power spectrum, their variances, the amplitude of BAOs, and the cosmological information in them. We find that (a) photo-z errors suppress the clustering on small scales, increasing the relative importance of shot noise, and thus reducing the interval of scales available for BAO analyses; (b) photo-z errors decrease the smearing of BAOs due to non-linear redshift-space distortions (RSDs) by giving less weight to line-of-sight modes; and (c) photo-z errors (and small-scale RSD) induce a scale dependence on the information encoded in the BAO scale, and that reduces the constraining power on the Hubble parameter. Using these findings, we propose a template that extracts unbiased cosmological information from samples with photo-z errors with respect to cases without them. Finally, we provide analytic expressions to forecast the precision in measuring the BAO scale, showing that spectro-photometric surveys will measure the expansion history of the Universe with a precision competitive to that of spectroscopic surveys.
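The line-of-sight damping described in point (b) has a standard analytic form that can be sketched directly: a Gaussian photo-z scatter sigma_z smears comoving positions along the line of sight by sigma_r = c*sigma_z/H(z), multiplying the observed power spectrum P(k, mu) by exp(-k^2 mu^2 sigma_r^2). The sketch below assumes this common Gaussian model with illustrative flat-LCDM parameters; it is not the paper's exact template.

```python
import numpy as np

def los_damping(k, mu, sigma_z, z, h=0.67, om=0.31):
    """Suppression factor of P(k, mu) from Gaussian photo-z errors.

    k       : wavenumber (1/Mpc)
    mu      : cosine of the angle to the line of sight
    sigma_z : total redshift scatter (assumed Gaussian)
    z       : redshift of the sample
    """
    c = 299792.458                                        # speed of light, km/s
    hz = 100.0 * h * np.sqrt(om * (1 + z) ** 3 + 1 - om)  # H(z), km/s/Mpc, flat LCDM
    sigma_r = c * sigma_z / hz                            # comoving smearing, Mpc
    return np.exp(-(k * mu * sigma_r) ** 2)
```

For sigma_z of a few 10^-3 at z ~ 1, sigma_r is several Mpc, so mu ~ 1 modes are noticeably suppressed already at k ~ 0.1/Mpc, while transverse modes (mu = 0) are untouched, consistent with the reduced weight of line-of-sight modes described above.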
Monjo, Florian; Forestier, Nicolas
2018-04-01
This study was designed to explore the effects of intrafusal thixotropy, a property affecting muscle spindle sensitivity, on the sense of force. For this purpose, psychophysical measurements of force perception were performed using an isometric force matching paradigm of elbow flexors consisting of matching different force magnitudes (5, 10 and 20% of subjects' maximal voluntary force). We investigated participants' capacity to match these forces after their indicator arm had undergone voluntary isometric conditioning contractions known to alter spindle thixotropy, i.e., contractions performed at long ('hold long') or short muscle lengths ('hold short'). In parallel, their reference arm was conditioned at the intermediate muscle length ('hold-test') at which the matchings were performed. The thixotropy hypothesis predicts that estimation errors should only be observed at low force levels (up to 10% of the maximal voluntary force) with overestimation of the forces produced following 'hold short' conditioning and underestimation following 'hold long' conditioning. We found the complete opposite, especially following 'hold-short' conditioning where subjects underestimated the force they generated with similar relative error magnitudes across force levels. In a second experiment, we tested the hypothesis that estimation errors depended on the degree of afferent-induced facilitation using the Kohnstamm phenomenon as a probe of motor pathway excitability. Because the stronger post-effects were observed following 'hold-short' conditioning, it appears that the conditioning-induced excitation of spindle afferents leads to force misjudgments by introducing a decoupling between the central effort and the cortical motor outputs.
Kushniruk, A; Nohr, C; Borycki, E
2016-11-10
A wide range of human factors approaches have been developed and adapted to healthcare for detecting and mitigating the negative unexpected consequences associated with technology in healthcare (i.e., technology-induced errors). However, greater knowledge and wider dissemination of human factors methods is needed to ensure more usable and safer health information technology (IT) systems. This paper reports on work done by the IMIA Human Factors Working Group and discusses some successful approaches that have been applied in using human factors to mitigate negative unintended consequences of health IT. The paper addresses challenges in bringing human factors approaches into mainstream health IT development. A framework for bringing human factors into the improvement of health IT is described that involves a multi-layered systematic approach to detecting technology-induced errors at all stages of an IT system development life cycle (SDLC). Such an approach has been shown to be needed and can reduce the risks associated with the release of health IT systems into live use by mitigating the risk of negative unintended consequences. Negative unintended consequences of the introduction of IT into healthcare (i.e., the potential for technology-induced errors) continue to be reported. It is concluded that methods and approaches from the human factors and usability engineering literatures need to be more widely applied, both in the vendor community and in local and regional hospital and healthcare settings. This will require greater efforts at dissemination and knowledge translation, as well as greater interaction between the academic and vendor communities.
NASA Astrophysics Data System (ADS)
Huq, Sadiq; De Roo, Frederik; Foken, Thomas; Mauder, Matthias
2017-10-01
The Campbell CSAT3 sonic anemometer is one of the most popular instruments for turbulence measurements in basic micrometeorological research and ecological applications. While its measurement uncertainty has been characterized by field experiments and wind-tunnel studies in the past, there are conflicting estimates, which motivated us to conduct a numerical experiment using large-eddy simulation to evaluate the probe-induced flow distortion of the CSAT3 anemometer under controlled conditions and with exact knowledge of the undisturbed flow. As opposed to wind-tunnel studies, we imposed oscillations in both the vertical and horizontal velocity components at the distinct frequencies and amplitudes found in typical turbulence spectra in the surface layer. The resulting flow-distortion errors for the standard deviations of the vertical velocity component range from 3 to 7%, and from 1 to 3% for the horizontal velocity component, depending on the azimuth angle. The magnitude of these errors is almost independent of the frequency of wind speed fluctuations, provided the amplitude is typical for surface-layer turbulence. A comparison of the corrections for transducer shadowing proposed by Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol 155:371-395, 2015) shows that both methods compensate for a large part of the observed error but do not sufficiently account for the azimuth dependency. Further numerical simulations could be conducted in the future to characterize the flow distortion induced by other existing types of sonic anemometers for the purposes of optimizing their geometry.
Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry
NASA Technical Reports Server (NTRS)
Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert
2011-01-01
The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, Analog to Digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the US Standard 1976 Atmosphere model. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. 
The aerodynamic model includes the pressure difference between the midbay compartment and the free stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.
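The pressure-to-altitude conversion described above can be sketched from the US Standard Atmosphere 1976 troposphere law: inverting p = p0*(1 - L*h/T0)^(g*M/(R*L)) gives altitude from sensed pressure. This is a minimal single-layer sketch (valid below 11 km) using the standard published constants; it is not the actual EFT-1 flight software.

```python
# US Standard Atmosphere 1976 constants (troposphere layer)
P0 = 101325.0    # sea-level pressure, Pa
T0 = 288.15      # sea-level temperature, K
L  = 0.0065      # tropospheric lapse rate, K/m
G  = 9.80665     # standard gravity, m/s^2
R  = 8.31447     # universal gas constant, J/(mol K)
M  = 0.0289644   # molar mass of dry air, kg/mol

def pressure_to_altitude(p):
    """Geopotential altitude (m) for static pressure p (Pa), valid below 11 km."""
    # Invert p = P0 * (1 - L*h/T0)^(G*M/(R*L)) for h
    return (T0 / L) * (1.0 - (p / P0) ** (R * L / (G * M)))
```

Any bias between the midbay compartment pressure and free-stream pressure feeds directly through this inversion, which is why the aerodynamic error model above matters for the derived altitude.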
Riddell, Nina; Faou, Pierre; Murphy, Melanie; Giummarra, Loretta; Downs, Rachael A.; Rajapaksha, Harinda
2017-01-01
Purpose: Microarray and RNA sequencing studies in the chick model of early optically induced refractive error have implicated thousands of genes, many of which have also been linked to ocular pathologies in humans, including age-related macular degeneration (AMD), choroidal neovascularization, glaucoma, and cataract. These findings highlight the potential relevance of the chick model to understanding both refractive error development and the progression to secondary pathological complications. The present study aimed to determine whether proteomic responses to early optical defocus in the chick share similarities with these transcriptome-level changes, particularly in terms of dysregulation of pathology-related molecular processes. Methods: Chicks were assigned to a lens condition (monocular +10 D [diopters] to induce hyperopia, −10 D to induce myopia, or no lens) on post-hatch day 5. Biometric measures were collected following a further 6 h and 48 h of rearing. The retina/RPE was then removed and prepared for liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) on an LTQ-Orbitrap Elite. Raw data were processed using MaxQuant, and differentially abundant proteins were identified using moderated t tests (fold change ≥1.5, Benjamini-Hochberg adjusted p<0.05). These differentially abundant proteins were compared with the genes and proteins implicated in previous exploratory transcriptome and proteomic studies of refractive error, as well as the genes and proteins linked to the ocular pathologies listed above for which myopia or hyperopia are risk factors. Finally, gene set enrichment analysis (GSEA) was used to assess whether gene sets from the Human Phenotype Ontology database were enriched in the lens groups relative to the no lens groups, and at the top or bottom of the protein data ranked by Spearman’s correlation with refraction at 6 and 48 h.
Results: Refractive errors of −2.63 D ± 0.31 D (mean ± standard error, SE) and 3.90 D ± 0.37 D were evident in the negative and positive lens groups, respectively, at 6 h. By 48 h, refractive compensation to both lens types was almost complete (negative lens −9.70 D ± 0.41 D, positive lens 7.70 D ± 0.44 D). More than 140 differentially abundant proteins were identified in each lens group relative to the no lens controls at both time points. No proteins were differentially abundant between the negative and positive lens groups at 6 h, and 13 were differentially abundant at 48 h. As there was substantial overlap in the proteins implicated across the six comparisons, a total of 390 differentially abundant proteins were identified. Sixty-five of these 390 proteins had previously been implicated in transcriptome studies of refractive error animal models, and 42 had previously been associated with AMD, choroidal neovascularization, glaucoma, and/or cataract in humans. The overlap of differentially abundant proteins with AMD-associated genes and proteins was statistically significant for all conditions (Benjamini-Hochberg adjusted p<0.05), with over-representation analysis implicating ontologies related to oxidative stress, cholesterol homeostasis, and melanin biosynthesis. GSEA identified significant enrichment of genes associated with abnormal electroretinogram, photophobia, and nyctalopia phenotypes in the proteins negatively correlated with ocular refraction across the lens groups at 6 h. The implicated proteins were primarily linked to photoreceptor dystrophies and mitochondrial disorders in humans. Conclusions: Optical defocus in the chicks induces rapid changes in the abundance of many proteins in the retina/RPE that have previously been linked to inherited and age-related ocular pathologies in humans. 
Similar changes have been identified in a meta-analysis of chick refractive error transcriptome studies, highlighting the chick as a model for the study of optically induced stress with possible relevance to understanding the development of a range of pathological states in humans. PMID:29259393
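The Benjamini-Hochberg adjustment used above to call differential abundance (adjusted p < 0.05) is a simple step-up procedure; the sketch below uses illustrative p-values, not the study's data.

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                        # ranks by ascending p-value
    ranked = p[order] * n / np.arange(1, n + 1)  # p_(i) * n / i
    # enforce monotonicity from the largest rank downward
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adjusted, 0.0, 1.0)     # return in the input order
    return out
```

A protein is then called differentially abundant when its adjusted p-value falls below 0.05 (together with the fold-change threshold reported above).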
NASA Astrophysics Data System (ADS)
Pieper, Michael
Accurate estimation or retrieval of surface emissivity spectra from long-wave infrared (LWIR) or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or space-borne sensors is necessary for many scientific and defense applications. The at-aperture radiance measured by the sensor is a function of the ground emissivity and temperature, modified by the atmosphere. Thus the emissivity retrieval process consists of two interwoven steps: atmospheric compensation (AC) to retrieve the ground radiance from the measured at-aperture radiance, and temperature-emissivity separation (TES) to separate the temperature and emissivity from the ground radiance. In-scene AC (ISAC) algorithms use blackbody-like materials in the scene, which have a linear relationship between their ground radiances and at-aperture radiances determined by the atmospheric transmission and upwelling radiance. Using a clear reference channel to estimate the ground radiance, a linear fit of the at-aperture radiance against the estimated ground radiance yields the atmospheric parameters. TES algorithms for hyperspectral imaging data assume that the emissivity spectra of solids are smooth compared to the sharp features added by the atmosphere. The ground temperature and emissivity are estimated by finding the temperature that provides the smoothest emissivity estimate. In this thesis we develop models to investigate the sensitivity of AC and TES to the basic assumptions underlying their performance. ISAC assumes that there are perfect blackbody pixels in a scene and that there is a clear channel, which is never the case. The developed ISAC model explains how the quality of blackbody-like pixels affects the shape of the atmospheric estimates and how the clear channel assumption affects their magnitude. Emissivity spectra of solids usually have some roughness. 
The TES model identifies four sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using an incorrect temperature, and the errors caused by sensor noise and wavelength calibration. The way these errors interact determines the overall TES performance. Since the AC and TES processes are interwoven, any errors in AC are transferred to TES and to the final temperature and emissivity estimates. When the two models are combined, shape errors caused by the blackbody assumption are transferred to the emissivity estimates, while magnitude errors from the clear channel assumption are compensated by temperature-induced emissivity errors in TES. The ability of the temperature-induced error to compensate for such atmospheric errors makes it difficult to determine the correct atmospheric parameters for a scene. With these models we are able to determine the expected quality of estimated emissivity spectra based on the quality of blackbody-like materials on the ground, the emissivity of the materials being searched for, and the properties of the sensor. The quality of material emissivity spectra is a key factor in determining detection performance for a material in a scene.
Stereotype susceptibility narrows the gender gap in imagined self-rotation performance.
Wraga, Maryjane; Duncan, Lauren; Jacobs, Emily C; Helt, Molly; Church, Jessica
2006-10-01
Three studies examined the impact of stereotype messages on men's and women's performance of a mental rotation task involving imagined self-rotations. Experiment 1 established baseline differences between men and women; women made 12% more errors than did men. Experiment 2 found that exposure to a positive stereotype message enhanced women's performance in comparison with that of another group of women who received neutral information. In Experiment 3, men who were exposed to the same stereotype message emphasizing a female advantage made more errors than did male controls, and the magnitude of error was similar to that for women from Experiment 1. The results suggest that the gender gap in mental rotation performance is partially caused by experiential factors, particularly those induced by sociocultural stereotypes.
Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
Birch, Gabriel Carisle; Griffin, John Clark
2015-07-23
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of the test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. Using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
The effect of the dynamic wet troposphere on radio interferometric measurements
NASA Technical Reports Server (NTRS)
Treuhaft, R. N.; Lanyi, G. E.
1987-01-01
A statistical model of water vapor fluctuations is used to describe the effect of the dynamic wet troposphere on radio interferometric measurements. It is assumed that the spatial structure of refractivity is approximated by Kolmogorov turbulence theory and that the temporal fluctuations are caused by spatial patterns moved over a site by the wind; these assumptions are examined for the VLBI delay and delay rate observables. The results suggest that the delay rate measurement error is usually dominated by water vapor fluctuations, and water-vapor-induced VLBI parameter errors and correlations are determined as a function of the delay observable errors. A method is proposed for including the water vapor fluctuations in the parameter estimation to obtain improved parameter estimates and parameter covariances.
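The two modeling assumptions above can be sketched in a few lines: a Kolmogorov "2/3-law" spatial structure function, D(r) = Cn2 * r^(2/3), turned into a temporal structure function by the frozen-flow (Taylor) hypothesis, i.e., the spatial pattern advected past the site at wind speed v so that r = v*tau. The structure-constant and wind-speed values below are illustrative, not those of the paper.

```python
import numpy as np

def refractivity_structure(tau, cn2=1e-14, wind=8.0):
    """Temporal structure function of wet refractivity fluctuations.

    tau  : time lag (s)
    cn2  : Kolmogorov structure constant (illustrative value and units)
    wind : advecting wind speed (m/s), frozen-flow hypothesis
    """
    # D(tau) = Cn2 * (v * tau)^(2/3): spatial 2/3 law evaluated at lag r = v*tau
    return cn2 * (wind * np.asarray(tau)) ** (2.0 / 3.0)
```

The 2/3 exponent means an 8-fold increase in lag only quadruples the expected squared fluctuation, which is why short-timescale observables such as delay rate are the ones dominated by water vapor.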
Inducing Multilingual Text Analysis Tools via Robust Projection across Aligned Corpora
2001-01-01
monolingual dictionary-derived list of canonical roots would resolve ambiguity regarding which is the appropriate target. ... Many of the errors are... system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity... corpora has tended to focus on their use in translation model training for MT rather than on monolingual applications. One exception is bilingual parsing
EEG Frequency Changes Prior to Making Errors in an Easy Stroop Task
Atchley, Rachel; Klee, Daniel; Oken, Barry
2017-01-01
Background: Mind-wandering is a form of off-task attention that has been associated with negative affect and rumination. The goal of this study was to assess potential electroencephalographic markers of task-unrelated thought, or mind-wandering state, as related to error rates during a specialized cognitive task. We used EEG to record frontal frequency band activity while participants completed a Stroop task that was modified to induce boredom, task-unrelated thought, and therefore mind-wandering. Methods: A convenience sample of 27 older adults (50–80 years) completed a computerized Stroop matching task. Half of the Stroop trials were congruent (word/color match), and the other half were incongruent (mismatched). Behavioral data and EEG recordings were assessed. EEG analysis focused on the 1-s epochs prior to stimulus presentation in order to compare trials followed by correct versus incorrect responses. Results: Participants made errors on 9% of incongruent trials. There were no errors on congruent trials. There was a decrease in alpha and theta band activity during the epochs followed by error responses. Conclusion: Although replication of these results is necessary, these findings suggest that potential mind-wandering, as evidenced by errors, can be characterized by a decrease in alpha and theta activity compared to on-task, accurate performance periods. PMID:29163101
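The pre-stimulus band-power comparison described in the Methods can be sketched with a Welch periodogram over each 1-s epoch; the sampling rate, band edges, and synthetic data below are assumptions for illustration, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # Hz, assumed sampling rate

def band_power(epoch, lo, hi, fs=FS):
    """Mean power spectral density of a 1-D epoch in the [lo, hi] Hz band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=len(epoch))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Synthetic 1-s pre-stimulus epoch: a 10 Hz (alpha-band) oscillation plus noise
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(FS)

alpha = band_power(epoch, 8, 12)  # alpha band, assumed 8-12 Hz
theta = band_power(epoch, 4, 7)   # theta band, assumed 4-7 Hz
```

Comparing such per-epoch alpha and theta powers between error and correct trials is the kind of contrast the abstract reports (lower pre-error alpha/theta).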
Hasebe, Satoshi; Nonaka, Fumitaka; Ohtsuki, Hiroshi
2005-11-01
A model of the cross-link interactions between accommodation and convergence predicted that heterophoria can induce large accommodation errors (Schor, Ophthalmic Physiol. Opt. 1999;19:134-150). In 99 consecutive patients with intermittent tropia or decompensated phoria, we tested these interactions by comparing their accommodative responses to a 2.50-D target under binocular fused conditions (BFC) and monocular occluded conditions (MOC). The accommodative response in BFC frequently differed from that in MOC. The magnitude of the accommodative errors in BFC, ranging from an accommodative lag of 1.80 D (in an esophoric patient) to an accommodative lead of 1.56 D (in an exophoric patient), was correlated with distance heterophoria and uncorrected refractive errors. These results indicate that heterophoria affects the accuracy of accommodation to various degrees, as the model predicted, and that an accommodative error larger than the depth of focus of the eye occurs in exchange for binocular single vision in some heterophoric patients.
Bao, Guzhi; Wickenbrock, Arne; Rochester, Simon; Zhang, Weiping; Budker, Dmitry
2018-01-19
The nonlinear Zeeman effect can induce splitting and asymmetries of magnetic-resonance lines in the geophysical magnetic-field range. This is a major source of "heading error" for scalar atomic magnetometers. We demonstrate a method to suppress the nonlinear Zeeman effect and heading error based on spin locking. In an all-optical synchronously pumped magnetometer with separate pump and probe beams, we apply a radio-frequency field which is in phase with the precessing magnetization. This results in the collapse of the multicomponent asymmetric magnetic-resonance line with ∼100 Hz width in the Earth-field range into a single peak with a width of 22 Hz, whose position is largely independent of the orientation of the sensor within a range of orientation angles. The technique is expected to be broadly applicable in practical magnetometry, potentially boosting the sensitivity and accuracy of Earth-surveying magnetometers by increasing the magnetic-resonance amplitude, decreasing its width, and removing the important and limiting heading-error systematic.
Tidal Models In A New Era of Satellite Gravimetry
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Rowlands, David D.; Egbert, G. D.; Chao, Benjamin F. (Technical Monitor)
2002-01-01
The high precision gravity measurements to be made by recently launched (and recently approved) satellites place new demands on models of Earth, atmospheric, and oceanic tides. The latter is the most problematic. The ocean tides induce variations in the Earth's geoid by amounts that far exceed the new satellite sensitivities, and tidal models must be used to correct for this. Two methods are used here to determine the standard errors in current ocean tide models. At long wavelengths these errors exceed the sensitivity of the GRACE mission. Tidal errors will not prevent the new satellite missions from improving our knowledge of the geopotential by orders of magnitude, but the errors may well contaminate GRACE estimates of temporal variations in gravity. Solar tides are especially problematic because of their long alias periods. The satellite data may be used to improve tidal models once a sufficiently long time series is obtained. Improvements in the long-wavelength components of lunar tides are especially promising.
Tedja, Milly S; Wojciechowski, Robert; Hysi, Pirro G; Eriksson, Nicholas; Furlotte, Nicholas A; Verhoeven, Virginie J M; Iglesias, Adriana I; Meester-Smoor, Magda A; Tompson, Stuart W; Fan, Qiao; Khawaja, Anthony P; Cheng, Ching-Yu; Höhn, René; Yamashiro, Kenji; Wenocur, Adam; Grazal, Clare; Haller, Toomas; Metspalu, Andres; Wedenoja, Juho; Jonas, Jost B; Wang, Ya Xing; Xie, Jing; Mitchell, Paul; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Paterson, Andrew D; Hosseini, S Mohsen; Shah, Rupal L; Williams, Cathy; Teo, Yik Ying; Tham, Yih Chung; Gupta, Preeti; Zhao, Wanting; Shi, Yuan; Saw, Woei-Yuh; Tai, E-Shyong; Sim, Xue Ling; Huffman, Jennifer E; Polašek, Ozren; Hayward, Caroline; Bencic, Goran; Rudan, Igor; Wilson, James F; Joshi, Peter K; Tsujikawa, Akitaka; Matsuda, Fumihiko; Whisenhunt, Kristina N; Zeller, Tanja; van der Spek, Peter J; Haak, Roxanna; Meijers-Heijboer, Hanne; van Leeuwen, Elisabeth M; Iyengar, Sudha K; Lass, Jonathan H; Hofman, Albert; Rivadeneira, Fernando; Uitterlinden, André G; Vingerling, Johannes R; Lehtimäki, Terho; Raitakari, Olli T; Biino, Ginevra; Concas, Maria Pina; Schwantes-An, Tae-Hwi; Igo, Robert P; Cuellar-Partida, Gabriel; Martin, Nicholas G; Craig, Jamie E; Gharahkhani, Puya; Williams, Katie M; Nag, Abhishek; Rahi, Jugnoo S; Cumberland, Phillippa M; Delcourt, Cécile; Bellenguez, Céline; Ried, Janina S; Bergen, Arthur A; Meitinger, Thomas; Gieger, Christian; Wong, Tien Yin; Hewitt, Alex W; Mackey, David A; Simpson, Claire L; Pfeiffer, Norbert; Pärssinen, Olavi; Baird, Paul N; Vitart, Veronique; Amin, Najaf; van Duijn, Cornelia M; Bailey-Wilson, Joan E; Young, Terri L; Saw, Seang-Mei; Stambolian, Dwight; MacGregor, Stuart; Guggenheim, Jeremy A; Tung, Joyce Y; Hammond, Christopher J; Klaver, Caroline C W
2018-06-01
Refractive errors, including myopia, are the most frequent eye disorders worldwide and an increasingly common cause of blindness. This genome-wide association meta-analysis in 160,420 participants and replication in 95,505 participants increased the number of established independent signals from 37 to 161 and showed high genetic correlation between Europeans and Asians (>0.78). Expression experiments and comprehensive in silico analyses identified retinal cell physiology and light processing as prominent mechanisms, and also identified functional contributions to refractive-error development in all cell types of the neurosensory retina, retinal pigment epithelium, vascular endothelium and extracellular matrix. Newly identified genes implicate novel mechanisms such as rod-and-cone bipolar synaptic neurotransmission, anterior-segment morphology and angiogenesis. Thirty-one loci resided in or near regions transcribing small RNAs, thus suggesting a role for post-transcriptional regulation. Our results support the notion that refractive errors are caused by a light-dependent retina-to-sclera signaling cascade and delineate potential pathobiological molecular drivers.
Precision of spiral-bevel gears
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1982-01-01
The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I) and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when adjusting the bearing contact pattern between the gear teeth, it is more desirable to shim the gear axially for geometry I gears and to shim the pinion axially for geometry II gears; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is most crucial for the gear, and less so for the pinion.
Multiple levels of bilingual language control: evidence from language intrusions in reading aloud.
Gollan, Tamar H; Schotter, Elizabeth R; Gomez, Joanne; Murillo, Mayra; Rayner, Keith
2014-02-01
Bilinguals rarely produce words in an unintended language. However, we induced such intrusion errors (e.g., saying el instead of he) in 32 Spanish-English bilinguals who read aloud single-language (English or Spanish) and mixed-language (haphazard mix of English and Spanish) paragraphs with English or Spanish word order. These bilinguals produced language intrusions almost exclusively in mixed-language paragraphs, and most often when attempting to produce dominant-language targets (accent-only errors also exhibited reversed language-dominance effects). Most intrusion errors occurred for function words, especially when they were not from the language that determined the word order in the paragraph. Eye movements showed that fixating a word in the nontarget language increased intrusion errors only for function words. Together, these results imply multiple mechanisms of language control, including (a) inhibition of the dominant language at both lexical and sublexical processing levels, (b) special retrieval mechanisms for function words in mixed-language utterances, and (c) attentional monitoring of the target word for its match with the intended language.
Performance Analysis of an Inter-Relay Co-operation in FSO Communication System
NASA Astrophysics Data System (ADS)
Khanna, Himanshu; Aggarwal, Mona; Ahuja, Swaran
2018-04-01
In this work, we analyze the outage and error performance of a one-way inter-relay-assisted free-space optical link. The analysis assumes the absence of a direct link between the source and destination nodes, and the feasibility of such a system configuration is studied. We consider the influence of path loss, atmospheric turbulence, and pointing-error impairments, and investigate the effect of these parameters on the system performance. The turbulence-induced fading is modeled by independent but not necessarily identically distributed gamma-gamma fading statistics. Closed-form expressions for the outage probability and probability of error are derived and illustrated by numerical plots. It is concluded that the absence of a line-of-sight path between the source and destination nodes does not lead to significant performance degradation. Moreover, for the system model under consideration, interconnected relaying provides better error performance than non-interconnected relaying and dual-hop serial relaying techniques.
An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis
NASA Technical Reports Server (NTRS)
Wenger, David Paul
1991-01-01
The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.
Brodsky, Ethan K.; Klaers, Jessica L.; Samsonov, Alexey A.; Kijowski, Richard; Block, Walter F.
2014-01-01
Non-Cartesian imaging sequences and navigational methods can be more sensitive to scanner imperfections that have little impact on conventional clinical sequences, an issue which has repeatedly complicated the commercialization of these techniques by frustrating transitions to multi-center evaluations. One such imperfection is phase errors caused by resonant frequency shifts from eddy currents induced in the cryostat by time-varying gradients, a phenomenon known as B0 eddy currents. These phase errors can have a substantial impact on sequences that use ramp sampling, bipolar gradients, and readouts at varying azimuthal angles. We present a method for measuring and correcting phase errors from B0 eddy currents and examine the results on two different scanner models. This technique yields significant improvements in image quality for high-resolution joint imaging on certain scanners. The results suggest that correction of short-time B0 eddy currents in manufacturer-provided service routines would simplify adoption of non-Cartesian sampling methods. PMID:22488532
Kwon, Young-Hoo; Casebolt, Jeffrey B
2006-01-01
One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method, are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
Wireless Monitoring of Liver Hemodynamics In Vivo
Akl, Tony J.; Wilson, Mark A.; Ericson, M. Nance; ...
2014-07-14
Liver transplants have their highest failure rate in the first two weeks following surgery. Currently, there are no devices for continuous, real-time monitoring of the graft. Here, we present a continuous perfusion and oxygen consumption monitor based on photoplethysmography. The sensor is battery operated and communicates wirelessly with a data acquisition computer, which offers the possibility of implantation provided sufficient miniaturization. In two in vivo porcine studies, the sensor tracked perfusion changes in hepatic tissue during vascular occlusions with a root mean square error (RMSE) of 0.125 mL/min/g of tissue. We show the possibility of using the pulsatile wave to measure the arterial oxygen saturation, similar to pulse oximetry. This signal is used as feedback to extract the venous oxygen saturation from the DC levels. Arterial and venous oxygen saturation changes were measured with an RMSE of 2.19% and 1.39%, respectively, when no vascular occlusions were induced. The error increased to 2.82% and 3.83% when vascular occlusions were induced during hypoxia. These errors are similar to the resolution of the oximetry catheter used as a reference. This work is the first realization of a wireless perfusion and oxygenation sensor for continuous monitoring of hepatic perfusion and oxygenation changes.
Average BER and outage probability of the ground-to-train OWC link in turbulence with rain
NASA Astrophysics Data System (ADS)
Zhang, Yixin; Yang, Yanqiu; Hu, Beibei; Yu, Lin; Hu, Zheng-Da
2017-09-01
The bit-error rate (BER) and outage probability of an optical wireless communication (OWC) link for the ground-to-train case on a curved track in turbulence with rain are evaluated. Considering the re-modulation effects of rain fluctuation on an optical signal modulated by turbulence, we set up models of average BER and outage probability in the presence of pointing errors, based on the double inverse Gaussian (IG) statistical distribution model. The numerical results indicate that, for the same covered track length, a larger curvature radius increases the outage probability and average BER. The performance of the OWC link in turbulence with rain is limited mainly by the rain rate and by pointing errors, which are induced by beam wander and train vibration. The effect of the rain rate on the performance of the link is more severe than that of the atmospheric turbulence, but the fluctuation owing to the atmospheric turbulence affects the laser beam propagation more strongly than the skewness of the rain distribution. Besides, the turbulence-induced beam wander has a more significant impact on the system in heavier rain. We can choose the size of the transmitting and receiving apertures and improve the shockproof performance of the tracks to optimize the communication performance of the system.
A Slowed Cell Cycle Stabilizes the Budding Yeast Genome.
Vinton, Peter J; Weinert, Ted
2017-06-01
During cell division, aberrant DNA structures are detected by regulators called checkpoints that slow division to allow error correction. In addition to checkpoint-induced delay, it is widely assumed, though rarely shown, that merely slowing the cell cycle might allow more time for error detection and correction, thus resulting in a more stable genome. Fidelity by a slowed cell cycle might be independent of checkpoints. Here we tested the hypothesis that a slowed cell cycle stabilizes the genome, independent of checkpoints, in the budding yeast Saccharomyces cerevisiae. We were led to this hypothesis when we identified a gene (ERV14, an ER cargo membrane protein) that, when mutated, unexpectedly stabilized the genome, as measured by three different chromosome assays. After extensive studies of pathways rendered dysfunctional in erv14 mutant cells, we were led to the inference that no particular pathway is involved in stabilization, but rather that the slowed cell cycle induced by erv14 stabilized the genome. We then demonstrated that, in genetic mutations and chemical treatments unrelated to ERV14, a slowed cell cycle indeed correlates with a more stable genome, even in checkpoint-proficient cells. The data suggest a delay in G2/M may commonly stabilize the genome. We conclude that chromosome errors are more rarely made or are more readily corrected when the cell cycle is slowed (even ∼15 min longer in an ∼100-min cell cycle). Some chromosome errors may not signal checkpoint-mediated responses, or do not signal sufficiently to allow correction, and their correction benefits from this "time checkpoint." Copyright © 2017 by the Genetics Society of America.
Predicting crystalline lens fall caused by accommodation from changes in wavefront error
He, Lin; Applegate, Raymond A.
2011-01-01
PURPOSE To illustrate and develop a method for estimating crystalline lens decentration as a function of accommodative response using changes in wavefront error, and to show the method and its limitations using previously published data (2004) from 2 iridectomized monkey eyes, so that clinicians understand how spherical aberration can induce coma, in particular in intraocular lens surgery. SETTING College of Optometry, University of Houston, Houston, USA. DESIGN Evaluation of diagnostic test or technology. METHODS Lens decentration was estimated by displacing downward the wavefront error of the lens with respect to the limiting aperture (7.0 mm) and the ocular first-surface wavefront error for each accommodative response (0.00 to 11.00 diopters) until measured values of vertical coma matched previously published experimental data (2007). Lens decentration was also calculated using an approximation formula that included only spherical aberration and vertical coma. RESULTS The change in calculated vertical coma was consistent with downward lens decentration. Calculated downward lens decentration peaked at approximately 0.48 mm in the right eye and approximately 0.31 mm in the left eye using all Zernike modes through the 7th radial order. Lens decentration calculated using only the coma and spherical aberration formula peaked at approximately 0.45 mm in the right eye and approximately 0.23 mm in the left eye. CONCLUSIONS Lens fall as a function of accommodation was quantified noninvasively using changes in vertical coma driven principally by the accommodation-induced changes in spherical aberration. The newly developed method was valid for a large pupil only. PMID:21700108
NASA Astrophysics Data System (ADS)
Clarke, John R.; Southerland, David
1999-07-01
Semi-closed-circuit underwater breathing apparatus (UBA) provide a constant flow of mixed gas containing oxygen and nitrogen or helium to a diver. However, as a diver's work rate and metabolic oxygen consumption vary, the oxygen percentages within the UBA can change dramatically. Hence, even a resting diver can become hyperoxic and be at risk for oxygen-induced seizures. Conversely, a hard-working diver can become hypoxic and lose consciousness. Unfortunately, current semi-closed UBA do not contain oxygen monitors. We describe a simple oxygen monitoring system designed and prototyped at the Navy Experimental Diving Unit. The main monitor components include a PIC microcontroller, analog-to-digital converter, bicolor LED, and oxygen sensor. The LED, affixed to the diver's mask, is steady green if the oxygen partial pressure is within pre-defined acceptable limits. A more advanced monitor with a depth sensor and additional computational circuitry could be used to estimate metabolic oxygen consumption. The computational algorithm uses the oxygen partial pressure and the diver's depth to compute O2 consumption using the steady-state solution of the differential equation describing oxygen concentrations within the UBA. Consequently, dive transients induce errors in the O2 estimation. To evaluate these errors, we used a computer simulation of semi-closed-circuit UBA dives to generate transient-rich data as input to the estimation algorithm. A step change in simulated O2 consumption elicits a monoexponential change in the estimated O2 consumption with a time constant of 5 to 10 minutes. Methods for predicting error and providing a probable-error indication to the diver are presented.
Zhang, You; Ma, Jianhua; Iyengar, Puneeth; Zhong, Yuncheng; Wang, Jing
2017-01-01
Purpose Sequential same-patient CT images may involve deformation-induced and non-deformation-induced voxel intensity changes. An adaptive deformation recovery and intensity correction (ADRIC) technique was developed to improve the CT reconstruction accuracy, and to separate deformation from non-deformation-induced voxel intensity changes between sequential CT images. Materials and Methods ADRIC views the new CT volume as a deformation of a prior high-quality CT volume, but with additional non-deformation-induced voxel intensity changes. ADRIC first applies the 2D-3D deformation technique to recover the deformation field between the prior CT volume and the new, to-be-reconstructed CT volume. Using the deformation-recovered new CT volume, ADRIC further corrects the non-deformation-induced voxel intensity changes with an updated algebraic reconstruction technique (‘ART-dTV’). The resulting intensity-corrected new CT volume is subsequently fed back into the 2D-3D deformation process to further correct the residual deformation errors, which forms an iterative loop. By ADRIC, the deformation field and the non-deformation voxel intensity corrections are optimized separately and alternately to reconstruct the final CT. CT myocardial perfusion imaging scenarios were employed to evaluate the efficacy of ADRIC, using both simulated data of the extended-cardiac-torso (XCAT) digital phantom and experimentally acquired porcine data. The reconstruction accuracy of the ADRIC technique was compared to the technique using ART-dTV alone, and to the technique using 2D-3D deformation alone. The relative error metric and the universal quality index metric are calculated between the images for quantitative analysis. The relative error is defined as the square root of the sum of squared voxel intensity differences between the reconstructed volume and the ‘ground-truth’ volume, normalized by the square root of the sum of squared ‘ground-truth’ voxel intensities. 
In addition to the XCAT and porcine studies, a physical lung phantom measurement study was also conducted. Water-filled balloons with various shapes/volumes and concentrations of iodinated contrasts were put inside the phantom to simulate both deformations and non-deformation-induced intensity changes for ADRIC reconstruction. The ADRIC-solved deformations and intensity changes from limited-view projections were compared to those of the ‘gold-standard’ volumes reconstructed from fully-sampled projections. Results For the XCAT simulation study, the relative errors of the reconstructed CT volume by the 2D-3D deformation technique, the ART-dTV technique and the ADRIC technique were 14.64%, 19.21% and 11.90% respectively, by using 20 projections for reconstruction. Using 60 projections for reconstruction reduced the relative errors to 12.33%, 11.04% and 7.92% for the three techniques, respectively. For the porcine study, the corresponding results were 13.61%, 8.78%, 6.80% by using 20 projections; and 12.14%, 6.91% and 5.29% by using 60 projections. The ADRIC technique also demonstrated robustness to varying projection exposure levels. For the physical phantom study, the average DICE coefficient between the initial prior balloon volume and the new ‘gold-standard’ balloon volumes was 0.460. ADRIC reconstruction by 21 projections increased the average DICE coefficient to 0.954. Conclusion The ADRIC technique outperformed both the 2D-3D deformation technique and the ART-dTV technique in reconstruction accuracy. The alternately solved deformation field and non-deformation voxel intensity corrections can benefit multiple clinical applications, including tumor tracking, radiotherapy dose accumulation and treatment outcome analysis. PMID:28380247
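The relative-error metric defined in the Methods above is straightforward to reproduce. A minimal NumPy sketch (the function and variable names are ours, not from the ADRIC implementation):

```python
import numpy as np

def relative_error(recon: np.ndarray, truth: np.ndarray) -> float:
    """Relative error as defined above: square root of the sum of squared
    voxel intensity differences between the reconstructed and ground-truth
    volumes, normalized by the root-sum-square of the ground-truth voxels."""
    return float(np.sqrt(np.sum((recon - truth) ** 2)) / np.sqrt(np.sum(truth ** 2)))

# Toy 3-D volumes standing in for a reconstructed and a ground-truth CT.
truth = np.ones((4, 4, 4))
recon = truth + 0.1  # uniform 10% intensity offset everywhere
print(relative_error(recon, truth))  # → 0.1 for a uniform 10% offset
```

For a uniform fractional offset the metric reduces to that fraction, which makes it a convenient sanity check before applying it to real reconstructions.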
Salgado, Eduardo; Ribeiro, Fernando; Oliveira, José
2015-06-01
The demands to which football players are exposed during a match may augment the risk of injury by decreasing the sense of joint position. This study aimed to assess the effect of pre-participation warm-up and of fatigue induced by an official football match on the knee-joint-position sense of football players. Fourteen semi-professional male football players (mean age: 25.9±4.6 years) volunteered for this study. The main outcome measures were the rate of perceived exertion and knee-joint-position sense, assessed at rest, immediately after a standard warm-up (duration 25 min), and immediately after a competitive football match (duration 90 min). Perceived exertion increased significantly from rest to the other assessments (rest: 8.6±2.0; after warm-up: 12.1±2.1; after football match: 18.5±1.3; p<0.001). Compared with rest, absolute angular error decreased significantly after the warm-up (4.1°±2.2° vs. 2.0°±1.0°; p=0.0045). After the match, absolute angular error (8.7°±3.8°) increased significantly compared with both rest (p=0.001) and the end of the warm-up (p<0.001). Relative error showed a directional bias, with an underestimation of the target position that was greater after the football match than at both rest (p<0.001) and after warm-up (p<0.001). The results indicate that knee-joint-position sense acuity was increased by pre-participation warm-up exercise and decreased by football match-induced fatigue. Warm-up exercises could contribute to knee injury prevention, whereas the deleterious effect of match-induced fatigue on the sensorimotor system could ultimately contribute to knee instability and injury. Copyright © 2014 Elsevier B.V. All rights reserved.
Monte Carlo simulation of particle-induced bit upsets
NASA Astrophysics Data System (ADS)
Wrobel, Frédéric; Touboul, Antoine; Vaillé, Jean-Roch; Boch, Jérôme; Saigné, Frédéric
2017-09-01
We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport particles through the device, to calculate the energy deposited in the sensitive region of the device, and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with experiments on SRAMs irradiated with neutrons, protons, and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.
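MC-Oracle itself is not public code, but the Monte Carlo principle the abstract describes can be illustrated: sample an energy deposit per incident particle and count the deposits that exceed an upset threshold. In the toy sketch below, the exponential deposit spectrum and all names are illustrative assumptions, not the tool's actual transport physics:

```python
import random

def toy_upset_fraction(n_particles: int, e_crit: float, seed: int = 1) -> float:
    """Toy Monte Carlo: draw one energy deposit per particle from an
    exponential distribution with mean 1 (arbitrary units, a stand-in
    for real particle transport) and count deposits above the critical
    energy e_crit. Returns the fraction of particles causing an upset."""
    rng = random.Random(seed)
    upsets = sum(1 for _ in range(n_particles)
                 if rng.expovariate(1.0) > e_crit)
    return upsets / n_particles

# For an exponential spectrum, P(E > e_crit) = exp(-e_crit), so with
# e_crit = 2 the estimated upset fraction should approach exp(-2) ≈ 0.135.
print(toy_upset_fraction(100_000, 2.0))
```

A real SER prediction would replace the toy spectrum with transported primaries and nuclear-reaction secondaries, then scale the upset fraction by the particle flux of the target environment.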
CEMERLL: The Propagation of an Atmosphere-Compensated Laser Beam to the Apollo 15 Lunar Array
NASA Technical Reports Server (NTRS)
Fugate, R. Q.; Leatherman, P. R.; Wilson, K. E.
1997-01-01
Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes.
Lenticular accommodation in relation to ametropia: the chick model.
Choh, Vivian; Sivak, Jacob G
2005-03-04
Our goal was to determine whether experimentally induced ametropias have an effect on lenticular accommodation and spherical aberration. Form-deprivation myopia and hyperopia were induced in one eye of hatchling chicks by application of a translucent goggle and +15 D lens, respectively. After 7 days, eyes were enucleated and lenses were optically scanned prior to accommodation, during accommodation, and after accommodation. Accommodation was induced by electrical stimulation of the ciliary nerve. Lenticular focal lengths for form-deprived eyes were significantly shorter than for their controls and accommodation-associated changes in focal length were significantly smaller in myopic eyes compared to their controls. For eyes imposed with +15 D blur, focal lengths were longer than those for their controls and accommodative changes were greater. Spherical aberration of the lens increased with accommodation in both form-deprived and lens-treated birds, but induction of ametropia had no effect on lenticular spherical aberration in general. Nonmonotonicity from lenticular spherical aberration increased during accommodation but effects of refractive error were equivocal. The crystalline lens contributes to refractive error changes of the eye both in the case of myopia and hyperopia. These changes are likely attributable to global changes in the size and shape of the eye.
Vasylkivska, Veronika S.; Huerta, Nicolas J.
2017-06-24
Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. A detailed study of the Oklahoma earthquake catalog's inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity), where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.
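The nearest-neighbor approach mentioned above is commonly formulated as a space-time-magnitude distance between candidate parent-child event pairs (a Zaliapin-style metric; the exact formulation used in this work is not given in the abstract). A minimal sketch under that assumption, with illustrative values rather than Oklahoma catalog data:

```python
import math

def nn_distance(parent, child, fractal_dim=1.6, b=1.0):
    """Space-time-magnitude nearest-neighbor distance in the style of
    Zaliapin-type cluster analyses: eta = dt * dr**d * 10**(-b * m_parent).
    Events are (t_days, x_km, y_km, magnitude) tuples; parameter values
    here are illustrative assumptions, not fitted catalog parameters."""
    dt = child[0] - parent[0]
    if dt <= 0:
        return math.inf  # only earlier events can be parents
    dr = math.hypot(child[1] - parent[1], child[2] - parent[2])
    return dt * dr ** fractal_dim * 10 ** (-b * parent[3])

def nearest_parent(events, i):
    """Index of the nearest-neighbor parent of event i under nn_distance."""
    d, j = min((nn_distance(events[j], events[i]), j)
               for j in range(len(events)) if j != i)
    return j if math.isfinite(d) else None

# Tiny synthetic catalog: a large early event, a nearby aftershock-like
# event one day later, and a distant unrelated event a month later.
catalog = [(0.0, 0.0, 0.0, 3.5), (1.0, 0.5, 0.0, 2.0), (30.0, 40.0, 0.0, 2.5)]
print(nearest_parent(catalog, 1))  # → 0: the aftershock links to the large event
```

Thresholding these distances separates clustered (small eta) from background (large eta) events, which is the kind of statistically correlated clustering the study examines.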
Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise
NASA Technical Reports Server (NTRS)
Sedlak, J.; Hashmall, J.
1997-01-01
Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth's magnetic field model. The Kalman filter accounts for the random error component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
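The exponentially-correlated (first-order Gauss-Markov) error state can be sketched in a scalar Kalman filter: the systematic error decays with a correlation time constant between updates instead of being treated as pure white noise. This is an illustrative toy, not the flight filter; the time step, correlation time, and noise variances below are made-up tuning values.

```python
import math

def ecrv_filter(measurements, dt=1.0, tau=100.0, q=1e-4, r=0.01):
    """Scalar Kalman filter tracking an exponentially correlated random
    variable (first-order Gauss-Markov process), the kind of model used to
    absorb a slowly varying systematic error such as field-model error."""
    phi = math.exp(-dt / tau)   # correlation decay over one time step
    x, p = 0.0, 1.0             # initial state estimate and variance
    estimates = []
    for z in measurements:
        # propagate: the ECRV decays toward zero and accumulates process noise
        x, p = phi * x, phi * phi * p + q
        # measurement update with measurement variance r
        k = p / (p + r)
        x, p = x + k * (z - x), (1.0 - k) * p
        estimates.append(x)
    return estimates
```

With a long correlation time (tau much larger than dt), the filter "remembers" the systematic error across many updates, which is the behavior the abstract associates with a well-tuned, long-memory filter.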
Buzzell, George A; Troller-Renfree, Sonya V; Barker, Tyson V; Bowman, Lindsay C; Chronis-Tuscano, Andrea; Henderson, Heather A; Kagan, Jerome; Pine, Daniel S; Fox, Nathan A
2017-12-01
Behavioral inhibition (BI) is a temperament identified in early childhood that is a risk factor for later social anxiety. However, mechanisms underlying the development of social anxiety remain unclear. To better understand the emergence of social anxiety, longitudinal studies investigating changes at behavioral and neural levels are needed. BI was assessed in the laboratory at 2 and 3 years of age (N = 268). Children returned at 12 years, and an electroencephalogram was recorded while children performed a flanker task under 2 conditions: once while believing they were being observed by peers and once while not being observed. This methodology isolated changes in error monitoring (error-related negativity) and behavior (post-error reaction time slowing) as a function of social context. At 12 years, current social anxiety symptoms and lifetime diagnoses of social anxiety were obtained. Childhood BI prospectively predicted social-specific error-related negativity increases and social anxiety symptoms in adolescence; these symptoms directly related to clinical diagnoses. Serial mediation analysis showed that social error-related negativity changes explained relations between BI and social anxiety symptoms (n = 107) and diagnosis (n = 92), but only insofar as social context also led to increased post-error reaction time slowing (a measure of error preoccupation); this model was not significantly related to generalized anxiety. Results extend prior work on socially induced changes in error monitoring and error preoccupation. These measures could index a neurobehavioral mechanism linking BI to adolescent social anxiety symptoms and diagnosis. This mechanism could relate more strongly to social than to generalized anxiety in the peri-adolescent period. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. All rights reserved.
Accuracy assessment of high-rate GPS measurements for seismology
NASA Astrophysics Data System (ADS)
Elosegui, P.; Davis, J. L.; Ekström, G.
2007-12-01
Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.
Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
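The chromatic error can be pictured as the shift in a synthetic bandpass magnitude when the effective throughput deviates from the natural-system throughput: for a flat-spectrum source the shift cancels, while a sloped spectrum picks up a color-dependent offset. The wavelength grid, bandpass shape, and 30% tilt below are toy numbers, not DES bandpasses.

```python
import numpy as np

def synthetic_mag(wl, flux, response):
    """Synthetic magnitude of a spectrum through a bandpass response on a
    uniform wavelength grid (photon-counting convention, hence the extra
    factor of wavelength); the zeropoint cancels in differences."""
    num = np.sum(flux * response * wl)
    den = np.sum(response * wl)
    return -2.5 * np.log10(num / den)

def chromatic_error(wl, flux, natural, perturbed):
    """Systematic chromatic error: magnitude change when the actual
    throughput `perturbed` deviates from the natural system `natural`."""
    return synthetic_mag(wl, flux, perturbed) - synthetic_mag(wl, flux, natural)

wl = np.linspace(700.0, 900.0, 201)             # nm, made-up bandpass region
flat = np.ones_like(wl)                         # flat-spectrum source
red = wl / wl.mean()                            # source rising toward the red
box = ((wl > 750) & (wl < 850)).astype(float)   # "natural" response
tilt = box * (1 + 0.3 * (wl - 800) / 100)       # tilted (perturbed) response
```

For `flat` the error is zero whatever the tilt, while for `red` the tilted throughput weights the red side of the band more heavily and produces a small systematic magnitude offset, which is the effect the survey-level correction targets.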
NASA Astrophysics Data System (ADS)
Zhou, Yongbo; Sun, Xuejin; Mielonen, Tero; Li, Haoran; Zhang, Riwei; Li, Yan; Zhang, Chuanliang
2018-01-01
For inhomogeneous cirrus clouds, cloud optical thickness (COT) and effective diameter (De) provided by the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 cloud products are associated with errors due to the single habit assumption (SHA), independent pixel assumption (IPA), photon absorption effect (PAE), and plane-parallel assumption (PPA). SHA means that every cirrus cloud is assumed to have the same shape habit of ice crystals. IPA errors are caused by three-dimensional (3D) radiative effects. PPA and PAE errors are caused by cloud inhomogeneity. We proposed a method to single out these different errors. These errors were examined using the Spherical Harmonics Discrete Ordinate Method simulations done for the MODIS 0.86 μm and 2.13 μm bands. Four midlatitude and tropical cirrus cases were studied. For the COT retrieval, the impacts of SHA and IPA were especially large for optically thick cirrus cases. SHA errors in COT varied distinctly with scattering angles. For the De retrieval, SHA decreased De under most circumstances. PAE decreased De for optically thick cirrus cases. For the COT and De retrievals, the dominant error source was SHA for overhead sun, whereas for oblique sun it could be any of SHA, IPA, and PAE, varying with cirrus cases and sun-satellite viewing geometries. On the domain average, the SHA errors in COT (De) were within -16.1% to 42.6% (-38.7% to 2.0%), whereas the 3D radiative effects- and cloud inhomogeneity-induced errors in COT (De) were within -5.6% to 19.6% (-2.9% to 8.0%) and -2.6% to 0% (-3.7% to 9.8%), respectively.
Kaneko, Takaaki; Tomonaga, Masaki
2014-06-01
Humans are often unaware of how they control their limb motor movements. People pay attention to their own motor movements only when their usual motor routines encounter errors. Yet little is known about the extent to which voluntary actions rely on automatic control and when automatic control shifts to deliberate control in nonhuman primates. In this study, we demonstrate that chimpanzees and humans showed similar limb motor adjustment in response to feedback error during reaching actions, whereas attentional allocation inferred from gaze behavior differed. We found that humans shifted attention to their own motor kinematics as errors were induced in motor trajectory feedback, regardless of whether the errors actually prevented them from reaching their action goals. In contrast, chimpanzees shifted attention to motor execution only when errors actually interfered with achieving a planned action goal. These results indicate that the species differed in their criteria for shifting from automatic to deliberate control of motor actions. It is widely accepted that sophisticated motor repertoires have evolved in humans. Our results suggest that the deliberate monitoring of one's own motor kinematics may have evolved in the human lineage. Copyright © 2014 Elsevier B.V. All rights reserved.
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. 
Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
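The abstract does not give the metric formulas, but the general idea of an a priori aggregation information-loss measure can be sketched as an area-weighted deviation between each cell's attribute and the mean of the coarser unit it is merged into. The function below is an illustrative stand-in for the HRU error metric, not the paper's exact definition.

```python
from collections import defaultdict

def aggregation_error(areas, values, groups):
    """Area-weighted information loss from merging cells into coarser units:
    the area-weighted mean absolute difference between each cell's attribute
    value and the area-weighted mean of its assigned group. A loss of zero
    means the discretization preserves the attribute exactly."""
    group_area = defaultdict(float)
    group_mass = defaultdict(float)
    for a, v, g in zip(areas, values, groups):
        group_area[g] += a
        group_mass[g] += a * v
    group_mean = {g: group_mass[g] / group_area[g] for g in group_area}
    loss = sum(a * abs(v - group_mean[g])
               for a, v, g in zip(areas, values, groups))
    return loss / sum(areas)
```

Comparing candidate schemes with such a metric, before any model run, is exactly the kind of decision support the paper advocates: a coarser grouping that mixes dissimilar cells scores a larger loss than one that keeps them separate.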
NASA Astrophysics Data System (ADS)
Colins, Karen; Li, Liqian; Liu, Yu
2017-05-01
Mass production of widely used semiconductor digital integrated circuits (ICs) has lowered unit costs to the level of ordinary daily consumables of a few dollars. It is therefore reasonable to contemplate the idea of an engineered system that consumes unshielded low-cost ICs for the purpose of measuring gamma radiation dose. Underlying the idea is the premise of a measurable correlation between an observable property of ICs and radiation dose. Accumulation of radiation-damage-induced state changes or error events is such a property. If correct, the premise could make possible low-cost wide-area radiation dose measurement systems, instantiated as wireless sensor networks (WSNs) with unshielded consumable ICs as nodes, communicating error events to a remote base station. The premise has been investigated quantitatively for the first time in laboratory experiments and related analyses performed at the Canadian Nuclear Laboratories. State changes or error events were recorded in real time during irradiation of samples of ICs of different types in a 60Co gamma cell. From the error-event sequences, empirical distribution functions of dose were generated. The distribution functions were inverted and probabilities scaled by total error events, to yield plots of the relationship between dose and error tallies. Positive correlation was observed, and discrete functional dependence of dose quantiles on error tallies was measured, demonstrating the correctness of the premise. The idea of an engineered system that consumes unshielded low-cost ICs in a WSN, for the purpose of measuring gamma radiation dose over wide areas, is therefore tenable.
Radiation induced leakage due to stochastic charge trapping in isolation layers of nanoscale MOSFETs
NASA Astrophysics Data System (ADS)
Zebrev, G. I.; Gorbunov, M. S.; Pershenkov, V. S.
2008-03-01
The sensitivity of sub-100 nm devices to microdose effects, which can be considered as intermediate case between cumulative total dose and single event errors, is investigated. A detailed study of radiation-induced leakage due to stochastic charge trapping in irradiated planar and nonplanar devices is developed. The influence of High-K insulators on nanoscale ICs reliability is discussed. Low critical values of trapped charge demonstrate a high sensitivity to single event effect.
Overlay improvement by exposure map based mask registration optimization
NASA Astrophysics Data System (ADS)
Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric
2015-03-01
Along with the increased miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices shrink dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, DPT (Double Patterning Technology) has been adopted for advanced technology nodes like 28nm and 14nm, so the corresponding overlay budget becomes even tighter. [2][3] After the in-die mask registration (pattern placement) measurement was introduced, model analysis with a KLA SOV (sources of variation) tool showed that the registration difference between masks is a significant error source of wafer layer-to-layer overlay at the 28nm process. [4][5] Mask registration optimization would accordingly improve wafer overlay performance. It was reported that a laser based registration control (RegC) process could be applied after pattern generation or after pellicle mounting and allowed fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction, which can be applied before mask writing based on the mask exposure map, considering the factors of mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if pattern density on the mask is kept at a low level, the in-die mask registration residue error in 3sigma stays under 5nm whatever blank type and related writer POSCOR (position correction) file is applied; this indicates that the random error induced by material or equipment occupies a relatively fixed error budget as a source of mask registration error. In production, comparing the mask registration difference across critical production layers reveals that the registration residue error of line-space layers with higher pattern density is always much larger than that of contact hole layers with lower pattern density.
Additionally, the mask registration difference between layers with similar pattern density can also achieve under-5nm performance. We assume the mask registration error, excluding the random component, is mostly induced by charge accumulation during mask writing, which may be calculated from the surrounding exposed pattern density. A multi-loading test mask registration result shows that with an x-direction writing sequence, mask registration behavior in the x direction is mainly related to the sequence direction, but mask registration in the y direction is strongly impacted by the pattern density distribution map. This confirms that part of the mask registration error is due to charging from the surrounding environment. If the exposure sequence is chip by chip, as in the normal multi-chip layout case, mask registration in both the x and y directions is impacted analogously, which has also been confirmed by real data. Therefore, we set up a simple model to predict the mask registration error based on the mask exposure map, and correct it with the given POSCOR (position correction) file for advanced mask writing if needed.
Computation and measurement of cell decision making errors using single cell data
Habibi, Iman; Cheong, Raymond; Levchenko, Andre; Emamian, Effat S.; Abdi, Ali
2017-01-01
In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF—NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell’s inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves. PMID:28379950
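The false alarm and miss metrics follow the standard binary-hypothesis framing: model the cell's downstream readout under "signal absent" and "signal present" conditions and place a decision threshold between them. The Gaussian response model and the numbers below are illustrative assumptions; the paper estimates these probabilities from measured single-cell response distributions.

```python
from statistics import NormalDist

def decision_error_probs(mu_off, sd_off, mu_on, sd_on, threshold):
    """Decision error probabilities for a noisy scalar readout (e.g. nuclear
    NF-kB level) under two hypotheses.
    False alarm: the no-signal response still crosses the threshold.
    Miss: the with-signal response fails to cross it."""
    p_false_alarm = 1.0 - NormalDist(mu_off, sd_off).cdf(threshold)
    p_miss = NormalDist(mu_on, sd_on).cdf(threshold)
    return p_false_alarm, p_miss
```

Raising the noise level (the standard deviations) widens the overlap between the two response distributions and inflates both error probabilities, which is the overlap effect the authors note a capacity metric does not capture.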
Effect of tumor amplitude and frequency on 4D modeling of Vero4DRT system.
Miura, Hideharu; Ozawa, Shuichi; Hayata, Masahiro; Tsuda, Shintaro; Yamada, Kiyoshi; Nagata, Yasushi
2017-01-01
An important issue in indirect dynamic tumor tracking with the Vero4DRT system is the accuracy of the model predictions of the internal target position based on surrogate infrared (IR) marker measurement. We investigated the predictive uncertainty of 4D modeling using an external IR marker, focusing on the effect of the target and surrogate amplitudes and periods. A programmable respiratory motion table was used to simulate breathing-induced organ motion. Sinusoidal motion sequences were produced by a dynamic phantom with different amplitudes (peak-to-peak: 10-40 mm) and periods (2-8 s). The 95th percentile 4D modeling error (4D-E95%) between the detected and predicted target positions (μ + 2SD) was calculated. 4D-E95% was linearly related to the target motion amplitude with a coefficient of determination R² = 0.99 and ranged from 0.21 to 0.88 mm. The 4D modeling error ranged from 1.49 to 0.14 mm and gradually decreased with increasing target motion period. We analyzed the predictive error in 4D modeling and the error due to the amplitude and period of the target. The 4D modeling error substantially increased with increasing amplitude and decreasing period of the target motion.
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
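The SIMEX idea is easiest to see on a plain error-in-covariate regression: deliberately add extra measurement error at inflation levels λ, watch the naive estimate attenuate, fit a trend in λ, and extrapolate back to λ = -1 (no measurement error). This is a minimal sketch of the procedure on a simple slope, not the paper's application to inverse-probability weights in an MSM; the quadratic extrapolant and replication counts are conventional illustrative choices.

```python
import random
import numpy as np

def naive_slope(x, y):
    """Ordinary least-squares slope, biased toward zero when x is noisy."""
    x, y = np.asarray(x), np.asarray(y)
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=50, rng=None):
    """SIMEX: re-add measurement error so the total error variance becomes
    (1 + lam) * sigma_u**2, average the naive slope over B replicates at
    each lam, fit a quadratic in lam, and extrapolate to lam = -1."""
    rng = rng or random.Random(0)
    pts = [(0.0, naive_slope(x_obs, y))]
    for lam in lambdas:
        s = lam ** 0.5 * sigma_u
        slopes = [naive_slope([xi + rng.gauss(0.0, s) for xi in x_obs], y)
                  for _ in range(B)]
        pts.append((lam, sum(slopes) / B))
    lams, betas = zip(*pts)
    return float(np.polyval(np.polyfit(lams, betas, 2), -1.0))
```

With classical error of variance equal to the signal variance, the naive slope is attenuated by roughly half; the quadratic extrapolation recovers a substantial part of that bias, mirroring the "practically unbiased under low-to-moderate error" behavior the simulations report.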
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsuta, Y; Tohoku University Graduate School of Medicine, Sendai, Miyagi; Kadoya, N
Purpose: In this study, we developed a system to calculate, in real time and without additional treatment planning system (TPS) calculation, a three-dimensional (3D) dose that reflects the dosimetric error caused by leaf miscalibration for head-and-neck and prostate volumetric modulated arc therapy (VMAT). Methods: An original Clarkson-dose-calculation-based system for computing the dosimetric error caused by leaf miscalibration was developed in MATLAB (MathWorks, Natick, MA). The program first calculates point doses at the isocenter, using Clarkson dose calculation, for the baseline plan and for a modified VMAT plan generated by inducing MLC errors that enlarge the aperture size by 1.0 mm. Second, the error-induced 3D dose is generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head-and-neck and prostate plans, the 3D gamma passing rates (0.5%/2 mm, global) between our method and the TPS-calculated error-induced 3D dose were 97.6±0.6% and 98.0±0.4%. The percentage dose changes in the dose-volume histogram parameters were 0.1±0.5% and 0.4±0.3% for the mean dose to the target volume, and −0.2±0.5% and 0.2±0.3% for the generalized equivalent uniform dose of the target volume. Conclusion: The erroneous 3D dose calculated by our method is useful for checking the dosimetric error caused by leaf miscalibration before pretreatment patient QA dosimetry checks.
Air pollution exposure modeling of individuals
Air pollution epidemiology studies of ambient fine particulate matter (PM2.5) often use outdoor concentrations as exposure surrogates. These surrogates can induce exposure error since they do not account for (1) time spent indoors with ambient PM2.5 levels attenuated from outdoor...
NASA Astrophysics Data System (ADS)
Gautam, Ghaneshwar; Surmick, David M.; Parigger, Christian G.
2015-07-01
In this letter, we present a brief comment regarding the recently published paper by Ivković et al., J Quant Spectrosc Radiat Transf 2015;154:1-8. Reference is made to previous experimental results to indicate that self-absorption must have occurred; however, when error propagation is carefully considered, both the widths and the peak separation predict electron densities within the error margins. The diagnostic method and the presented details on the use of the hydrogen-beta peak separation are nevertheless viewed as a welcome contribution to studies of laser-induced plasma.
Effects of dynamic aeroelasticity on handling qualities and pilot rating
NASA Technical Reports Server (NTRS)
Swaim, R. L.; Yen, W.-Y.
1978-01-01
Pilot performance parameters, such as pilot ratings, tracking errors, and pilot comments, were recorded and analyzed for a longitudinal pitch tracking task on a large, flexible aircraft. The tracking task was programmed on a fixed-base simulator with a CRT attitude director display of pitch angle command, pitch angle, and pitch angle error. Parametric variations in the undamped natural frequencies of the two lowest frequency symmetric elastic modes were made to induce varying degrees of rigid body and elastic mode interaction. The results indicate that such mode interaction can drastically affect the handling qualities and pilot ratings of the task.
ERRATUM: 'MAPPING THE GAS TURBULENCE IN THE COMA CLUSTER: PREDICTIONS FOR ASTRO-H'
NASA Technical Reports Server (NTRS)
Zuhone, J. A.; Markevitch, M.; Zhuravleva, I.
2016-01-01
The published version of this paper contained an error in Figure 5. This figure is intended to show the effect on the structure function of subtracting the bias induced by the statistical and systematic errors on the line shift. The filled circles show the bias-subtracted structure function. The positions of these points in the left panel of the original figure were calculated incorrectly. The figure is reproduced below (with the original caption) with the correct values for the bias-subtracted structure function. No other computations or figures in the original manuscript are affected.
Performance Sensitivity Studies on the PIAA Implementation of the High-Contrast Imaging Testbed
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Lou, John; Shaklan, Stuart; Levine, Marie
2010-01-01
This slide presentation reviews the sensitivity studies on Phase-Induced Amplitude Apodization (PIAA), or pupil mapping, using the High-Contrast Imaging Testbed (HCIT). PIAA is a promising technique for high-dynamic-range stellar coronagraphy. This presentation reports on an investigation of the effects of the phase and rigid-body errors of various optics on the narrowband contrast performance of the PIAA/HCIT hybrid system. The results show that the two-step wavefront control method utilizing two DMs is quite effective in compensating for the effects of realistic phase and rigid-body errors of the various optics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Haoyu S.; Zhang, Wenjing; Verma, Pragya
2015-01-01
The goal of this work is to develop a gradient approximation to the exchange–correlation functional of Kohn–Sham density functional theory for treating molecular problems with a special emphasis on the prediction of quantities important for homogeneous catalysis and other molecular energetics. Our training and validation of exchange–correlation functionals is organized in terms of databases and subdatabases. The key properties required for homogeneous catalysis are main group bond energies (database MGBE137), transition metal bond energies (database TMBE32), reaction barrier heights (database BH76), and molecular structures (database MS10). We also consider 26 other databases, most of which are subdatabases of a newly extended broad database called Database 2015, which is presented in the present article and in its ESI. Based on the mathematical form of a nonseparable gradient approximation (NGA), as first employed in the N12 functional, we design a new functional by using Database 2015 and by adding smoothness constraints to the optimization of the functional. The resulting functional is called the gradient approximation for molecules, or GAM. The GAM functional gives better results for MGBE137, TMBE32, and BH76 than any available generalized gradient approximation (GGA) or than N12. The GAM functional also gives reasonable results for MS10 with an MUE of 0.018 Å. The GAM functional provides good results both within the training sets and outside the training sets. The convergence tests and the smooth curves of exchange–correlation enhancement factor as a function of the reduced density gradient show that the GAM functional is a smooth functional that should not lead to extra expense or instability in optimizations. NGAs, like GGAs, have the advantage over meta-GGAs and hybrid GGAs of respectively smaller grid-size requirements for integrations and lower costs for extended systems.
These computational advantages combined with the relatively high accuracy for all the key properties needed for molecular catalysis make the GAM functional very promising for future applications.
Pricing Employee Stock Options (ESOs) with Random Lattice
NASA Astrophysics Data System (ADS)
Chendra, E.; Chin, L.; Sukmana, A.
2018-04-01
Employee Stock Options (ESOs) are stock options granted by companies to their employees. Unlike standard options, which can be traded by typical institutional or individual investors, ESOs cannot be sold or transferred to other investors. The sale restrictions may induce the ESO's holder to exercise early. In a much-cited paper, Hull and White propose a binomial lattice for valuing ESOs which assumes that employees will voluntarily exercise their ESOs once the stock price reaches a horizontal psychological barrier. Due to nonlinearity errors, the numerical pricing results oscillate significantly, which may lead to large pricing errors. In this paper, we use the random lattice method to price the Hull-White ESO model. This method reduces the nonlinearity error by aligning a layer of nodes of the random lattice with the psychological barrier.
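The barrier-exercise valuation can be sketched on a standard (non-random) CRR lattice; all parameter values are illustrative, and features of the full Hull-White model such as vesting periods and employee exit rates are omitted. Because the node layers here are generally not aligned with the barrier, the result oscillates with the step count — the nonlinearity error the random lattice is designed to remove:

```python
import math

def eso_binomial(S0, K, barrier, r, sigma, T, steps):
    """CRR binomial lattice for a Hull-White-style ESO: the holder
    exercises voluntarily whenever the stock price reaches the
    horizontal psychological barrier."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):     # backward induction
        for j in range(i + 1):
            S = S0 * u**j * d**(i - j)
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            # voluntary exercise at the psychological barrier
            values[j] = (S - K) if S >= barrier else cont
    return values[0]

price = eso_binomial(S0=50, K=50, barrier=100, r=0.05, sigma=0.3, T=5, steps=500)
```

Pricing the same contract at adjacent step counts (e.g. 499 vs. 500) will generally give noticeably different values, illustrating the oscillation; aligning a node layer with the barrier suppresses it.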
NASA Astrophysics Data System (ADS)
Huber, Matthew S.; Ferriãre, Ludovic; Losiak, Anna; Koeberl, Christian
2011-09-01
Planar deformation features (PDFs) in quartz, one of the most commonly used diagnostic indicators of shock metamorphism, are planes of amorphous material that follow crystallographic orientations, and can thus be distinguished from non-shock-induced fractures in quartz. The process of indexing data for PDFs from universal-stage measurements has traditionally been performed using a manual graphical method, a time-consuming process in which errors can easily be introduced. A mathematical method and computer algorithm, which we call the Automated Numerical Index Executor (ANIE) program for indexing PDFs, was produced, and is presented here. The ANIE program is more accurate and faster than the manual graphical determination of Miller-Bravais indices, as it allows control of the exact error used in the calculation and removal of human error from the process.
Stereotype threat can reduce older adults' memory errors.
Barber, Sarah J; Mather, Mara
2013-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research, we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment. Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 and 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well.
Eom, Youngsub; Ryu, Dongok; Kim, Dae Wook; Yang, Seul Ki; Song, Jong Suk; Kim, Sug-Whan; Kim, Hyo Myung
2016-10-01
To evaluate the toric intraocular lens (IOL) calculation considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position (ELP). Two thousand samples of corneal parameters with keratometric astigmatism ≥ 1.0 D were obtained using bootstrap methods. The probability distributions for incision-induced keratometric and posterior corneal astigmatisms, as well as ELP, were estimated from a literature review. The predicted residual astigmatism error using method D with an IOL add power calculator (IAPC) was compared with those derived using methods A, B, and C through Monte-Carlo simulation. Method A considered the keratometric astigmatism and incision-induced keratometric astigmatism; method B considered posterior corneal astigmatism in addition to method A; method C considered incision-induced posterior corneal astigmatism in addition to method B; and method D considered ELP in addition to method C. To verify the IAPC used in this study, the predicted toric IOL cylinder power and its axis using the IAPC were compared with ray-tracing simulation results. The median magnitude of the predicted residual astigmatism error using method D (0.25 diopters [D]) was smaller than that derived using methods A (0.42 D), B (0.38 D), and C (0.28 D). Linear regression analysis indicated that the predicted toric IOL cylinder power and its axis had excellent goodness of fit between the IAPC and the ray-tracing simulation. The IAPC is a simple but accurate method for predicting the toric IOL cylinder power and its axis considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and ELP.
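The Monte-Carlo comparison can be sketched with double-angle astigmatism vectors. The distributions, the fixed against-the-rule posterior axis, and the 0.3 D mean below are toy assumptions, not the literature-derived values the study uses:

```python
import numpy as np

rng = np.random.default_rng(1)

def to_vec(mag, axis_deg):
    """Double-angle vector representation of an astigmatism."""
    a = np.deg2rad(2.0 * np.asarray(axis_deg, dtype=float))
    return np.stack([mag * np.cos(a), mag * np.sin(a)], axis=-1)

n = 2000  # bootstrap-style sample count, mirroring the abstract
kerato = to_vec(rng.uniform(1.0, 3.0, n), rng.uniform(0.0, 180.0, n))
# Assumed posterior corneal astigmatism: ~0.3 D, against-the-rule (axis 90)
posterior = to_vec(rng.normal(0.3, 0.1, n), np.full(n, 90.0))
total = kerato + posterior                # "true" corneal astigmatism

toric_a = kerato              # method-A-like: anterior keratometry only
toric_b = kerato + posterior  # method-B-like: adds posterior astigmatism

resid_a = np.linalg.norm(total - toric_a, axis=1)
resid_b = np.linalg.norm(total - toric_b, axis=1)
# Ignoring the posterior cornea leaves a ~0.3 D median residual error.
```

Each additional modeled component removes one residual term, which is why the abstract's median error falls monotonically from method A through method D.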
Scanner qualification with IntenCD based reticle error correction
NASA Astrophysics Data System (ADS)
Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan
2010-03-01
Scanner introduction into the fab production environment is a challenging task. An efficient evaluation of scanner performance metrics during the factory acceptance test (FAT) and later during the site acceptance test (SAT) is crucial for minimizing the cycle time of pre- and post-production-start activities. If done effectively, the baseline performance metrics established during the SAT are used as a reference for scanner performance and fleet-matching monitoring and maintenance in the fab environment. Key elements which can influence the cycle time of the SAT, FAT and maintenance cycles are the imaging, process and mask characterizations involved in those cycles. Discrete mask measurement techniques are currently in use to create across-mask CDU maps. By subtracting these maps from their final wafer-measurement CDU map counterparts, it is possible to assess the real scanner-induced printed errors within certain limitations. The current discrete measurement methods are time consuming, and some techniques also overlook mask-based effects other than line-width variations, such as transmission and phase variations, all of which influence the final printed CD variability. The Applied Materials Aera2™ mask inspection tool with IntenCD™ technology can scan the mask at high speed, offering full mask coverage and accurate assessment of all mask-induced sources of error simultaneously, making it beneficial for scanner qualification and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map mask-induced CD non-uniformity. We present the results of six scanners in production and discuss the benefits of the new method.
Lo, Te-Wen; Pickle, Catherine S; Lin, Steven; Ralston, Edward J; Gurling, Mark; Schartner, Caitlin M; Bian, Qian; Doudna, Jennifer A; Meyer, Barbara J
2013-10-01
Exploitation of custom-designed nucleases to induce DNA double-strand breaks (DSBs) at genomic locations of choice has transformed our ability to edit genomes, regardless of their complexity. DSBs can trigger either error-prone repair pathways that induce random mutations at the break sites or precise homology-directed repair pathways that generate specific insertions or deletions guided by exogenously supplied DNA. Prior editing strategies using site-specific nucleases to modify the Caenorhabditis elegans genome achieved only the heritable disruption of endogenous loci through random mutagenesis by error-prone repair. Here we report highly effective strategies using TALE nucleases and RNA-guided CRISPR/Cas9 nucleases to induce error-prone repair and homology-directed repair to create heritable, precise insertion, deletion, or substitution of specific DNA sequences at targeted endogenous loci. Our robust strategies are effective across nematode species diverged by 300 million years, including necromenic nematodes (Pristionchus pacificus), male/female species (Caenorhabditis species 9), and hermaphroditic species (C. elegans). Thus, genome-editing tools now exist to transform nonmodel nematode species into genetically tractable model organisms. We demonstrate the utility of our broadly applicable genome-editing strategies by creating reagents generally useful to the nematode community and reagents specifically designed to explore the mechanism and evolution of X chromosome dosage compensation. By developing an efficient pipeline involving germline injection of nuclease mRNAs and single-stranded DNA templates, we engineered precise, heritable nucleotide changes both close to and far from DSBs to gain or lose genetic function, to tag proteins made from endogenous genes, and to excise entire loci through targeted FLP-FRT recombination.
Van, Anh T.; Weidlich, Dominik; Kooijman, Hendrick; Hock, Andreas; Rummeny, Ernst J.; Gersing, Alexandra; Kirschke, Jan S.; Karampinos, Dimitrios C.
2018-01-01
Purpose To perform in vivo isotropic‐resolution diffusion tensor imaging (DTI) of lumbosacral and sciatic nerves with a phase‐navigated diffusion‐prepared (DP) 3D turbo spin echo (TSE) acquisition and modified reconstruction incorporating intershot phase‐error correction and to investigate the improvement in image quality and diffusion quantification with the proposed phase correction. Methods Phase‐navigated DP 3D TSE included magnitude stabilizers to minimize motion and eddy‐current effects on the signal magnitude. Phase navigation of motion‐induced phase errors was introduced before readout in 3D TSE. DTI of lower back nerves was performed in vivo using 3D TSE and single‐shot echo planar imaging (ss‐EPI) in 13 subjects. Diffusion data were phase‐corrected per k_z plane with respect to T2‐weighted data. The effects of motion‐induced phase errors on DTI quantification were assessed for 3D TSE and compared with ss‐EPI. Results Non–phase‐corrected 3D TSE resulted in artifacts in diffusion‐weighted images and overestimated DTI parameters in the sciatic nerve (mean diffusivity [MD] = 2.06 ± 0.45). Phase correction of 3D TSE DTI data resulted in reductions in all DTI parameters (MD = 1.73 ± 0.26) of statistical significance (P ≤ 0.001) and in closer agreement with ss‐EPI DTI parameters (MD = 1.62 ± 0.21). Conclusion DP 3D TSE with phase correction allows distortion‐free isotropic diffusion imaging of lower back nerves with robustness to motion‐induced artifacts and DTI quantification errors. Magn Reson Med 80:609–618, 2018. © 2018 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. PMID:29380414
NASA Astrophysics Data System (ADS)
Yarloo, H.; Langari, A.; Vaezi, A.
2018-02-01
We enquire into the quasi many-body localization in topologically ordered states of matter, revolving around the case of Kitaev toric code on the ladder geometry, where different types of anyonic defects carry different masses induced by environmental errors. Our study verifies that the presence of anyons generates a complex energy landscape solely through braiding statistics, which suffices to suppress the diffusion of defects in such clean, multicomponent anyonic liquid. This nonergodic dynamics suggests a promising scenario for investigation of quasi many-body localization. Computing standard diagnostics evidences that a typical initial inhomogeneity of anyons gives birth to a glassy dynamics with an exponentially diverging time scale of the full relaxation. Our results unveil how self-generated disorder ameliorates the vulnerability of topological order away from equilibrium. This setting provides a new platform which paves the way toward impeding logical errors by self-localization of anyons in a generic, high energy state, originated exclusively in their exotic statistics.
NASA Technical Reports Server (NTRS)
Belcastro, C. M.
1984-01-01
A methodology was developed to assess the upset susceptibility/reliability of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied. The upset tests involved the random injection of analog transients, which model lightning-induced signals, onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The program code running on the microprocessor during the tests is designed to exercise all of the machine cycles and memory-addressing techniques implemented in the 8080 central processing unit. A statistical analysis is presented in which possible correlations are established between the probability of upset occurrence and transient signal inputs during specific processing states and operations. A stochastic upset susceptibility model for the 8080 microprocessor is presented. The susceptibility of this microprocessor to upset, once analog transients have entered the system, is determined analytically by calculating the state probabilities of the stochastic model.
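The state-probability calculation can be sketched with a hypothetical two-state Markov chain; the actual model conditions its transitions on 8080 machine cycles and transient characteristics, and the probabilities below are toy values:

```python
import numpy as np

# Hypothetical two-state susceptibility model: "nominal" vs. "upset",
# with toy per-cycle transition probabilities.
P = np.array([[0.98, 0.02],   # nominal -> (nominal, upset)
              [0.10, 0.90]])  # upset   -> (nominal, upset)

state = np.array([1.0, 0.0])  # start in the nominal state
for _ in range(200):          # propagate over 200 cycles
    state = state @ P
# state approaches the stationary distribution (5/6, 1/6): the
# long-run probability of the upset state under these toy numbers.
```

Replacing the toy matrix with transition probabilities estimated from the recorded upset data yields the analytical susceptibility figure the abstract describes.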
NASA Technical Reports Server (NTRS)
Daily, J. W.
1978-01-01
Laser induced fluorescence spectroscopy of flames is discussed, and derived uncertainty relations are used to calculate detectability limits due to statistical errors. Interferences due to Rayleigh scattering from molecules as well as Mie scattering and incandescence from particles have been examined for their effect on detectability limits. Fluorescence trapping is studied, and some methods for reducing the effect are considered. Fluorescence trapping places an upper limit on the number density of the fluorescing species that can be measured without signal loss.
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1989-01-01
Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 x 10^-15 sec/sec (or approx. 0.002 mm/sec) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
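The mapping from delay scatter to angular position error follows from the basic interferometric relation Δθ ≈ cΔτ/B. A quick check with the quoted numbers (the baseline length is an assumed representative value):

```python
C = 299_792_458.0      # speed of light, m/s
delay_error = 75e-12   # s, midpoint of the 50-100 ps group-delay scatter
baseline = 8.0e6       # m, representative intercontinental DSN baseline (assumed)

theta = C * delay_error / baseline  # angular error in radians
# ~2.8 nrad, consistent with the quoted 2 to 5 nrad position uncertainty
```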
Effects of Foveal Ablation on Emmetropization and Form-Deprivation Myopia
Smith, Earl L.; Ramamirtham, Ramkumar; Qiao-Grider, Ying; Hung, Li-Fang; Huang, Juan; Kee, Chea-su; Coats, David; Paysse, Evelyn
2009-01-01
Purpose Because of the prominence of central vision in primates, it has generally been assumed that signals from the fovea dominate refractive development. To test this assumption, the authors determined whether an intact fovea was essential for either normal emmetropization or the vision-induced myopic errors produced by form deprivation. Methods In 13 rhesus monkeys at 3 weeks of age, the fovea and most of the perifovea in one eye were ablated by laser photocoagulation. Five of these animals were subsequently allowed unrestricted vision. For the other eight monkeys with foveal ablations, a diffuser lens was secured in front of the treated eyes to produce form deprivation. Refractive development was assessed along the pupillary axis by retinoscopy, keratometry, and A-scan ultrasonography. Control data were obtained from 21 normal monkeys and three infants reared with plano lenses in front of both eyes. Results Foveal ablations had no apparent effect on emmetropization. Refractive errors for both eyes of the treated infants allowed unrestricted vision were within the control range throughout the observation period, and there were no systematic interocular differences in refractive error or axial length. In addition, foveal ablation did not prevent form deprivation myopia; six of the eight infants that experienced monocular form deprivation developed myopic axial anisometropias outside the control range. Conclusions Visual signals from the fovea are not essential for normal refractive development or the vision-induced alterations in ocular growth produced by form deprivation. Conversely, the peripheral retina, in isolation, can regulate emmetropizing responses and produce anomalous refractive errors in response to abnormal visual experience. These results indicate that peripheral vision should be considered when assessing the effects of visual experience on refractive development. PMID:17724167
GOSAT CO2 retrieval results using TANSO-CAI aerosol information over East Asia
NASA Astrophysics Data System (ADS)
KIM, M.; Kim, W.; Jung, Y.; Lee, S.; Kim, J.; Lee, H.; Boesch, H.; Goo, T. Y.
2015-12-01
In the satellite remote sensing of CO2, incorrect aerosol information can induce large errors, as previous studies have suggested. Many factors, such as aerosol type, the wavelength dependence of AOD, and aerosol polarization effects, have been the main error sources. Due to these aerosol effects, a large number of retrievals are screened out in quality control, or retrieval errors tend to increase if they are not screened out, especially in East Asia, where aerosol concentrations are fairly high. To reduce these aerosol-induced errors, a CO2 retrieval algorithm using simultaneous TANSO-CAI aerosol information is developed. This algorithm adopts AOD and aerosol type information as a priori information from the CAI aerosol retrieval algorithm. The CO2 retrieval algorithm is based on the optimal estimation method and VLIDORT, a vector discrete ordinate radiative transfer model. The CO2 algorithm, developed with various state vectors to find accurate CO2 concentrations, shows reasonable results when compared with other datasets. This study concentrates on the validation of the retrieved results against ground-based TCCON measurements in East Asia and on the comparison with previous retrievals from ACOS, NIES, and UoL. Although the retrieved CO2 concentration is lower than previous results by a few ppm, it shows a similar trend and high correlation with them. Retrieved data and TCCON measurements are compared at three stations, Tsukuba, Saga, and Anmyeondo, in East Asia, with collocation criteria of ±2° in latitude/longitude and ±1 hour of GOSAT passing time. The compared results also show a similar trend with good correlation. Based on the TCCON comparison results, a bias correction equation is calculated and applied to the East Asia data.
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L_r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
Precision of spiral-bevel gears
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1983-01-01
The kinematic errors in spiral-bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I), and the other along the tooth surface (geometry II). The following results were obtained: (1) Kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears. (2) When trying to adjust the bearing contact pattern between the gear teeth, for geometry I gears it is more desirable to shim the gear axially; for geometry II gears, shim the pinion axially. (3) The kinematic accuracy of spiral-bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion; the precision of mounting and manufacture is most crucial for the gear, and less so for the pinion. Previously announced in STAR as N82-30552.
Dichrometer errors resulting from large signals or improper modulator phasing.
Sutherland, John C
2012-09-01
A single-beam spectrometer equipped with a photoelastic modulator can be configured to measure a number of different parameters useful in characterizing chemical and biochemical materials including natural and magnetic circular dichroism, linear dichroism, natural and magnetic fluorescence-detected circular dichroism, and fluorescence polarization anisotropy as well as total absorption and fluorescence. The derivations of the mathematical expressions used to extract these parameters from ultraviolet, visible, and near-infrared light-induced electronic signals in a dichrometer assume that the dichroic signals are sufficiently small that certain mathematical approximations will not introduce significant errors. This article quantifies errors resulting from these assumptions as a function of the magnitude of the dichroic signals. In the case of linear dichroism, improper modulator programming can result in errors greater than those resulting from the assumption of small signal size, whereas for fluorescence polarization anisotropy, improper modulator phase alone gives incorrect results. Modulator phase can also impact the values of total absorbance recorded simultaneously with linear dichroism and total fluorescence. Copyright © 2012 Wiley Periodicals, Inc., A Wiley Company.
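The growth of the small-signal error can be illustrated with a hypothetical linearization of the kind used in PEM-based dichrometers; the article's exact expressions differ, so treat this purely as an order-of-magnitude sketch:

```python
import math

def relative_error(dA):
    """Relative error of the linear approximation tanh(x) ~ x, with
    x = ln(10) * dA / 2 (an assumed small-signal form, for illustration)."""
    x = math.log(10) * dA / 2.0
    return (x - math.tanh(x)) / math.tanh(x)

# The linearization is excellent for millidichroism-scale signals but
# degrades quickly as the dichroic signal grows.
small = relative_error(0.01)   # far below 0.1% for dA = 0.01
large = relative_error(0.5)    # roughly 10% for dA = 0.5
```

The cubic leading term of the expansion means the relative error scales with the square of the signal, which is why the approximations are safe only for "sufficiently small" dichroic signals.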
Modeling for IFOG Vibration Error Based on the Strain Distribution of Quadrupolar Fiber Coil
Gao, Zhongxing; Zhang, Yonggang; Zhang, Yunhao
2016-01-01
Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environments, especially vibrational environments, is necessary for its practical applications. This paper presents a mathematical model for IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational procedures are mainly based on the strain distribution of a quadrupolar fiber coil measured by a stress analyzer. The definition of asymmetry of strain distribution (ASD) is given in the paper to evaluate the winding quality of the coil. The established model reveals that high ASD and the variable fiber elastic modulus in large-strain situations are the two dominant reasons for the nonreciprocal phase shift in IFOG under vibration. Furthermore, theoretical analysis and computational results indicate that the vibration errors of both open-loop and closed-loop IFOGs increase with increasing vibrational amplitude, vibrational frequency, and ASD. Finally, an estimation of vibration-induced IFOG errors in aircraft is done according to the proposed model. Our work is meaningful in designing IFOG coils to achieve better anti-vibration performance. PMID:27455257
Error field optimization in DIII-D using extremum seeking control
NASA Astrophysics Data System (ADS)
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; Humphreys, D. A.; Eidietis, N.; Hanson, J. M.; Paz-Soldan, C.; Strait, E. J.; Walker, M. L.
2016-07-01
DIII-D experiments have demonstrated a new real-time approach to tokamak error field control based on maximizing the toroidal angular momentum. This approach uses extremum seeking control theory to optimize the error field in real time without inducing instabilities. Slowly-rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real-time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well-suited for multi-coil, multi-harmonic error field optimizations in disruption sensitive devices as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of the convergence, and identify future algorithm upgrades that may allow more rapid convergence that projects to convergence times in ITER on the order of tens of seconds.
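The perturb-and-demodulate loop at the heart of extremum seeking control can be sketched in a few lines. The objective function, gains, and dither parameters below are toy assumptions standing in for the rotation measurement and the error-field coil current:

```python
import math

def extremum_seek(f, theta0, a=0.2, omega=20.0, k=1.0, dt=0.005, T=150.0):
    """Perturb-and-observe extremum seeking: a sinusoidal dither probes
    the local slope of f, and demodulating the measured output by the
    same sinusoid drives theta up the gradient toward the maximum."""
    theta, t = theta0, 0.0
    for _ in range(int(T / dt)):
        y = f(theta + a * math.sin(omega * t))     # perturbed measurement
        theta += k * y * math.sin(omega * t) * dt  # demodulate, integrate
        t += dt
    return theta

# Toy stand-in for toroidal rotation vs. correction-coil current:
# rotation peaks when the current reaches -1.0 (arbitrary units).
best = extremum_seek(lambda c: 1.0 - (c + 1.0) ** 2, theta0=0.5)
```

Averaged over a dither period, the demodulated product is proportional to the local gradient, so the loop climbs toward the maximum without ever needing to trigger an instability, which is the property the abstract highlights.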
Smartphone virtual reality to increase clinical balance assessment responsiveness.
Rausch, Matthew; Simon, Janet E; Starkey, Chad; Grooms, Dustin R
2018-05-22
To determine whether a low-cost, smartphone-based, clinically applicable virtual reality (VR) modification of the standard Balance Error Scoring System (BESS) can challenge postural stability beyond the traditional BESS. Cross-sectional study. University research laboratory. 28 adults (mean age 23.36 ± 2.38 years, mean height 1.74 ± 0.13 m, mean weight 77.95 ± 16.63 kg). BESS postural control errors and center of pressure (CoP) velocity were recorded during the BESS test and a VR-modified BESS (VR-BESS). The VR-BESS used a headset and phone to display a rollercoaster ride to induce a visual and vestibular challenge to postural stability. The VR-BESS significantly increased total errors (20.93 vs. 11.42, p < 0.05) and CoP velocity summed across all stances and surfaces (52.96 cm/s vs. 37.73 cm/s, p < 0.05) beyond the traditional BESS. The VR-BESS provides a standardized and effective way to increase the postural stability challenge in the clinical setting. The VR-BESS can use any smartphone to induce postural stability deficits that may otherwise normalize with traditional testing, providing a unique, relatively inexpensive and simple-to-operate clinical assessment tool and/or training stimulus. Copyright © 2018 Elsevier Ltd. All rights reserved.
Correction of the inertial effect resulting from a plate moving under low friction conditions
Yang, Feng; Pai, Yi-Chung
2007-01-01
The purpose of the present study was to develop a set of equations that can be employed to remove the inertial effect introduced by the movable platform upon which a person stands during a slip induced in gait; this allows the real ground reaction force (GRF) and its center of pressure (COP) to be determined. Analyses were also performed to determine how sensitive the COP offsets were to the changes of the parameters in the equation that affected the correction of the inertial effect. In addition, the results were verified empirically using a low friction movable platform together with a stationary object, a pendulum, and human subjects during a slip induced during gait. Our analyses revealed that the amount of correction required for the inertial effect due to the movable component is affected by its mass and its center of mass (COM) position, acceleration, the friction coefficient, and the landing position of the foot relative to the COM. The maximum error in the horizontal component of the GRF was close to 0.09 body weight during the recovery from a slip in walking. When uncorrected, the maximum error in the COP measurement could reach as much as 4 cm. Finally, these errors were magnified in the joint moment computation and propagated proximally, ranging from 0.2 to 1.0 Nm/body mass from the ankle to the hip. PMID:17306274
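The core of the correction is a force balance on the sliding platform: the shear measured by the embedded force plate includes the platform's own inertial reaction, which scales with its mass and COM acceleration. A minimal sketch follows; the one-term balance, masses, and accelerations are illustrative assumptions, not the paper's full equations.

```python
def platform_inertial_force(m_platform, a_platform):
    """Inertial reaction (N) of the movable platform -- the spurious shear
    that must be removed from the measured force to recover the subject's
    true horizontal ground reaction force."""
    return m_platform * a_platform

# Illustrative magnitudes (not the paper's values): a 12 kg platform peaking
# at ~5 m/s^2 during the slip contributes ~60 N of spurious shear, close to
# the ~0.09 body weight error the study reports, here for a 70 kg subject.
g = 9.81
error_fraction = platform_inertial_force(12.0, 5.0) / (70.0 * g)
```

The same balance explains why the error propagates into the COP and joint moments: an uncorrected shear force offsets the apparent point of force application.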
Scheidegger, Rachel; Vinogradov, Elena; Alsop, David C
2011-01-01
Amide proton transfer (APT) imaging has shown promise as an indicator of tissue pH and as a marker for brain tumors. Sources of error in APT measurements include direct water saturation, and magnetization transfer (MT) from membranes and macromolecules. These are typically suppressed by post-processing asymmetry analysis. However, this approach is strongly dependent on B0 homogeneity and can introduce additional errors due to intrinsic MT asymmetry, aliphatic proton features opposite the amide peak, and radiation damping-induced asymmetry. Although several methods exist to correct for B0 inhomogeneity, they tremendously increase scan times and do not address errors induced by asymmetry of the z-spectrum. In this paper, a novel saturation scheme - saturation with frequency alternating RF irradiation (SAFARI) - is proposed in combination with a new magnetization transfer ratio (MTR) parameter designed to generate APT images insensitive to direct water saturation and MT, even in the presence of B0 inhomogeneity. The feasibility of the SAFARI technique is demonstrated in phantoms and in the human brain. Experimental results show that SAFARI successfully removes direct water saturation and MT contamination from APT images. It is insensitive to B0 offsets up to 180Hz without using additional B0 correction, thereby dramatically reducing scanning time. PMID:21608029
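For context, the conventional asymmetry analysis that SAFARI is designed to replace compares the saturated signal on either side of the water resonance. The bookkeeping is standard CEST/APT practice; the signal values below are invented for illustration.

```python
def mtr_asym(s_neg, s_pos, s0):
    """Conventional APT metric: MTRasym(dw) = (S(-dw) - S(+dw)) / S0.

    Any B0 shift or intrinsic MT asymmetry moves s_neg and s_pos unequally
    and leaks directly into this difference -- the error sources the SAFARI
    scheme is built to avoid.
    """
    return (s_neg - s_pos) / s0

# Made-up saturated intensities at -/+3.5 ppm relative to unsaturated S0:
apt = mtr_asym(s_neg=0.72, s_pos=0.68, s0=1.0)   # ~4% APT effect
```

A B0 offset effectively shifts both sampling points along the z-spectrum, so the subtraction no longer cancels direct water saturation, which is why conventional analysis needs per-voxel B0 mapping.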
Event-related potentials in response to violations of content and temporal event knowledge.
Drummer, Janna; van der Meer, Elke; Schaadt, Gesa
2016-01-08
Scripts that store knowledge of everyday events are fundamentally important for managing daily routines. Content event knowledge (i.e., knowledge about which events belong to a script) and temporal event knowledge (i.e., knowledge about the chronological order of events in a script) constitute qualitatively different forms of knowledge. However, there is limited information about each distinct process and the time course involved in accessing content and temporal event knowledge. Therefore, we analyzed event-related potentials (ERPs) in response to either correctly presented event sequences or event sequences that contained a content or temporal error. We found an N400, which was followed by a posteriorly distributed P600 in response to content errors in event sequences. By contrast, we did not find an N400 but an anteriorly distributed P600 in response to temporal errors in event sequences. Thus, the N400 seems to be elicited as a response to a general mismatch between an event and the established event model. We assume that the expectancy violation of content event knowledge, as indicated by the N400, induces the collapse of the established event model, a process indicated by the posterior P600. The expectancy violation of temporal event knowledge is assumed to induce an attempt to reorganize the event model in working memory, a process indicated by the frontal P600. Copyright © 2015 Elsevier Ltd. All rights reserved.
3-D decoupled inversion of complex conductivity data in the real number domain
NASA Astrophysics Data System (ADS)
Johnson, Timothy C.; Thomle, Jonathan
2018-01-01
Complex conductivity imaging (also called induced polarization imaging or spectral induced polarization imaging when conducted at multiple frequencies) involves estimating the frequency-dependent complex electrical conductivity distribution of the subsurface. The superior diagnostic capabilities provided by complex conductivity spectra have driven advancements in mechanistic understanding of complex conductivity as well as modelling and inversion approaches over the past several decades. In this work, we demonstrate the theory and application for an approach to 3-D modelling and inversion of complex conductivity data in the real number domain. Beginning from first principles, we demonstrate how the equations for the real and imaginary components of the complex potential may be decoupled. This leads to a description of the real and imaginary source current terms, and a corresponding assessment of error arising from an assumption necessary to complete the decoupled modelling. We show that for most earth materials, which exhibit relatively small phases (e.g. less than 0.2 radians) in complex conductivity, these errors become insignificant. For higher phase materials, the errors may be quantified and corrected through an iterative procedure. We demonstrate the accuracy of numerical forward solutions by direct comparison to corresponding analytic solutions. We demonstrate the inversion using both synthetic and field examples with data collected over a waste infiltration trench, at frequencies ranging from 0.5 to 7.5 Hz.
Simulation of a long-term aquifer test conducted near the Rio Grande, Albuquerque, New Mexico
McAda, Douglas P.
2001-01-01
A long-term aquifer test was conducted near the Rio Grande in Albuquerque during January and February 1995 using 22 wells and piezometers at nine sites, with the City of Albuquerque Griegos 1 production well as the pumped well. Griegos 1 discharge averaged about 2,330 gallons per minute for 54.4 days. A three-dimensional finite-difference ground-water-flow model was used to estimate aquifer properties in the vicinity of the Griegos well field and the amount of infiltration induced into the aquifer system from the Rio Grande and riverside drains as a result of pumping during the test. The model was initially calibrated by trial-and-error adjustments of the aquifer properties. The model was recalibrated using a nonlinear least-squares regression technique. The aquifer system in the area includes the middle Tertiary to Quaternary Santa Fe Group and post-Santa Fe Group valley- and basin-fill deposits of the Albuquerque Basin. The Rio Grande and adjacent riverside drains are in hydraulic connection with the aquifer system. The hydraulic-conductivity values of the upper part of the Santa Fe Group resulting from the model calibrated by trial and error varied by zone in the model and ranged from 12 to 33 feet per day. The hydraulic conductivity of the inner-valley alluvium was 45 feet per day. The vertical to horizontal anisotropy ratio was 1:140. Specific storage was 4 x 10-6 per foot of aquifer thickness, and specific yield was 0.15 (dimensionless). The sum of squared errors between the observed and simulated drawdowns was 130 feet squared. Not all aquifer properties could be estimated using nonlinear regression because of model insensitivity to some aquifer properties at observation locations. Hydraulic conductivity of the inner-valley alluvium, middle part of the Santa Fe Group, and riverbed and riverside-drain bed and specific yield had low sensitivity values and therefore could not be estimated. 
Of the properties estimated, hydraulic conductivity of the upper part of the Santa Fe Group was estimated to be 12 feet per day, the vertical to horizontal anisotropy ratio was estimated to be 1:82, and specific storage was estimated to be 1.2 x 10-6 per foot of aquifer thickness. The overall sum of squared errors between the observed and simulated drawdowns was 87 feet squared, a significant improvement over the model calibrated by trial and error. At the end of aquifer-test pumping, induced infiltration from the Rio Grande and riverside drains was simulated to be 13 percent of the total amount of water pumped. The remainder was water removed from aquifer storage. After pumping stopped, induced infiltration continued to replenish aquifer storage. Simulations estimated that 5 years after pumping began (about 4.85 years after pumping stopped), 58 to 72 percent of the total amount of water pumped was replenished by induced infiltration from the Rio Grande surface-water system.
Air Pollution Exposure Modeling for Epidemiology Studies and Public Health
Air pollution epidemiology studies of ambient fine particulate matter (PM2.5) often use outdoor concentrations as exposure surrogates. These surrogates can induce exposure error since they do not account for (1) time spent indoors with ambient PM2.5 levels attenuated from outdoor...
Draguta, Sergiu; Sharia, Onise; Yoon, Seog Joon; Brennan, Michael C; Morozov, Yurii V; Manser, Joseph S; Kamat, Prashant V; Schneider, William F; Kuno, Masaru
2018-01-11
The original version of this Article contained an error in the spelling of the author Joseph S. Manser, which was incorrectly given as Joseph M. Manser. This has now been corrected in both the PDF and HTML versions of the Article.
Cues that Trigger Social Transmission of Disinhibition in Young Children
ERIC Educational Resources Information Center
Moriguchi, Yusuke; Minato, Takashi; Ishiguro, Hiroshi; Shinohara, Ikuko; Itakura, Shoji
2010-01-01
Previous studies have shown that observing a human model's actions, but not a robot's actions, could induce young children's perseverative behaviors and suggested that children's sociocognitive abilities can lead to perseverative errors ("social transmission of disinhibition"). This study investigated how the social transmission of disinhibition…
Thomas, S; Schaeffel, F
2000-01-01
It is not clear whether emmetropization is confined to spherical refractive errors, or whether astigmatic errors are also corrected via visual feedback. Experimental results from the animal model of the chicken are equivocal, since compensation of imposed astigmatic defocus was found in some but not all studies. Astigmatism could only be compensated by changes in the geometry of the cornea or lens. We tested whether astigmatic spectacle lenses induce astigmatic accommodation as a possible first step of long-lasting compensation. Thirty-five chickens were treated with cylinder lenses (+3/0D or -3/0D) for 5 h. Refractions were determined at 1.38 m distance without cycloplegia in hand-held chicks before attaching the lenses, with the lenses on (0 h), after 3 and 5 h, and after removal of the lenses. Spheres (S), cylinders (C) and axes (A) were determined using infrared photoretinoscopy in three axes (the 'PowerRefractor', equipped with a 135 mm lens). (1) The performance of the 'PowerRefractor' was tested in the chickens with trial lenses and gave correct refractions. (2) Astigmatic trial lenses induced refractive errors as expected from their powers in the case of +3/0D lenses ((S) +3.26 +/- 0.93D, (C) -3.45 +/- 0.87D). In the case of -3/0D lenses, slightly more hyperopic spheres were induced ((S) +4.5 +/- 0.48D) but the cylinders were still as expected (-3.25 +/- 0.49D). The axes of astigmatism were correctly reproduced, since rotating the lenses changed the axes of the induced cylinders as expected. (3) Neither after 3 nor after 5 h of lens wear were there significant changes in the axes or the magnitude of astigmatism. Directly after removal of the lenses, the refractions did not differ from their start-up values (with +3/0D lenses: (S) +3.31 +/- 1.05D vs. +3.22 +/- 0.76D, (C) -1.19 +/- 1.77D vs. -0.65 +/- 0.94D, (A) 96 +/- 49 vs. 113 +/- 45 deg; with -3/0D lenses: (S) 2.63 +/- 1.12D vs. 2.97 +/- 0.94D, (C) -1.11 +/- 1.15D vs. -0.53 +/- 0.56D, (A) 78 +/- 24 vs. 131 +/- 35 deg). The most intuitive mechanism for compensation of astigmatic refractive errors, astigmatic accommodation, could not be demonstrated in chickens. In light of this finding, it seems unlikely that a visually controlled mechanism operates during development to reduce astigmatism by changing corneal or lenticular growth.
NASA Astrophysics Data System (ADS)
Lin, Z.; Kim-Hak, D.; Popp, B. N.; Wallsgrove, N.; Kagawa-Viviani, A.; Johnson, J.
2017-12-01
Cavity ring-down spectroscopy (CRDS) is a technology based on the spectral absorption of gas molecules of interest at specific spectral regions. The CRDS technique enables the analysis of hydrogen and oxygen stable isotope ratios of water by directly measuring individual isotopologue absorption peaks such as H16OH, H18OH, and D16OH. Early work demonstrated that the accuracy of isotope analysis by CRDS and other laser-based absorption techniques could be compromised by spectral interference from organic compounds, in particular methanol and ethanol, which can be prevalent in ecologically-derived waters. There have been several methods developed by various research groups including Picarro to address the organic interference challenge. Here, we describe an organic fitter and a post-processing algorithm designed to improve the accuracy of the isotopic analysis of the "organic contaminated" water specifically for Picarro models L2130-i and L2140-i. To create the organic fitter, the absorption features of methanol around 7200 cm-1 were characterized and incorporated into spectral analysis. Since there was residual interference remaining after applying the organic fitter, a statistical model was also developed for post-processing correction. To evaluate the performance of the organic fitter and the postprocessing correction, we conducted controlled experiments on the L2130-i for two water samples with different isotope ratios blended with varying amounts of methanol (0-0.5%) and ethanol (0-5%). When the original fitter was not used for spectral analysis, the addition of 0.5% methanol changed the apparent isotopic composition of the water samples by +62‰ for δ18O values and +97‰ for δ2H values, and the addition of 5% ethanol changed the apparent isotopic composition by -0.5‰ for δ18O values and -3‰ for δ2H values. 
When the organic fitter was used for spectral analysis, the maximum methanol-induced errors were reduced to +4‰ for δ18O values and +5‰ for δ2H values, and the maximum ethanol-induced errors were unchanged. When the organic fitter was combined with the post-processing correction, up to 99.8% of the total methanol-induced errors and 96% of the total ethanol-induced errors could be corrected. The applicability of the algorithm to natural samples such as plant and soil waters will be investigated.
A matter of emphasis: Linguistic stress habits modulate serial recall.
Taylor, John C; Macken, Bill; Jones, Dylan M
2015-04-01
Models of short-term memory for sequential information rely on item-level, feature-based descriptions to account for errors in serial recall. Transposition errors within alternating similar/dissimilar letter sequences derive from interactions between overlapping features. However, in two experiments, we demonstrated that the characteristics of the sequence are what determine the fates of items, rather than the properties ascribed to the items themselves. Performance in alternating sequences is determined by the way that the sequences themselves induce particular prosodic rehearsal patterns, and not by the nature of the items per se. In a serial recall task, the shapes of the canonical "saw-tooth" serial position curves and transposition error probabilities at successive input-output distances were modulated by subvocal rehearsal strategies, despite all item-based parameters being held constant. We replicated this finding using nonalternating lists, thus demonstrating that transpositions are substantially influenced by prosodic features-such as stress-that emerge during subvocal rehearsal.
Low target prevalence is a stubborn source of errors in visual search tasks
Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour
2009-01-01
In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening where miss errors are dangerous. A series of experiments show this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
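The criterion-shift account can be made concrete with standard signal-detection arithmetic. The hit and false-alarm rates below are invented for illustration, not the paper's data.

```python
from statistics import NormalDist

def sdt(hit_rate, fa_rate):
    """Sensitivity d' and criterion c from hit and false-alarm rates,
    using the standard equal-variance Gaussian model."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Same underlying sensitivity, but at low prevalence observers say "absent"
# far more readily: misses rise because c shifts, not because d' drops.
d_hi, c_hi = sdt(hit_rate=0.89, fa_rate=0.11)   # 50% prevalence (illustrative)
d_lo, c_lo = sdt(hit_rate=0.67, fa_rate=0.02)   # 2% prevalence (illustrative)
```

Here d' is nearly identical across the two conditions while c moves from neutral to strongly conservative, which is exactly the signature the experiments report.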
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitzpatrick, Richard
2007-09-24
Dr. Fitzpatrick has written an MHD code in order to investigate the interaction of tearing modes with flow and external magnetic perturbations, which has been successfully benchmarked against both linear and nonlinear theory and used to investigate error-field penetration in flowing plasmas. The same code was used to investigate the so-called Taylor problem. He employed the University of Chicago's FLASH code to further investigate the Taylor problem, discovering a new aspect of the problem. Dr. Fitzpatrick has written a 2-D Hall MHD code and used it to investigate the collisionless Taylor problem. Dr. Waelbroeck has performed an investigation of the scaling of the error-field penetration threshold in collisionless plasmas. Paul Watson and Dr. Fitzpatrick have written a fully-implicit extended-MHD code using the PETSC framework. Five publications have resulted from this grant work.
NASA Astrophysics Data System (ADS)
Balaji, K. A.; Prabu, K.
2018-03-01
There is an immense demand for high-bandwidth, high-data-rate systems, which can be met by wireless optical communication or free space optics (FSO). FSO has therefore gained a pivotal role in research, with the added advantages of cost-effectiveness and licence-free huge bandwidth. Unfortunately, the optical signal in free space suffers from irradiance and phase fluctuations due to atmospheric turbulence and pointing errors, which deteriorate the signal and degrade the performance of the communication system over longer distances. In this paper, we consider a polarization shift keying (POLSK) system applied with wavelength and time diversity over the Málaga (M) distribution to mitigate turbulence-induced fading. We derive closed-form mathematical expressions for the system's outage probability and average bit error rate (BER). The results show that wavelength and time diversity schemes enhance the system's performance.
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truong, Samson S.
2014-01-01
Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the Multi Utility Technology Test Bed (X-56A) aircraft is the flight demonstration of active flutter suppression; therefore, this study identifies the primary and secondary modes for structural model tuning based on flutter analysis of the X-56A. A structural dynamic finite element model of the X-56A, validated against ground vibration tests, is created and then improved using a model tuning tool. Two different weight configurations of the X-56A have been improved in a single optimization run.
Stereotype threat can reduce older adults' memory errors
Barber, Sarah J.; Mather, Mara
2014-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment (Seibt & Förster, 2004). Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 & 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well. PMID:24131297
Dynamic changes in brain activity during prism adaptation.
Luauté, Jacques; Schwartz, Sophie; Rossetti, Yves; Spiridon, Mona; Rode, Gilles; Boisson, Dominique; Vuilleumier, Patrik
2009-01-07
Prism adaptation does not only induce short-term sensorimotor plasticity, but also longer-term reorganization in the neural representation of space. We used event-related fMRI to study dynamic changes in brain activity during both early and prolonged exposure to visual prisms. Participants performed a pointing task before, during, and after prism exposure. Measures of trial-by-trial pointing errors and corrections allowed parametric analyses of brain activity as a function of performance. We show that during the earliest phase of prism exposure, anterior intraparietal sulcus was primarily implicated in error detection, whereas parieto-occipital sulcus was implicated in error correction. Cerebellum activity showed progressive increases during prism exposure, in accordance with a key role for spatial realignment. This time course further suggests that the cerebellum might promote neural changes in superior temporal cortex, which was selectively activated during the later phase of prism exposure and could mediate the effects of prism adaptation on cognitive spatial representations.
Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs
Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; ...
2016-09-19
Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land–atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. Ultimately, these results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land–atmosphere interactions in the development and maintenance of SAM.
Dominant Drivers of GCMs Errors in the Simulation of South Asian Summer Monsoon
NASA Astrophysics Data System (ADS)
Ashfaq, Moetasim
2017-04-01
Accurate simulation of the South Asian summer monsoon (SAM) is a longstanding unresolved problem in climate modeling science. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to demonstrate that most of the simulation errors in the summer season and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation over land further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land-atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. These results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land-atmosphere interactions in the development and maintenance of SAM.
Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model
NASA Astrophysics Data System (ADS)
Tang, Jingshi; Liu, Lin; Miao, Manqian
Tiangong-1 is China's test module for a future space station. It went through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted over a long period of time. As Tiangong-1 works in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, can induce semi-major axis errors of up to a few kilometers and overall position errors of up to several thousand kilometers. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of diurnal mean densities. This series bears the recent variation of the atmosphere density and can be analyzed for various periods. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that the densities predicted with this approach serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.
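The fit-then-extrapolate step for the diurnal mean density series can be sketched as an ordinary least-squares fit to a mean, a linear trend, and one dominant harmonic. The basis choice, the ~27-day solar-rotation period, and the synthetic data are illustrative assumptions, not the authors' actual model.

```python
import math

def _solve(a, b):
    """Gaussian elimination with partial pivoting (the normal system is 4x4)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_predict_density(days, rho, t_future, period=27.0):
    """Least-squares fit of mean + linear trend + one harmonic (here the
    ~27-day solar-rotation period) to a diurnal mean density series, then
    extrapolation to t_future (days) for use in long-term orbit prediction."""
    w = 2.0 * math.pi / period
    basis = lambda t: [1.0, t, math.sin(w * t), math.cos(w * t)]
    n = 4
    ata = [[0.0] * n for _ in range(n)]
    atb = [0.0] * n
    for t, y in zip(days, rho):
        b = basis(t)
        for i in range(n):
            atb[i] += b[i] * y
            for j in range(n):
                ata[i][j] += b[i] * b[j]
    coef = _solve(ata, atb)
    return sum(c * v for c, v in zip(coef, basis(t_future)))
```

In practice the fitted basis would be chosen from a spectral analysis of the accumulated density series, and the predicted densities would replace the a priori model values inside the orbit propagator.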
Overview of medical errors and adverse events
2012-01-01
Safety is a global concept that encompasses efficiency, security of care, reactivity of caregivers, and satisfaction of patients and relatives. Patient safety has emerged as a major target for healthcare improvement. Quality assurance is a complex task, and patients in the intensive care unit (ICU) are more likely than other hospitalized patients to experience medical errors, due to the complexity of their conditions, need for urgent interventions, and considerable workload fluctuation. Medication errors are the most common medical errors and can induce adverse events. Two approaches are available for evaluating and improving quality-of-care: the room-for-improvement model, in which problems are identified, plans are made to resolve them, and the results of the plans are measured; and the monitoring model, in which quality indicators are defined as relevant to potential problems and then monitored periodically. Indicators that reflect structures, processes, or outcomes have been developed by medical societies. Surveillance of these indicators is organized at the hospital or national level. Using a combination of methods improves the results. Errors are caused by combinations of human factors and system factors, and information must be obtained on how people make errors in the ICU environment. Preventive strategies are more likely to be effective if they rely on a system-based approach, in which organizational flaws are remedied, rather than a human-based approach of encouraging people not to make errors. The development of a safety culture in the ICU is crucial to effective prevention and should occur before the evaluation of safety programs, which are more likely to be effective when they involve bundles of measures. PMID:22339769
Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs
NASA Astrophysics Data System (ADS)
Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; Touma, Danielle; Ruby Leung, L.
2017-07-01
Accurate simulation of the South Asian summer monsoon (SAM) remains an unresolved challenge, and no benchmark effort has deciphered the origin of the persistent, largely model-independent failures of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the errors in the simulated precipitation distribution and its driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of the seasonal precipitation distribution, and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and in atmospheric latent heating over the slopes of the Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates these errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land-atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver of the errors exhibited over South Asia. These results highlight previously underappreciated pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and underscore the importance of land-atmosphere interactions in the development and maintenance of SAM.
Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D
NASA Astrophysics Data System (ADS)
La Haye, R. J.; Paz-Soldan, C.; Strait, E. J.
2015-02-01
DIII-D experiments show that fully penetrated resonant n = 1 error field locked modes in ohmic plasmas with safety factor q95 ≳ 3 grow to similar large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n = 2/1) static error fields are shielded in ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption. Error field correction (EFC) is performed on DIII-D (in ITER relevant shape and safety factor q95 ≳ 3) with either the n = 1 C-coil (no handedness) or the n = 1 I-coil (with ‘dominantly’ resonant field pitch). Despite EFC, which allows significantly lower plasma density (a ‘figure of merit’) before penetration occurs, the resulting saturated islands have similar large size; they differ only in the phase of the locked mode after typically being pulled (by up to 30° toroidally) in the electron diamagnetic drift direction as they grow to saturation. Island amplification and phase shift are explained by a second change-of-state in which the classical tearing index changes from stable to marginal by the presence of the island, which changes the current density profile. The eventual island size is thus governed by the inherent stability and saturation mechanism rather than the driving error field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G.
Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2² to 15² cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5σ to 1.5σ, which is 2-6 mm for the small spot size and 3.3-9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and of systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1σ resulted in a 2% maximum dose error; a SS larger than 1.25σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random).
Systematic PE can lead to noticeable hot spots along the field edges, which may be near critical structures. However, random PE showed minimal dose error. Conclusions: Dose error dependence on PE was quantitatively and systematically characterized, and an analytic tool was built to simulate systematic and random errors for patient-specific IMPT. This information facilitates the determination of facility-specific spot position error thresholds.
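A toy one-dimensional version of this perturbation study can be sketched as below. Everything here (Gaussian spots, uniform weights, the grid, the 21-spot line) is an assumption for illustration, not the planning system's dose engine; it only reproduces the qualitative trend that a wider spot spacing makes the dose more sensitive to a fixed position error.

```python
import math

def dose(x, centers, sigma):
    """Total dose at x from uniformly weighted 1-D Gaussian spots."""
    return sum(math.exp(-0.5 * ((x - c) / sigma) ** 2) for c in centers)

def max_percent_dose_error(sigma, spacing, shift, n_spots=21):
    """Shift the central spot by `shift` (mm) and report the maximum
    dose difference as a percent of the unperturbed maximum dose."""
    centers = [i * spacing for i in range(n_spots)]
    mid = centers[n_spots // 2]
    shifted = centers[:]
    shifted[n_spots // 2] += shift
    xs = [mid - 5.0 + 0.01 * k for k in range(1001)]   # fine grid, mm
    d0 = [dose(x, centers, sigma) for x in xs]
    d1 = [dose(x, shifted, sigma) for x in xs]
    return 100.0 * max(abs(a - b) for a, b in zip(d0, d1)) / max(d0)

# 4 mm (1 sigma) spots, a 1.2 mm position error, spacings of 1 and 1.5 sigma
pde_1s = max_percent_dose_error(sigma=4.0, spacing=4.0, shift=1.2)
pde_15s = max_percent_dose_error(sigma=4.0, spacing=6.0, shift=1.2)
```

Consistent with the reported trend, the wider spacing yields the larger maximum percent dose error, because the unperturbed dose plateau is lower while the perturbation from the single shifted spot is unchanged.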
Schipler, Agnes; Iliakis, George
2013-09-01
Although the DNA double-strand break (DSB) is defined as a rupture in the double-stranded DNA molecule that can occur without chemical modification in any of the constituent building blocks, it is recognized that this form is restricted to enzyme-induced DSBs. DSBs generated by physical or chemical agents can include at the break site a spectrum of base alterations (lesions). The nature and number of such chemical alterations define the complexity of the DSB and are considered putative determinants for repair pathway choice and the probability that errors will occur during this processing. As the pathways engaged in DSB processing show distinct and frequently inherent propensities for errors, pathway choice also defines the error levels cells opt to accept. Here, we present a classification of DSBs on the basis of increasing complexity and discuss how complexity may affect processing, as well as how it may cause lethal or carcinogenic processing errors. By critically analyzing the characteristics of DSB repair pathways, we suggest that all repair pathways can in principle remove lesions clustering at the DSB but are likely to fail when they encounter clusters of DSBs that cause a local form of chromothripsis. In the same framework, we also analyze the rationale of DSB repair pathway choice.
NASA Astrophysics Data System (ADS)
Upadhya, Abhijeet; Dwivedi, Vivek K.; Singh, G.
2018-06-01
In this paper, we analyze the performance of a dual-hop radio frequency (RF)/free-space optical (FSO) fixed-gain relay system in which the FSO link is subject to atmospheric-turbulence-induced fading modeled by the α-μ distribution. The RF hop of the amplify-and-forward scheme undergoes Rayleigh fading, and the system model also accounts for the pointing error effect on the FSO link. A novel and accurate expression for the probability density function of a FSO link experiencing α-μ distributed atmospheric turbulence in the presence of pointing error is derived. Further, we present analytical expressions for the outage probability and bit error rate in terms of the Meijer G-function, as well as a useful and mathematically tractable closed-form expression for the end-to-end ergodic capacity of the dual-hop scheme in terms of the bivariate Fox H-function. The results account for atmospheric turbulence, misalignment errors, and various binary modulation schemes for intensity modulation on the optical wireless link. Finally, we analyze each of the three performance metrics at high SNR in order to express them in terms of elementary functions, and the analytical results are supported by computer-based simulations.
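As a small sanity check of one ingredient only, the Rayleigh-faded RF hop: under Rayleigh fading the instantaneous SNR is exponentially distributed, so the single-hop outage probability has the closed form 1 − exp(−γ_th/γ̄), which a Monte Carlo run reproduces. The α-μ turbulence, pointing error, and the Meijer-G/Fox-H expressions of the paper are not attempted here; the mean SNR and threshold below are assumed values.

```python
import math
import random

random.seed(1)

GAMMA_BAR = 10.0   # assumed mean SNR of the RF hop
GAMMA_TH = 5.0     # assumed outage threshold

# Rayleigh fading amplitude => instantaneous SNR is exponential with
# mean GAMMA_BAR; outage = probability the SNR falls below the threshold
samples = [random.expovariate(1.0 / GAMMA_BAR) for _ in range(20000)]
p_out_mc = sum(g < GAMMA_TH for g in samples) / len(samples)

# closed form for a single Rayleigh-faded hop
p_out_exact = 1.0 - math.exp(-GAMMA_TH / GAMMA_BAR)
```

The dual-hop analysis in the paper effectively combines this RF-hop statistic with the α-μ/pointing-error statistics of the FSO hop.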
NASA Astrophysics Data System (ADS)
Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur
2018-03-01
Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach allows glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and to derive posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model built on exact analytical solutions for the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that the approach adequately models numerical errors and produces useful physical parameter posterior distributions and predictions. A byproduct of developing the Bayesian hierarchical model is a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction, with a statistical model, of numerical errors induced by the numerical solution. This error-correcting process models numerical errors that accumulate forward in time as well as the spatial variation of numerical errors between the dome, interior, and margin of a glacier.
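The notion of numerical error accumulating forward in time can be caricatured with the simplest statistical model, a random walk, whose variance grows linearly with the number of solver steps. The paper's actual error model is richer (it also resolves spatial structure across the dome, interior, and margin), so the step scale and counts below are purely assumed.

```python
import random

random.seed(0)

STEPS = 100        # time steps of the hypothetical numerical solver
WALKERS = 2000     # Monte Carlo replicates
STEP_SD = 1.0      # assumed per-step numerical error scale

# each replicate: an error term accumulating forward in time
finals = []
for _ in range(WALKERS):
    e = 0.0
    for _ in range(STEPS):
        e += random.gauss(0.0, STEP_SD)
    finals.append(e)

mean = sum(finals) / WALKERS
var = sum((f - mean) ** 2 for f in finals) / WALKERS
# for a random walk, Var[error at step T] = T * STEP_SD**2
```

A statistical correction layer in a hierarchical model would place a prior of exactly this growing-variance form on the discrepancy between the numerical and exact solutions.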
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition from the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Building on a curve-fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks through multiple curve-fitting passes to obtain a lower residual result. In quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO₄·5H₂O solution, were decomposed and corrected using the curve-fitting and error compensation methods. Compared with curve fitting alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. A calibration curve between the intensity and concentration of Cu was then established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
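The residual-feedback idea can be sketched on a minimal case: two overlapping Gaussian peaks with known centers and widths, where a naive fit reads each amplitude off the peak height and the fitting residual at each center is then fed back over several passes. The peak shapes, positions, and the fixed-shape simplification are assumptions; the paper applies general curve fitting to Cu-Fe lines near 321-327 nm.

```python
import math

SIGMA = 1.0                  # assumed common peak width
C1, C2 = 0.0, 2.0            # assumed overlapping peak centers
A_TRUE = (3.0, 1.5)          # amplitudes the fit should recover

def gauss(x, c):
    return math.exp(-0.5 * ((x - c) / SIGMA) ** 2)

def spectrum(x):             # synthetic overlapped spectrum
    return A_TRUE[0] * gauss(x, C1) + A_TRUE[1] * gauss(x, C2)

def model(x, a):
    return a[0] * gauss(x, C1) + a[1] * gauss(x, C2)

# naive one-pass fit: read each amplitude at its peak center (overlap bias)
a = [spectrum(C1), spectrum(C2)]
initial_resid = abs(spectrum(C1) - model(C1, a))

# error compensation: feed the residual at each center back and refit
for _ in range(30):
    a = [a[0] + spectrum(C1) - model(C1, a),
         a[1] + spectrum(C2) - model(C2, a)]
```

This Jacobi-style refinement converges because the cross-coupling between the two peaks is below unity; it recovers the true amplitudes that the single-pass fit overestimates, mirroring the reported residual reduction.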
Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan
2015-05-13
This paper introduces a novel afocal optical flow sensor (OFS) system for odometry estimation in indoor robotic navigation. The OFS used in computer optical mice has been adopted for mobile robots because it is not affected by wheel slippage. Variance in sensor height is thought to be a dominant source of systematic error when estimating moving distances for mobile robots driving on uneven surfaces. We propose an approach to mitigate this error by using an afocal (infinite effective focal length) system. We conducted experiments in a linear guide on carpet and three other materials, with sensor heights varying from 30 to 50 mm and a moving distance of 80 cm; each experiment was repeated 10 times. For the proposed afocal OFS module, a 1 mm change in sensor height induces a 0.1% systematic error; for comparison, the error for a conventional fixed-focal-length OFS module is 14.7%. Finally, the proposed afocal OFS module was installed on a mobile robot and tested 10 times on carpet over distances of 1 m. The average distance estimation error and standard deviation are 0.02% and 17.6%, respectively, whereas those for a conventional OFS module are 4.09% and 25.7%, respectively.
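The geometric core of the height sensitivity can be sketched with a pinhole model: a fixed-focal-length sensor's image-plane magnification scales as f/h, so a calibration made at one height is wrong at another, whereas an afocal (height-independent magnification) system is unaffected. The focal length, magnification, and calibration height below are assumptions; the 14.7% figure in the experiments also reflects defocus blur, which this geometry-only sketch ignores.

```python
H_CAL = 40.0     # calibration height, mm (mid-range of the experiments)
F = 10.0         # assumed effective focal length of the conventional lens, mm
M_AFOCAL = 0.25  # assumed constant magnification of the afocal optics

def counts_conventional(dist_mm, h):
    # pinhole model: image displacement scales as f / h
    return dist_mm * F / h

def counts_afocal(dist_mm, h):
    # afocal system: magnification independent of sensor height
    return dist_mm * M_AFOCAL

def estimated_distance(counts_fn, dist_mm, h):
    per_mm = counts_fn(1.0, H_CAL)   # counts per mm at calibration height
    return counts_fn(dist_mm, h) / per_mm

# raise the sensor 1 mm above the calibration height, drive 100 mm
err_conv = abs(estimated_distance(counts_conventional, 100.0, 41.0) - 100.0)
err_afocal = abs(estimated_distance(counts_afocal, 100.0, 41.0) - 100.0)
```

In this idealized model the conventional module misestimates the 100 mm run by about 2.4 mm (roughly 2.4%), while the afocal module has zero height-induced error.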
Tran, Nina; Chiu, Sara; Tian, Yibin; Wildsoet, Christine F.
2009-01-01
Purpose This study sought further insight into the stimulus dependence of form deprivation myopia, a common response to retinal image degradation in young animals. Methods Each of four Bangerter diffusing filters (0.6, 0.1, <0.1, and LP (light perception only)), combined with a clear plano lens, or a plano lens alone, was fitted monocularly to 4-day-old chicks. Axial ocular dimensions and refractive errors were monitored over a 14-day treatment period using high-frequency A-scan ultrasonography and an autorefractor, respectively. Results Only the <0.1 and LP filters induced significant form deprivation myopia; these filters induced similarly large myopic shifts in refractive error (mean interocular differences ± SEM: -9.92 ± 1.99 and -7.26 ± 1.60 D, respectively), coupled to significant increases in both vitreous chamber depth and optical axial length (p<0.001). The other three groups showed comparable, small changes in their ocular dimensions (p>0.05) and only small myopic shifts in refraction (<3.00 D). The myopia-inducing filters eliminated mid- and high-spatial-frequency information. Conclusions Our results are consistent with emmetropization being tuned to mid-spatial frequencies. They also imply that form deprivation is not a graded phenomenon. PMID:18533221
Evaporation, precipitation, and associated salinity changes at a humid, subtropical estuary
Sumner, D.M.; Belaineh, G.
2005-01-01
The distilling effect of evaporation and the diluting effect of precipitation on salinity at two estuarine sites in the humid subtropical setting of the Indian River Lagoon, Florida, were evaluated based on daily evaporation computed with an energy-budget method and measured precipitation. Despite the larger magnitude of evaporation (about 1,580 mm yr⁻¹) compared to precipitation (about 1,180 mm yr⁻¹) between February 2002 and January 2004, the variability of monthly precipitation-induced salinity changes was more than twice the variability of evaporation-induced changes. Use of a constant, mean value of evaporation, along with measured values of daily precipitation, was sufficient to produce simulated salinity changes with little monthly error (root-mean-square error = 0.33‰ mo⁻¹ and 0.52‰ mo⁻¹ at the two sites) or cumulative error (<1‰ yr⁻¹) compared to simulations that used computed daily values of evaporation. This result indicates that measuring the temporal variability of evaporation may not be critical to simulating salinity within the lagoon. Comparison of evaporation- and precipitation-induced salinity changes with measured salinity changes indicates that evaporation and precipitation explained only 4% of the changes in salinity within a flow-through area of the lagoon; surface water and ocean inflows probably accounted for most of the variability in salinity at this site. Evaporation- and precipitation-induced salinity changes explained 61% of the variability in salinity at a flow-restricted part of the lagoon. © 2005 Estuarine Research Federation.
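The two findings above, that precipitation variability dominates and that a constant mean evaporation suffices, can be reproduced in a crude box model of salinity: dS = S (E − P)/D per day. The lagoon depth, the synthetic seasonal forcing, and the wet-season precipitation pattern below are all assumptions, chosen only so the annual totals roughly match those in the abstract.

```python
import math
from statistics import pstdev

DEPTH = 1500.0   # assumed effective water depth, mm
S0 = 30.0        # assumed initial salinity, per mil
DAYS = 360

# synthetic daily forcing: smooth seasonal evaporation, episodic wet-season
# precipitation (annual totals near 1,560 and 1,050 mm)
E = [4.33 * (1.0 + 0.3 * math.sin(2 * math.pi * t / DAYS)) for t in range(DAYS)]
P = [15.0 if 150 <= t < 270 and t % 2 == 0 else 0.5 for t in range(DAYS)]

def simulate(evap):
    s = S0
    series = [s]
    for t in range(DAYS):
        s += s * (evap[t] - P[t]) / DEPTH   # distilling minus diluting
        series.append(s)
    return series

daily = simulate(E)                          # daily evaporation values
const = simulate([sum(E) / DAYS] * DAYS)     # constant mean evaporation

# monthly (30-day) forcing totals, to compare their variabilities
monthE = [sum(E[m * 30:(m + 1) * 30]) for m in range(12)]
monthP = [sum(P[m * 30:(m + 1) * 30]) for m in range(12)]
```

In this toy setting the monthly precipitation totals vary far more than the monthly evaporation totals, and the constant-evaporation run ends within a small fraction of a per mil of the daily-evaporation run, echoing the paper's conclusion.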
Luther, Stefan; Singh, Rupinder; Gilmour, Robert F.
2010-01-01
The pattern of action potential propagation during various tachyarrhythmias is strongly suspected to be composed of multiple re-entrant waves, but has never been imaged in detail deep within myocardial tissue. An understanding of the nature and dynamics of these waves is important in the development of appropriate electrical or pharmacological treatments for these pathological conditions. We propose a new imaging modality that uses ultrasound to visualize the patterns of propagation of these waves through the mechanical deformations they induce. The new method would have the distinct advantage of being able to visualize these waves deep within cardiac tissue. In this article, we describe one step that would be necessary in this imaging process—the conversion of these deformations into the action potential induced active stresses that produced them. We demonstrate that, because the active stress induced by an action potential is, to a good approximation, only nonzero along the local fiber direction, the problem in our case is actually overdetermined, allowing us to obtain a complete solution. Use of two- rather than three-dimensional displacement data, noise in these displacements, and/or errors in the measurements of the fiber orientations all produce substantial but acceptable errors in the solution. We conclude that the reconstruction of action potential-induced active stress from the deformation it causes appears possible, and that, therefore, the path is open to the development of the new imaging modality. PMID:20499183
Critical Care Performance in a Simulated Military Aircraft Cabin Environment.
McNeill, Margaret M
2018-04-01
Critical Care Air Transport Teams care for 5% to 10% of injured patients who are transported on military aircraft to definitive treatment facilities. Little is known about how the aeromedical evacuation environment affects care. To determine the effects of 2 stressors of flight, altitude-induced hypoxia and aircraft noise, and to examine the contributions of fatigue and clinical experience on cognitive and physiological performance of the Critical Care Air Transport Team. This repeated-measures 2 × 2 × 4 factorial study included 60 military nurses. The participants completed a simulated patient care scenario under aircraft cabin noise and altitude conditions. Differences in cognitive and physiological performance were analyzed using repeated-measures analysis of variance. A multiple regression model was developed to determine the independent contributions of fatigue and clinical experience. Critical care scores (P = .02) and errors and omissions (P = .047) were negatively affected by noise. Noise was associated with increased respiratory rate (P = .02). Critical care scores (P < .001) and errors and omissions (P = .002) worsened with altitude-induced hypoxemia. Heart rate and respiratory rate increased with altitude-induced hypoxemia; oxygen saturation decreased (P < .001 for all 3 variables). In a simulated military aircraft environment, the care of critically ill patients was significantly affected by noise and altitude-induced hypoxemia. The participants did not report much fatigue, and experience did not play a role, contrary to most findings in the literature. ©2018 American Association of Critical-Care Nurses.
Gene Profiling in Experimental Models of Eye Growth: Clues to Myopia Pathogenesis
Stone, Richard A.; Khurana, Tejvir S.
2010-01-01
To understand the complex regulatory pathways that underlie the development of refractive errors, expression profiling has evaluated gene expression in ocular tissues of well-characterized experimental models that alter postnatal eye growth and induce refractive errors. Derived from a variety of platforms (e.g. differential display, spotted microarrays or Affymetrix GeneChips), gene expression patterns are now being identified in species that include chicken, mouse and primate. Reconciling available results is hindered by varied experimental designs and analytical/statistical features. Continued application of these methods offers promise to provide the much-needed mechanistic framework to develop therapies to normalize refractive development in children. PMID:20363242
Opto-mechanical design of a dispersive artificial eye.
Coughlan, Mark F; Mihashi, Toshifumi; Goncharov, Alexander V
2017-05-20
We present an opto-mechanical artificial eye that can be used for examining multi-wavelength ophthalmic instruments. Standard off-the-shelf lenses and a refractive-index-matching fluid were used in the creation of the artificial eye. In addition to dispersive properties, the artificial eye can be used to simulate refractive error. To analyze the artificial eye, a multi-wavelength Hartmann-Shack aberrometer was used to measure the longitudinal chromatic aberration and to verify the possibility of inducing refractive error. Off-axis chromatic aberrations were also analyzed by imaging through the artificial eye at two discrete wavelengths. Possible extensions to the dispersive artificial eye are also discussed.
Criticality of Adaptive Control Dynamics
NASA Astrophysics Data System (ADS)
Patzelt, Felix; Pawelzik, Klaus
2011-12-01
We show that stabilization of a dynamical system can annihilate observable information about its structure. This mechanism induces critical points as attractors in locally adaptive control. It also reveals that previously reported criticality in simple controllers is caused by adaptation and not by other controller details. We apply these results to a real-system example: human balancing behavior. A model of predictive adaptive closed-loop control subject to some realistic constraints is introduced and shown to reproduce experimental observations in unprecedented detail. Our results suggest that observed error distributions between the Lévy and Gaussian regimes may reflect a nearly optimal compromise between the elimination of random local trends and rare large errors.
Analysis of all-optical temporal integrator employing phased-shifted DFB-SOA.
Jia, Xin-Hong; Ji, Xiao-Ling; Xu, Cong; Wang, Zi-Nan; Zhang, Wei-Li
2014-11-17
An all-optical temporal integrator using a phase-shifted distributed-feedback semiconductor optical amplifier (DFB-SOA) is investigated. The influences of system parameters on its energy transmittance and integration error are explored in detail. The numerical analysis shows that enhanced energy transmittance and an extended integration time window can be achieved simultaneously by increasing the injected current toward the lasing threshold. We find that the range of input pulse widths with low integration error is highly sensitive to the injected optical power, due to gain saturation and the induced detuning-deviation mechanism. The initial frequency detuning should also be carefully chosen to suppress deviation of the integrated output from the ideal waveform.
EUV via hole pattern fidelity enhancement through novel resist and post-litho plasma treatment
NASA Astrophysics Data System (ADS)
Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Yamashita, Fumiko; Kaushik, Kumar; Morikita, Shinya; Ito, Kiyohito; Yoshimura, Shota; Timoshkov, Vadim; Maslow, Mark; Jee, Tae Kwon; Reijnen, Liesbeth; Choi, Peter; Feng, Mu; Spence, Chris; Schoofs, Stijn
2018-03-01
Extreme UV (EUV) lithography is a potential solution for sustained scaling, and its adoption in high-volume manufacturing (HVM) is becoming increasingly realistic. EUV can mitigate many of the technical problems of 193-i multi-patterning (LELELE) for via-hole printing, which induces local pattern-fidelity errors such as CDU, CER, and pattern placement error. While EUV is thus a desirable scaling driver, a specific technical issue, the RLS (resolution-LER-sensitivity) trade-off, remains open. In this work, we examined hole-pattern sensitization (a lower-dose approach) using a post-litho hole-pattern restoration technique named "CD-Healing".
Bubalo, Joseph; Warden, Bruce A; Wiegel, Joshua J; Nishida, Tess; Handel, Evelyn; Svoboda, Leanne M; Nguyen, Lam; Edillo, P Neil
2014-12-01
Medical errors, in particular medication errors, continue to be a troublesome factor in the delivery of safe and effective patient care. Antineoplastic agents represent a group of medications highly susceptible to medication errors due to their complex regimens and narrow therapeutic indices. As the majority of these medication errors are associated with breakdowns in poorly defined systems, developing technologies and evolving workflows seem a logical approach to provide added safeguards against medication errors. This article reviews both the pros and cons of today's technologies and their ability to simplify the medication use process, reduce medication errors, improve documentation, reduce healthcare costs, and increase provider efficiency as they relate to the use of antineoplastic therapy throughout the medication use process. Several technologies, mainly computerized provider order entry (CPOE), barcode medication administration (BCMA), smart pumps, the electronic medication administration record (eMAR), and telepharmacy, have been well described and proven to reduce medication errors, improve adherence to quality metrics, and/or reduce healthcare costs in a broad scope of patients. The utilization of these technologies during antineoplastic therapy is weak at best and lacking for most. Specific to the antineoplastic medication use system, the only technology with data to adequately support a claim of reduced medication errors is CPOE. In addition to the benefits these technologies can provide, it is also important to recognize their potential to induce new types of errors and inefficiencies that can negatively impact patient care. The utilization of technology reduces but does not eliminate the potential for error. The evidence base to support technology in preventing medication errors is limited in general but even more deficient in the realm of antineoplastic therapy.
Though CPOE has the best evidence to support its use in the antineoplastic population, benefit from many other technologies may have to be inferred based on data from other patient populations. As health systems begin to widely adopt and implement new technologies it is important to critically assess their effectiveness in improving patient safety. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Step-wise refolding of recombinant proteins.
Tsumoto, Kouhei; Arakawa, Tsutomu; Chen, Linda
2010-04-01
Protein refolding is still largely a trial-and-error process. Here we describe step-wise dialysis refolding, in which the denaturant concentration is altered in a step-wise fashion. This technology controls the folding pathway by adjusting the concentrations of the denaturant and other solvent additives to induce sequential folding or disulfide formation.
Device for wavefront correction in an ultra high power laser
Ault, Earl R.; Comaskey, Brian J.; Kuklo, Thomas C.
2002-01-01
A system for wavefront correction in an ultra-high-power laser. As the laser medium flows past the optical excitation source and the fluid warms, its index of refraction changes, creating an optical wedge. A system is provided for correcting the thermally induced optical phase errors.
Posture Recognition in Alzheimer's Disease
ERIC Educational Resources Information Center
Mozaz, Maria; Garaigordobil, Maite; Rothi, Leslie J. Gonzalez; Anderson, Jeffrey; Crucian, Gregory P.; Heilman, Kenneth M.
2006-01-01
Background: Apraxia is a neurologically induced deficit in the ability to perform purposeful skilled movements. One of the most common forms is ideomotor apraxia (IMA), in which spatial and temporal production errors are most prevalent. IMA can be associated with Alzheimer's disease (AD), even early in its course, but is often not identified, possibly because…
Metal flame spray coating protects electrical cables in extreme environment
NASA Technical Reports Server (NTRS)
Brady, R. D.; Fox, H. A.
1967-01-01
Metal flame spray coating prevents EMF measurement error in sheathed instrumentation cables externally attached to cylinders that were cooled on the inside but exposed to gamma radiation on the outside. The coating provides a thermally conductive path that carries away the heat of radiation-induced high temperatures within the cables.
2002-12-01
applications, vibration sources are numerous, such as: launch loading; man-induced accelerations, as on the Shuttle or space station; solar …However, the lack of significant tracking errors during times when other actuators were stationary, and the fact that the local maximum tracking…
Evaluation of the CATSIB DIF Procedure in a Pretest Setting
ERIC Educational Resources Information Center
Nandakumar, Ratna; Roussos, Louis
2004-01-01
A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type I error inflation by employing a CAT version of the SIBTEST "regression correction." The…
Pavone, Enea Francesco; Tieri, Gaetano; Rizza, Giulia; Tidoni, Emmanuele; Grisoni, Luigi; Aglioti, Salvatore Maria
2016-01-13
Brain monitoring of errors in one's own and other's actions is crucial for a variety of processes, ranging from the fine-tuning of motor skill learning to important social functions, such as reading out and anticipating the intentions of others. Here, we combined immersive virtual reality and EEG recording to explore whether embodying the errors of an avatar by seeing it from a first-person perspective may activate the error monitoring system in the brain of an onlooker. We asked healthy participants to observe, from a first- or third-person perspective, an avatar performing a correct or an incorrect reach-to-grasp movement toward one of two virtual mugs placed on a table. At the end of each trial, participants reported verbally how much they embodied the avatar's arm. Ratings were maximal in first-person perspective, indicating that immersive virtual reality can be a powerful tool to induce embodiment of an artificial agent, even through mere visual perception and in the absence of any cross-modal boosting. Observation of erroneous grasping from a first-person perspective enhanced error-related negativity and medial-frontal theta power in the trials where human onlookers embodied the virtual character, hinting at the tight link between early, automatic coding of error detection and sense of embodiment. Error positivity was similar in 1PP and 3PP, suggesting that conscious coding of errors is similar for self and other. Thus, embodiment plays an important role in activating specific components of the action monitoring system when others' errors are coded as if they are one's own errors. Detecting errors in other's actions is crucial for social functions, such as reading out and anticipating the intentions of others. Using immersive virtual reality and EEG recording, we explored how the brain of an onlooker reacted to the errors of an avatar seen from a first-person perspective. 
We found that mere observation of erroneous actions enhances electrocortical markers of error detection in the trials where human onlookers embodied the virtual character. Thus, the cerebral system for action monitoring is maximally activated when others' errors are coded as if they are one's own errors. The results have important implications for understanding how the brain can control the external world, and thus for creating new brain-computer interfaces. Copyright © 2016 the authors.
Patton, James L; Stoykov, Mary Ellen; Kovic, Mark; Mussa-Ivaldi, Ferdinando A
2006-01-01
This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate "adaptive training." Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable "after-effect." A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that previous studies have found to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands with a force proportional to hand speed and perpendicular to the direction of motion, either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task, we found error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niven, W.A.
The long-term position accuracy of an inertial navigation system depends primarily on the ability of the gyroscopes to maintain a near-perfect reference orientation. Small imperfections in the gyroscopes cause them to drift slowly away from their initial orientation, thereby producing errors in the system's calculations of position. The A3FIX is a computer program subroutine developed to estimate inertial navigation system gyro drift rates with the navigator stopped or moving slowly. It processes data of the navigation system's position error to arrive at estimates of the north-south and vertical gyro drift rates. It also computes changes in the east-west gyro drift rate if the navigator is stopped and if data on the system's azimuth error changes are also available. The report describes the subroutine and its capabilities, and gives examples of gyro drift rate estimates that were computed during the testing of a high-quality inertial system under the PASSPORT program at the Lawrence Livermore Laboratory. The appendices provide mathematical derivations of the estimation equations used in the subroutine, a discussion of the estimation errors, and a program listing and flow diagram. The appendices also contain a derivation of closed-form solutions to the navigation equations to clarify the effects that motion and time-varying drift rates induce in the phase-plane relationships between the Schuler-filtered errors in latitude and azimuth and between the Schuler-filtered errors in latitude and longitude. (auth)
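A3FIX's actual estimation equations are in the report's appendices and are not reproduced in the abstract. As a hedged illustration of the underlying idea only: a constant gyro drift produces an approximately linear growth in a stationary navigator's position error, so its rate can be recovered from the slope of the error record (function name, units, and data below are all hypothetical):

```python
# Illustration only, not A3FIX's algorithm: recover a constant drift rate
# from the linear-growth component of a stationary navigator's position
# error via an ordinary least-squares slope.

def estimate_drift_rate(times_h, pos_err_nmi):
    """Least-squares slope of position error vs. time (nmi per hour)."""
    n = len(times_h)
    mt = sum(times_h) / n
    me = sum(pos_err_nmi) / n
    num = sum((t - mt) * (e - me) for t, e in zip(times_h, pos_err_nmi))
    den = sum((t - mt) ** 2 for t in times_h)
    return num / den

# a stationary system whose position error grows ~0.05 nmi per hour
rate = estimate_drift_rate([0.0, 1.0, 2.0, 3.0, 4.0],
                           [0.00, 0.05, 0.10, 0.15, 0.20])
print(rate)  # -> 0.05 (up to float rounding)
```

In practice the report's Schuler-filtered formulation is needed because raw position error also carries oscillatory terms, not just the linear drift component.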
NASA Astrophysics Data System (ADS)
Peres, David J.; Cancelliere, Antonino; Greco, Roberto; Bogaard, Thom A.
2018-03-01
Uncertainty in rainfall datasets and landslide inventories is known to have negative impacts on the assessment of landslide-triggering thresholds. In this paper, we perform a quantitative analysis of the impacts of uncertain knowledge of landslide initiation instants on the assessment of rainfall intensity-duration landslide early warning thresholds. The analysis is based on a synthetic database of rainfall and landslide information, generated by coupling a stochastic rainfall generator and a physically based hydrological and slope stability model, and is therefore error-free in terms of knowledge of triggering instants. This dataset is then perturbed according to hypothetical reporting scenarios that allow simulation of possible errors in landslide-triggering instants as retrieved from historical archives. The impact of these errors is analysed jointly with different criteria for singling out rainfall events from a continuous series and with two typical temporal aggregations of rainfall (hourly and daily). The analysis shows that the impacts of the above uncertainty sources can be significant, especially when errors exceed 1 day or when the actual instants follow the erroneous ones. Errors generally lead to underestimated thresholds, i.e. lower than those that would be obtained from an error-free dataset. Potentially, the amount of underestimation can be enough to induce an excessive number of false positives, hence limiting possible landslide mitigation benefits. Moreover, the uncertain knowledge of triggering rainfall limits the possibility of establishing links between thresholds and physio-geographical factors.
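Intensity-duration thresholds of the kind assessed above are conventionally power laws, I = alpha * D^(-beta), identified as a lower envelope or low quantile of triggering rainfall events. The paper's own fitting procedure is not given in the abstract; this is a generic sketch of that standard approach (data and quantile choice are invented):

```python
import math

# Generic sketch, not the authors' procedure: fit I = alpha * D**(-beta)
# in log-log space, then lower the intercept to a chosen residual quantile
# so that most triggering events lie above the threshold.

def fit_id_threshold(durations_h, intensities_mmh, quantile=0.05):
    x = [math.log(d) for d in durations_h]
    y = [math.log(i) for i in intensities_mmh]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    # shift the central fit down to a lower quantile of the residuals
    res = sorted(b - (intercept + slope * a) for a, b in zip(x, y))
    shift = res[int(quantile * (n - 1))]
    return math.exp(intercept + shift), -slope  # alpha [mm/h], beta [-]

# noise-free events on I = 10 * D**-0.5 recover the generating parameters
ds = [1, 2, 5, 10, 24, 48]
alpha, beta = fit_id_threshold(ds, [10 * d ** -0.5 for d in ds])
print(round(alpha, 3), round(beta, 3))  # -> 10.0 0.5
```

Perturbing the triggering instants, as in the reporting scenarios above, changes which rainfall event (and hence which D, I pair) is attributed to each landslide, which is how the threshold underestimation arises.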
Discrimination of corn from monocotyledonous weeds with ultraviolet (UV) induced fluorescence.
Panneton, Bernard; Guillaume, Serge; Samson, Guy; Roger, Jean-Michel
2011-01-01
In production agriculture, savings in herbicides can be achieved if weeds can be discriminated from the crop, allowing weed control to be targeted to weed-infested areas only. Previous studies demonstrated the potential of ultraviolet (UV) induced fluorescence to discriminate corn from weeds and, recently, robust models have been obtained for the discrimination between monocots (including corn) and dicots. Here, we developed a new approach to achieve robust discrimination of monocot weeds from corn. To this end, four corn hybrids (Elite 60T05, Monsanto DKC 26-78, Pioneer 39Y85 (RR), and Syngenta N2555 (Bt, LL)) and four monocot weeds (Digitaria ischaemum (Schreb.), Echinochloa crus-galli (L.) Beauv., Panicum capillare (L.), and Setaria glauca (L.) Beauv.) were grown either in a greenhouse or in a growth cabinet, and UV (327 nm) induced fluorescence spectra (400 to 755 nm) were measured under controlled or uncontrolled ambient light intensity and temperature. This resulted in three contrasting data sets suitable for testing the robustness of discrimination models. In the blue-green region (400 to 550 nm), the shape of the spectra did not contain any useful information for discrimination. Therefore, the integral of the blue-green region (415 to 455 nm) was used as a normalizing factor for the red fluorescence intensity (670 to 755 nm). The shape of the normalized red fluorescence spectra did not contribute to the discrimination, and in the end only the integral of the normalized red fluorescence intensity was left as a single discriminant variable. Applying a threshold on this variable chosen to minimize the classification error resulted in calibration errors ranging from 14.2% to 15.8%, but this threshold varied considerably between data sets. Therefore, to achieve robustness, a model calibration scheme was developed based on the collection of a calibration data set from 75 corn plants.
From this set, a new threshold can be estimated as the 85% quantile on the cumulative frequency curve of the integral of the normalized red fluorescence. With this approach the classification error was nearly constant (16.0% to 18.5%), thereby indicating the potential of UV-induced fluorescence to reliably discriminate corn from monocot weeds.
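The decision rule described above is compact enough to sketch. The band limits (415 to 455 nm and 670 to 755 nm) and the 85% corn quantile come from the text; the rectangle-sum integration, the helper names, and which side of the threshold counts as "weed" are my assumptions:

```python
# Sketch of the discrimination rule: red fluorescence integral normalized
# by the blue-green integral, thresholded at the 85% quantile of corn
# calibration values. Integration scheme and threshold direction assumed.

def discriminant(wavelengths_nm, spectrum):
    """Red fluorescence (670-755 nm) integral normalized by the
    blue-green (415-455 nm) integral, via simple rectangle sums."""
    def band(lo, hi):
        return sum(f for w, f in zip(wavelengths_nm, spectrum) if lo <= w <= hi)
    return band(670, 755) / band(415, 455)

def calibrate_threshold(corn_values):
    """85% quantile of discriminant values from ~75 corn calibration plants."""
    vals = sorted(corn_values)
    return vals[int(0.85 * (len(vals) - 1))]

wl = [420, 440, 700, 720]    # nm, toy two-band spectrum
spec = [1.0, 1.0, 3.0, 3.0]  # arbitrary fluorescence units
print(discriminant(wl, spec))  # -> 3.0
```

Recalibrating the threshold from a fresh corn sample for each data set is what makes the classification error nearly constant across the contrasting acquisition conditions.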
Global seasonal strain and stress models derived from GRACE loading, and their impact on seismicity
NASA Astrophysics Data System (ADS)
Chanard, K.; Fleitout, L.; Calais, E.; Craig, T. J.; Rebischung, P.; Avouac, J. P.
2017-12-01
Loading by continental water, the atmosphere and the oceans deforms the Earth at various spatio-temporal scales, inducing crustal and mantle stress perturbations that may play a role in earthquake triggering. Deformation of the Earth by this surface loading is observed in GNSS position time series. While various models predict the vertical observations well, explaining horizontal displacements remains challenging. We model the elastic deformation induced by loading derived from GRACE for coefficients of degree 2 and higher. We estimate the degree-1 deformation field by comparison between predictions of our model and IGS-repro2 solutions at a globally distributed network of 700 GNSS sites, separating the horizontal and vertical components to avoid biases between components. The misfit between model and data is reduced compared to previous studies, particularly on the horizontal component. The associated geocenter motion time series are consistent with results derived from other datasets. We also discuss the impact on our results of systematic errors in GNSS geodetic products, in particular of the draconitic error. We then compute stress tensor time series induced by GRACE loads and discuss the potential link between large-scale seasonal mass redistributions and seismicity. Within the crust, we estimate hydrologically induced stresses in the intraplate New Madrid Seismic Zone, where secular stressing rates are unmeasurably low. We show that a significant variation in the rate of micro-earthquakes at annual and multi-annual timescales coincides with stresses induced by hydrological loading in the upper Mississippi embayment, with no significant phase lag, directly modulating regional seismicity. We also investigate pressure variations in the mantle transition zone and discuss potential correlations between the statistically significant observed seasonality of deep-focus earthquakes, most likely due to mineralogical transformations, and surface hydrological loading.
Neale, Chris; Madill, Chris; Rauscher, Sarah; Pomès, Régis
2013-08-13
All molecular dynamics simulations are susceptible to sampling errors, which degrade the accuracy and precision of observed values. The statistical convergence of simulations containing atomistic lipid bilayers is limited by the slow relaxation of the lipid phase, which can exceed hundreds of nanoseconds. These long conformational autocorrelation times are exacerbated in the presence of charged solutes, which can induce significant distortions of the bilayer structure. Such long relaxation times represent hidden barriers that induce systematic sampling errors in simulations of solute insertion. To identify optimal methods for enhancing sampling efficiency, we quantitatively evaluate convergence rates using generalized ensemble sampling algorithms in calculations of the potential of mean force for the insertion of the ionic side chain analog of arginine in a lipid bilayer. Umbrella sampling (US) is used to restrain solute insertion depth along the bilayer normal, the order parameter commonly used in simulations of molecular solutes in lipid bilayers. When US simulations are modified to conduct random walks along the bilayer normal using a Hamiltonian exchange algorithm, systematic sampling errors are eliminated more rapidly and the rate of statistical convergence of the standard free energy of binding of the solute to the lipid bilayer is increased 3-fold. We compute the ratio of the replica flux transmitted across a defined region of the order parameter to the replica flux that entered that region in Hamiltonian exchange simulations. We show that this quantity, the transmission factor, identifies sampling barriers in degrees of freedom orthogonal to the order parameter. 
The transmission factor is used to estimate the depth-dependent conformational autocorrelation times of the simulation system, some of which exceed the simulation time, and thereby to identify solute insertion depths that are prone to systematic sampling errors and to estimate the lower bound of the amount of sampling required to resolve them. Finally, we extend our simulations and verify that the conformational autocorrelation times estimated by the transmission factor accurately predict correlation times that exceed the simulation time scale, something that, to our knowledge, has never before been achieved.
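The transmission-factor bookkeeping can be sketched as follows. This is a hedged simplification of the quantity defined above (the authors' exact accounting of replica fluxes may differ in detail); it follows one replica's walk along the order parameter and asks, for a region of interest, what fraction of entries exit out the opposite side:

```python
# Hedged sketch: fraction of a replica's visits to the region [lo, hi] of
# the order parameter that exit on the side opposite to the one they
# entered from. Visits that start inside the region, or single-step jumps
# clear over it, are ignored for simplicity.

def transmission_factor(traj, lo, hi):
    """~1.0 for free diffusion across [lo, hi]; -> 0 when hidden barriers
    orthogonal to the order parameter reflect replicas back."""
    entered = transmitted = 0
    inside = False
    came_from = None  # side from which the current visit entered
    last_side = None  # side of the most recent sample outside the region
    for z in traj:
        if lo <= z <= hi:
            if not inside and last_side is not None:
                inside, came_from = True, last_side
                entered += 1
        else:
            side = 'below' if z < lo else 'above'
            if inside and side != came_from:
                transmitted += 1
            inside = False
            last_side = side
    return transmitted / entered if entered else 0.0

print(transmission_factor([0, 1, 2, 3, 4], 1.5, 2.5))  # -> 1.0
print(transmission_factor([0, 2, 0, 2, 0], 1.5, 2.5))  # -> 0.0
```

A low transmission factor at some insertion depth flags that depth as one where sampling along the bilayer normal alone hides slow orthogonal relaxation.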
Error-free versus mutagenic processing of genomic uracil: relevance to cancer.
Krokan, Hans E; Sætrom, Pål; Aas, Per Arne; Pettersen, Henrik Sahlin; Kavli, Bodil; Slupphaug, Geir
2014-07-01
Genomic uracil is normally processed essentially error-free by base excision repair (BER), with mismatch repair (MMR) as an apparent backup for U:G mismatches. Nuclear uracil-DNA glycosylase UNG2 is the major enzyme initiating BER of uracil in U:A pairs as well as U:G mismatches. Deficiency in UNG2 results in several-fold increases in genomic uracil in mammalian cells. Thus, the alternative uracil-removing glycosylases, SMUG1, TDG and MBD4, cannot efficiently complement UNG2 deficiency. A major function of SMUG1 is probably to remove 5-hydroxymethyluracil from DNA, with general back-up for UNG2 as a minor function. TDG and MBD4 remove deamination products U or T mismatched to G in CpG/mCpG contexts, but may have equally or more important functions in development, epigenetics and gene regulation. Genomic uracil was previously thought to arise only from spontaneous cytosine deamination and incorporation of dUMP, generating U:G mismatches and U:A pairs, respectively. However, the identification of activation-induced cytidine deaminase (AID) and other APOBEC family members as DNA-cytosine deaminases has spurred renewed interest in the processing of genomic uracil. Importantly, AID triggers the adaptive immune response involving error-prone processing of U:G mismatches, but also contributes to B-cell lymphomagenesis. Furthermore, mutational signatures in a substantial fraction of other human cancers are consistent with APOBEC-induced mutagenesis, with U:G mismatches as prime suspects. Mutations can be caused by replicative polymerases copying uracil in U:G mismatches, or by translesion polymerases that insert incorrect bases opposite abasic sites after uracil removal. In addition, kataegis, localized hypermutation of one strand in the vicinity of genomic rearrangements, requires an APOBEC protein, UNG2 and the translesion polymerase REV1. What mechanisms govern error-free versus error-prone processing of uracil in DNA remains unclear.
In conclusion, genomic uracil is an essential intermediate in adaptive immunity and innate antiviral responses, but may also be a fundamental cause of a wide range of malignancies. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.
Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris
2016-04-21
Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult because the respiratory cycle causes both motion and density changes. Unlike motion, the effects of density changes in the lung on PET quantitation have received little attention. This work aims to explore the extent of the errors that pulmonary density attenuation map mismatch causes in dynamic and static parameter estimates. Dynamic XCAT phantoms with clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax were used to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and with the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves, with the error depending on the global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were as high as 25% in the XCAT simulation. Differences in patient-derived kinetic parameter estimates and static concentration between the extreme gates were as high as 31% and 14%, respectively. Overall, our results show that respiration-associated density errors in the attenuation map affect quantitation throughout the lung, not just in regions near boundaries. The extent of this error depends on the activity distribution in the thorax and hence on the tracer and the time of acquisition. Consequently, there may be a significant impact on estimated kinetic parameters throughout the lung.
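The order of magnitude of such density-mismatch errors can be checked with a back-of-envelope model. This is my illustration, not the paper's method: along a single line of response the attenuation correction factor scales as exp(mu * L), so correcting with the wrong lung density leaves a residual factor exp(delta_mu * L). The densities and path length below are hypothetical:

```python
import math

# Back-of-envelope illustration (not from the paper): residual quantitation
# error when the attenuation map assumes the wrong lung density along one
# line of response. Assumes attenuation scales linearly with density.

MU_WATER_511KEV = 0.096  # approx. linear attenuation of water at 511 keV, 1/cm

def quantitation_error(true_density, assumed_density, path_cm):
    """Fractional activity error from attenuation-correcting with the
    wrong density (g/cm^3) over a lung path of length path_cm."""
    delta_mu = MU_WATER_511KEV * (assumed_density - true_density)
    return math.exp(delta_mu * path_cm) - 1.0

# lung mapped at ~0.20 g/cm^3 (near end-inspiration) but imaged at
# ~0.35 g/cm^3 over a 15 cm path: roughly a -19% underestimate, the same
# order as the errors reported above
print(f"{quantitation_error(0.35, 0.20, 15.0):+.1%}")
```

The sign flips with the direction of the mismatch, which is consistent with the gate-dependent differences described in the study.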
Fleming, Kevin K; Bandy, Carole L; Kimble, Matthew O
2010-01-01
The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance.
Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia
2016-01-01
The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite the Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from an off-line database and computing gravity vectors directly from the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and the computing time delay. For marine navigation, it is recommended that a gravity vector be calculated within 1 s and updated at least every 100 s. To meet this demand, the time required to calculate the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector were removed and the Schuler oscillation was attenuated effectively. In rugged terrain, the horizontal position error was reduced by up to 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision and long-term INSs. PMID:27999351
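The off-line database option described above amounts to a gridded lookup of pre-computed deflection components. A minimal sketch, assuming bilinear interpolation on a regular latitude/longitude grid (the abstract does not specify the interpolation scheme, and the grid values, spacing, and names below are invented):

```python
# Minimal sketch of the "interpolation from the off-line database" option:
# bilinear interpolation of a pre-computed deflection-of-the-vertical (DOV)
# grid. Grid spacing, values, and names are illustrative assumptions.

def bilinear(grid, lat0, lon0, step_deg, lat, lon):
    """Interpolate a regular lat/lon grid (rows indexed south to north)
    at the query point (lat, lon), both in degrees."""
    fi = (lat - lat0) / step_deg
    fj = (lon - lon0) / step_deg
    i, j = int(fi), int(fj)
    di, dj = fi - i, fj - j
    return ((1 - di) * (1 - dj) * grid[i][j]
            + (1 - di) * dj * grid[i][j + 1]
            + di * (1 - dj) * grid[i + 1][j]
            + di * dj * grid[i + 1][j + 1])

# toy 2x2 tile of the north-south DOV component (arcsec) around
# 30 deg N, 120 deg E at 0.1 deg spacing
xi = [[4.0, 5.0],
      [6.0, 7.0]]
print(bilinear(xi, 30.0, 120.0, 0.1, 30.05, 120.05))  # cell centre -> 5.5
```

In a running system this lookup would execute once per gravity update interval (no longer than 100 s, per the study's recommendation) and feed the interpolated DOV components into the horizontal channel error compensation.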