Sample records for error field tolerance

  1. Magnetic field errors tolerances of Nuclotron booster

    NASA Astrophysics Data System (ADS)

    Butenko, Andrey; Kazinova, Olha; Kostromin, Sergey; Mikhaylov, Vladimir; Tuzikov, Alexey; Khodzhibagiyan, Hamlet

    2018-04-01

Generation of the magnetic field in the booster synchrotron units is one of the most important conditions for achieving the required parameters and high-quality accelerator operation for the NICA project. Studies of the linear and nonlinear dynamics of the 197Au31+ ion beam in the booster were carried out with the MADX program. Analytical estimates of the magnetic field error tolerances and numerical computations of the dynamic aperture of the booster DFO magnetic lattice are presented. Closed-orbit distortion caused by random magnetic field errors and errors in the layout of booster units was evaluated.

  2. SU-E-T-484: In Vivo Dosimetry Tolerances in External Beam Fast Neutron Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, L; Gopan, O

Purpose: Optical stimulated luminescence (OSL) dosimetry with Landauer Al2O3:C nanodots was developed at our institution as a passive in vivo dosimetry (IVD) system for patients treated with fast neutron therapy. The purpose of this study was to establish clinically relevant tolerance limits for detecting treatment errors requiring further investigation. Methods: Tolerance levels were estimated by conducting a series of IVD expected dose calculations for square field sizes ranging between 2.8 and 28.8 cm. For each field size evaluated, doses were calculated for open and internal wedged fields with angles of 30°, 45°, or 60°. Theoretical errors were computed for variations of incorrect beam configurations. Dose errors, defined as the percent difference from the expected dose calculation, were measured with groups of three nanodots placed in a 30 x 30 cm solid water phantom, at beam isocenter (150 cm SAD, 1.7 cm Dmax). The tolerances were applied to IVD patient measurements. Results: The overall accuracy of the nanodot measurements is 2–3% for open fields. Measurement errors agreed with calculated errors to within 3%. Theoretical estimates of dosimetric errors showed that IVD measurements with OSL nanodots will detect the absence of an internal wedge or a wrong wedge angle. Incorrect nanodot placement on a wedged field is more likely to be caught if the offset is in the direction of the “toe” of the wedge, where the dose difference is about 12%. Errors caused by an incorrect flattening filter size produced a 2% measurement error that is not detectable by IVD measurement alone. Conclusion: IVD with nanodots will detect treatment errors associated with the incorrect implementation of the internal wedge. The results of this study will streamline the physicists’ investigations in determining the root cause of an IVD reading that is out of normally accepted tolerances.

  3. SU-D-201-01: A Multi-Institutional Study Quantifying the Impact of Simulated Linear Accelerator VMAT Errors for Nasopharynx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogson, E; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW; Ingham Institute for Applied Medical Research, Sydney, NSW

Purpose: To quantify the impact of differing magnitudes of simulated linear accelerator errors on the dose to the target volume and organs at risk for nasopharynx VMAT. Methods: Ten nasopharynx cancer patients were retrospectively replanned twice with single full-arc VMAT by two institutions. Treatment uncertainties (gantry angle and collimator angle in degrees, MLC field size and MLC shifts in mm) were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5. This was completed using an in-house Python script within Pinnacle3 and analysed using 3DVH and MATLAB. The mean and maximum doses were calculated for the Planning Target Volume (PTV1), parotids, brainstem, and spinal cord and then compared to the original baseline plan. The D1cc was also calculated for the spinal cord and brainstem. Patient-averaged results were compared across institutions. Results: Introduced gantry angle errors had the smallest effect on dose: no tolerances were exceeded at one institution, and at the second institution tolerances were exceeded only for gantry angle errors of ±5°, which affected the parotids on opposite sides by 14–18%. PTV1, brainstem and spinal cord tolerances were exceeded for collimator angles of ±5°, and for MLC shifts and MLC field size errors of ±1 and beyond, at the first institution. At the second institution, sensitivity was marginally higher for some errors (the collimator error exceeded tolerances above ±2°) and marginally lower for others (tolerances were exceeded only above MLC shifts of ±2). The largest differences occurred with MLC field size errors, with both institutions reporting exceeded tolerances for all introduced errors (±1 and beyond). Conclusion: The robustness of VMAT nasopharynx plans has been demonstrated. Gantry errors have the least impact on patient doses; however, MLC field size errors exceed tolerances even at relatively small magnitudes and also produce the largest dose errors. This was consistent across both departments. The authors acknowledge funding support from the NSW Cancer Council.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, C.J.; McVey, B.; Quimby, D.C.

The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is the FELEX free-electron laser code, which now possesses extensive engineering capabilities. The modeling includes the ability to establish tolerances of various types: fast- and slow-scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, and displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random-number seed used in the calculation. The simultaneous display of performance versus error level for cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and these may be ameliorated by a procedure that uses direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  5. Tolerance Studies of the Mu2e Solenoid System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, M. L.; Ambrosio, G.; Buehler, M.

    2014-01-01

The muon-to-electron conversion experiment at Fermilab is designed to explore charged lepton flavor violation. It is composed of three large superconducting solenoids, namely, the production solenoid, the transport solenoid, and the detector solenoid. Each subsystem has a set of field requirements. Tolerance sensitivity studies of the magnet system were performed with the objective of demonstrating that the present magnet design meets all the field requirements. Systematic and random errors were considered on the position and alignment of the coils. The study helps to identify the critical sources of error, which are translated into coil manufacturing and mechanical support tolerances.

  6. An SRRC elliptically polarizing undulator prototype to examine mechanical design feasibility and magnetic field performance.

    PubMed

    Chang, C H; Hwang, C S; Fan, T C; Chen, K H; Pan, K T; Lin, F Y; Wang, C; Chang, L H; Chen, H H; Lin, M C; Yeh, S

    1998-05-01

    In this work, a 1 m long Sasaki-type elliptically polarizing undulator (EPU) prototype with 5.6 cm period length is used to examine the mechanical design feasibility as well as magnetic field performance. The magnetic field characteristics of the EPU5.6 prototype at various phase shifts and gap motion are described. The field errors from mechanical tolerances, magnet block errors, end field effects and phase/gap motion effects are analysed. The procedures related to correcting the field with the block position tuning, iron shimming and the trim blocks at both ends are outlined.

  7. Application of Parallel Adjoint-Based Error Estimation and Anisotropic Grid Adaptation for Three-Dimensional Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.

    2005-01-01

    This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge in order to achieve the requested drag tolerance. Although further adaptation was required to meet the requested tolerance, no further cycles were computed in order to avoid large discrepancies between the surface mesh spacing and the refined field spacing.

  8. Field Quality from Tolerance Stack-up In R&D Quadrupoles for the Advanced Photon Source Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, J.; Jaski, M.; Dejus, R.

    2016-10-01

The Advanced Photon Source (APS) at Argonne National Laboratory (ANL) is considering upgrading the current double-bend, 7-GeV, 3rd generation storage ring to a 6-GeV, 4th generation storage ring with a Multibend Achromat (MBA) lattice. In this study, a novel method is proposed to determine fabrication and assembly tolerances through a combination of magnetic and mechanical tolerance analyses. Mechanical tolerance stackup analyses using Teamcenter Variation Analysis are carried out to determine the part and assembly level fabrication tolerances. Finite element analyses using OPERA are conducted to estimate the effect of fabrication and assembly errors on the magnetic field of a quadrupole magnet and to determine the allowable tolerances to achieve the desired magnetic performance. Finally, results of measurements in R&D quadrupole prototypes are compared with the analysis results.

  9. Multiple Embedded Processors for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.
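
    A minimal software analogy of the lockstep-and-compare scheme described above may help illustrate the detection principle. The real design is two PowerPC cores and a comparator inside an FPGA; the function names and the toy workload below are purely illustrative.

    ```python
    # Minimal software analogy of lockstep-and-compare error detection:
    # run two redundant copies of a computation and compare outputs each
    # step; a mismatch signals an upset (detection only, no correction).
    def run_lockstep(step_fn, state_a, state_b, inputs):
        for x in inputs:
            out_a, state_a = step_fn(state_a, x)
            out_b, state_b = step_fn(state_b, x)
            if out_a != out_b:   # comparator: detects, but cannot tell which copy is wrong
                raise RuntimeError(f"lockstep mismatch on input {x!r}")
            yield out_a

    # Example: a trivial accumulator "processor".
    def accumulate(state, x):
        state = state + x
        return state, state

    if __name__ == "__main__":
        print(list(run_lockstep(accumulate, 0, 0, [1, 2, 3])))  # [1, 3, 6] if no upset
    ```

    A four-processor, two-comparator version (as planned in the follow-on design) could additionally vote out the faulty pair and continue, which simple two-way comparison cannot do.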

  10. Combinatorial neural codes from a mathematical coding theory perspective.

    PubMed

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
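
    A toy sketch of the kind of decoding experiment the abstract refers to: nearest-codeword decoding under random bit flips, where a small tolerance to error counts near misses as successes. The codebook and parameters below are made up for illustration and are not the receptive field codes of the paper.

    ```python
    import random

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def decode(word, codebook):
        """Nearest-codeword (maximum-likelihood) decoding under bit flips."""
        return min(codebook, key=lambda c: hamming(word, c))

    def decoding_error_rate(codebook, flip_p, tol=0, trials=2000):
        """Fraction of trials where the decoded word is farther than `tol`
        bits from the transmitted codeword; tol > 0 introduces the small
        tolerance to error mentioned in the abstract."""
        errors = 0
        for _ in range(trials):
            c = random.choice(codebook)
            noisy = tuple(b ^ (random.random() < flip_p) for b in c)
            if hamming(decode(noisy, codebook), c) > tol:
                errors += 1
        return errors / trials

    # Hypothetical 5-neuron codebook of allowed activity patterns.
    codebook = [(0, 0, 0, 0, 0), (1, 1, 0, 0, 0), (0, 1, 1, 1, 0), (1, 0, 1, 0, 1)]
    print(decoding_error_rate(codebook, flip_p=0.1, tol=0))
    print(decoding_error_rate(codebook, flip_p=0.1, tol=1))  # tolerance lowers the error rate
    ```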

  11. Optical tolerances for the PICTURE-C mission: error budget for electric field conjugation, beam walk, surface scatter, and polarization aberration

    NASA Astrophysics Data System (ADS)

    Mendillo, Christopher B.; Howe, Glenn A.; Hewawasam, Kuravi; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya

    2017-09-01

The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. Four leakage sources arising from the optical fabrication tolerances and optical coatings are: electric field conjugation (EFC) residuals, beam walk on the secondary and tertiary mirrors, optical surface scattering, and polarization aberration. Simulations and analysis of these four leakage sources for the PICTURE-C optical design are presented here.

  12. Physical fault tolerance of nanoelectronics.

    PubMed

    Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N

    2011-04-29

    The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.

  13. Metabolite and transcript markers for the prediction of potato drought tolerance.

    PubMed

    Sprenger, Heike; Erban, Alexander; Seddig, Sylvia; Rudack, Katharina; Thalhammer, Anja; Le, Mai Q; Walther, Dirk; Zuther, Ellen; Köhl, Karin I; Kopka, Joachim; Hincha, Dirk K

    2018-04-01

    Potato (Solanum tuberosum L.) is one of the most important food crops worldwide. Current potato varieties are highly susceptible to drought stress. In view of global climate change, selection of cultivars with improved drought tolerance and high yield potential is of paramount importance. Drought tolerance breeding of potato is currently based on direct selection according to yield and phenotypic traits and requires multiple trials under drought conditions. Marker-assisted selection (MAS) is cheaper, faster and reduces classification errors caused by noncontrolled environmental effects. We analysed 31 potato cultivars grown under optimal and reduced water supply in six independent field trials. Drought tolerance was determined as tuber starch yield. Leaf samples from young plants were screened for preselected transcript and nontargeted metabolite abundance using qRT-PCR and GC-MS profiling, respectively. Transcript marker candidates were selected from a published RNA-Seq data set. A Random Forest machine learning approach extracted metabolite and transcript markers for drought tolerance prediction with low error rates of 6% and 9%, respectively. Moreover, by combining transcript and metabolite markers, the prediction error was reduced to 4.3%. Feature selection from Random Forest models allowed model minimization, yielding a minimal combination of only 20 metabolite and transcript markers that were successfully tested for their reproducibility in 16 independent agronomic field trials. We demonstrate that a minimum combination of transcript and metabolite markers sampled at early cultivation stages predicts potato yield stability under drought largely independent of seasonal and regional agronomic conditions. © 2017 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.
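
    As an illustration of the generic Random Forest marker-selection workflow described above (train, rank features by importance, retrain on a minimal panel), the sketch below uses synthetic data only; the real markers, sample sizes, and error rates are those reported in the abstract and are not reproduced here.

    ```python
    # Sketch of the generic Random-Forest marker-selection workflow,
    # using synthetic data (not the published potato data or markers).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_cultivars, n_features = 31, 200          # e.g. metabolite + transcript abundances
    X = rng.normal(size=(n_cultivars, n_features))
    y = rng.integers(0, 2, size=n_cultivars)   # tolerant vs. sensitive label

    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

    # Keep the 20 most important features, mimicking the minimal marker panel.
    top = np.argsort(rf.feature_importances_)[::-1][:20]
    rf_small = RandomForestClassifier(n_estimators=500, random_state=0)
    scores = cross_val_score(rf_small, X[:, top], y, cv=5)
    print("CV error with 20 markers: %.2f" % (1 - scores.mean()))
    ```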

  14. Final acceptance testing of the LSST monolithic primary/tertiary mirror

    NASA Astrophysics Data System (ADS)

    Tuell, Michael T.; Burge, James H.; Cuerden, Brian; Gressler, William; Martin, Hubert M.; West, Steven C.; Zhao, Chunyu

    2014-07-01

The Large Synoptic Survey Telescope (LSST) is a three-mirror wide-field survey telescope with the primary and tertiary mirrors on one monolithic substrate [1]. This substrate is made of Ohara E6 borosilicate glass in a honeycomb sandwich, spin cast at the Steward Observatory Mirror Lab at The University of Arizona [2]. Each surface is aspheric, with the specification given in terms of conic constant error, maximum active bending forces and, finally, a structure function specification on the residual errors [3]. High-order deformation terms are present but carry no separate tolerance; any such error is treated as a surface error and is included in the structure function. The radii of curvature are very different, requiring two independent test stations, each with instantaneous phase-shifting interferometers with null correctors. The primary null corrector is a standard two-element Offner null lens. The tertiary null corrector is a phase-etched computer-generated hologram (CGH). This paper details the two optical systems and their tolerances, showing that the uncertainty in measuring the figure is a small fraction of the structure function specification. Additional metrology includes the radii of curvature, optical axis locations, and relative surface tilts. The methods for measuring these are also described along with their tolerances.

  15. Fault-Tolerant Signal Processing Architectures with Distributed Error Control.

    DTIC Science & Technology

    1985-01-01

...Zm, Revisited," Information and Control, Vol. 37, pp. 100-104, 1978. 13. J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications. ... However, the newer results concerning applications of real codes are still in the publication process. Hence, two very detailed appendices are included to ... significant entities to be protected. While the distributed finite-field approach afforded adequate protection, its applicability was restricted and ...

  16. Motion compensation and noise tolerance in phase-shifting digital in-line holography.

    PubMed

    Stenner, Michael D; Neifeld, Mark A

    2006-05-15

    We present a technique for phase-shifting digital in-line holography which compensates for lateral object motion. By collecting two frames of interference between object and reference fields with identical reference phase, one can estimate the lateral motion that occurred between frames using the cross-correlation. We also describe a very general linear framework for phase-shifting holographic reconstruction which minimizes additive white Gaussian noise (AWGN) for an arbitrary set of reference field amplitudes and phases. We analyze the technique's sensitivity to noise (AWGN, quantization, and shot), errors in the reference fields, errors in motion estimation, resolution, and depth of field. We also present experimental motion-compensated images achieving the expected resolution.
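
    A minimal sketch of the frame-to-frame motion estimate mentioned above, assuming the lateral shift is read off the peak of the FFT-based cross-correlation of two frames (integer-pixel accuracy only; array sizes and data are illustrative, not the holographic data of the paper).

    ```python
    # Estimate lateral shift between two frames from the peak of their
    # cross-correlation, computed via FFTs.
    import numpy as np

    def estimate_shift(frame1, frame2):
        F1 = np.fft.fft2(frame1)
        F2 = np.fft.fft2(frame2)
        xcorr = np.fft.ifft2(F1 * np.conj(F2))
        peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
        # Wrap indices above N/2 to negative shifts.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))

    if __name__ == "__main__":
        img = np.random.rand(64, 64)
        moved = np.roll(np.roll(img, 3, axis=0), -5, axis=1)
        print(estimate_shift(moved, img))   # expect (3, -5)
    ```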

  17. Error monitoring issues for common channel signaling

    NASA Astrophysics Data System (ADS)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS), as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.

  18. Field Tolerances for the Triplet Quadrupoles of the LHC High Luminosity Lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nosochkov, Yuri; Cai, Y.; Jiao, Y.

    2012-06-25

It has been proposed to implement the so-called Achromatic Telescopic Squeezing (ATS) scheme in the LHC high luminosity (HL) lattice to reduce beta functions at the Interaction Points (IP) up to a factor of 8. As a result, the nominal 4.5 km peak beta functions reached in the Inner Triplets (IT) at collision will be increased by the same factor. This, therefore, justifies the installation of new, larger aperture, superconducting IT quadrupoles. The higher beta functions will enhance the effects of the triplet quadrupole field errors leading to smaller beam dynamic aperture (DA). To maintain the acceptable DA, the effects of the triplet field errors must be re-evaluated, thus specifying new tolerances. Such a study has been performed for the so-called '4444' collision option of the HL-LHC layout version SLHCV3.01, where the IP beta functions are reduced by a factor of 4 in both planes with respect to a pre-squeezed value of 60 cm at two collision points. The dynamic aperture calculations were performed using SixTrack. The impact on the triplet field quality is presented.

  19. ESTIMATING SAMPLE REQUIREMENTS FOR FIELD EVALUATIONS OF PESTICIDE LEACHING

    EPA Science Inventory

    A method is presented for estimating the number of samples needed to evaluate pesticide leaching threats to ground water at a desired level of precision. Sample size projections are based on desired precision (exhibited as relative tolerable error), level of confidence (90 or 95%...
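
    The abstract does not state the formula it uses; the sketch below assumes the standard sample-size relation n = (z·CV/E)² for estimating a mean to within a relative tolerable error E at a given confidence level, with an illustrative coefficient of variation. It is a textbook relation, not the EPA method itself.

    ```python
    # Standard sample-size relation for a relative tolerable error E at
    # confidence (1 - alpha), given a coefficient of variation CV:
    #     n = (z_{1-alpha/2} * CV / E)^2     (assumed, not the EPA formula)
    import math
    from statistics import NormalDist

    def sample_size(cv, rel_error, confidence=0.95):
        z = NormalDist().inv_cdf(0.5 + confidence / 2)   # two-sided critical value
        return math.ceil((z * cv / rel_error) ** 2)

    # e.g. pesticide concentration with 60% spatial CV, +/-25% tolerable error:
    print(sample_size(cv=0.60, rel_error=0.25, confidence=0.95))   # ~23 samples
    print(sample_size(cv=0.60, rel_error=0.25, confidence=0.90))   # ~16 samples
    ```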

  20. Self-Nulling Beam Combiner Using No External Phase Inverter

    NASA Technical Reports Server (NTRS)

    Bloemhof, Eric E.

    2010-01-01

A self-nulling beam combiner is proposed that completely eliminates the phase inversion subsystem from the nulling interferometer, and instead uses the intrinsic phase shifts in the beam splitters. Simplifying the flight instrument in this way will be a valuable enhancement of mission reliability. The tighter tolerances on R = T (R being reflection and T being transmission coefficients) required by the self-nulling configuration actually impose no new constraints on the architecture, as two adaptive nullers must be situated between beam splitters to correct small errors in the coatings. The new feature is exploiting the natural phase shifts in beam combiners to achieve the 180° phase inversion necessary for nulling. The advantage over prior art is that an entire subsystem, the field-flipping optics, can be eliminated. For ultimate simplicity in the flight instrument, one might fabricate coatings to very high tolerances and dispense with the adaptive nullers altogether, with all their moving parts, along with the field flipper subsystem. A single adaptive nuller upstream of the beam combiner may be required to correct beam train errors (systematic noise), but in some circumstances phase chopping reduces these errors substantially, and there may be ways to further reduce the chop residuals. Though such coatings are beyond the current state of the art, the mechanical simplicity and robustness of a flight system without field flipper or adaptive nullers would perhaps justify considerable effort on coating fabrication.

  1. Tolerance analyses of a quadrupole magnet for advanced photon source upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, J., E-mail: Jieliu@aps.anl.gov; Jaski, M., E-mail: jaski@aps.anl.gov; Borland, M., E-mail: borland@aps.anl.gov

    2016-07-27

Given physics requirements, the mechanical fabrication and assembly tolerances for storage ring magnets can be calculated using analytical methods [1, 2]. However, these methods are not easy to apply to complicated magnet designs [1]. In this paper, a novel method is proposed to determine fabrication and assembly tolerances consistent with physics requirements, through a combination of magnetic and mechanical tolerance analyses. In this study, finite element analysis using OPERA is conducted to estimate the effect of fabrication and assembly errors on the magnetic field of a quadrupole magnet and to determine the allowable tolerances to achieve the specified magnetic performance. Based on the study, allowable fabrication and assembly tolerances for the quadrupole assembly are specified for the mechanical design of the quadrupole magnet. Next, to achieve the required assembly-level tolerances, mechanical tolerance stackup analyses using a 3D tolerance analysis package are carried out to determine the part and subassembly level fabrication tolerances. This method can be used to determine the tolerances for the design of other individual magnets and of magnet strings.

  2. Altitude deviations: Breakdowns of an error-tolerant system

    NASA Technical Reports Server (NTRS)

    Palmer, Everett A.; Hutchins, Edwin L.; Ritter, Richard D.; Vancleemput, Inge

    1993-01-01

    Pilot reports of aviation incidents to the Aviation Safety Reporting System (ASRS) provide a window on the problems occurring in today's airline cockpits. The narratives of 10 pilot reports of errors made in the automation-assisted altitude-change task are used to illustrate some of the issues of pilots interacting with automatic systems. These narratives are then used to construct a description of the cockpit as an information processing system. The analysis concentrates on the error-tolerant properties of the system and on how breakdowns can occasionally occur. An error-tolerant system can detect and correct its internal processing errors. The cockpit system consists of two or three pilots supported by autoflight, flight-management, and alerting systems. These humans and machines have distributed access to clearance information and perform redundant processing of information. Errors can be detected as deviations from either expected behavior or as deviations from expected information. Breakdowns in this system can occur when the checking and cross-checking tasks that give the system its error-tolerant properties are not performed because of distractions or other task demands. Recommendations based on the analysis for improving the error tolerance of the cockpit system are given.

  3. FPGA-Based, Self-Checking, Fault-Tolerant Computers

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2004-01-01

A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors. It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.

  4. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.

  5. Error and attack tolerance of complex networks

    NASA Astrophysics Data System (ADS)

    Albert, Réka; Jeong, Hawoong; Barabási, Albert-László

    2000-07-01

    Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
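
    A rough sketch that reproduces the qualitative experiment described above: grow a scale-free graph and compare the giant-component fraction after random node failures versus targeted removal of the highest-degree nodes. The use of networkx and all parameters are our own choices for illustration, not the original study's setup.

    ```python
    # Random failure vs. targeted attack on a scale-free network.
    import random
    import networkx as nx

    def giant_fraction(G):
        if G.number_of_nodes() == 0:
            return 0.0
        return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

    def remove_nodes(G, fraction, targeted):
        H = G.copy()
        k = int(fraction * H.number_of_nodes())
        if targeted:  # attack: remove highest-degree nodes (hubs) first
            victims = [n for n, _ in sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:k]]
        else:         # error: remove nodes at random
            victims = random.sample(list(H.nodes), k)
        H.remove_nodes_from(victims)
        return H

    G = nx.barabasi_albert_graph(2000, 2, seed=1)   # scale-free test network
    for f in (0.01, 0.05, 0.20):
        print(f, giant_fraction(remove_nodes(G, f, targeted=False)),
                 giant_fraction(remove_nodes(G, f, targeted=True)))
    # Random failures barely shrink the giant component; removing hubs
    # fragments it quickly, matching the paper's qualitative finding.
    ```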

  6. Large field distributed aperture laser semiactive angle measurement system design with imaging fiber bundles.

    PubMed

    Xu, Chunyun; Cheng, Haobo; Feng, Yunpeng; Jing, Xiaoli

    2016-09-01

    A type of laser semiactive angle measurement system is designed for target detecting and tracking. Only one detector is used to detect target location from four distributed aperture optical systems through a 4×1 imaging fiber bundle. A telecentric optical system in image space is designed to increase the efficiency of imaging fiber bundles. According to the working principle of a four-quadrant (4Q) detector, fiber diamond alignment is adopted between an optical system and a 4Q detector. The structure of the laser semiactive angle measurement system is, we believe, novel. Tolerance analysis is carried out to determine tolerance limits of manufacture and installation errors of the optical system. The performance of the proposed method is identified by computer simulations and experiments. It is demonstrated that the linear region of the system is ±12°, with measurement error of better than 0.2°. In general, this new system can be used with large field of view and high accuracy, providing an efficient, stable, and fast method for angle measurement in practical situations.

  7. TED: A Tolerant Edit Distance for segmentation evaluation.

    PubMed

    Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew

    2017-02-15

In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.

  8. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

The full-field heterodyne interferometric measurement technique is becoming more widely applied by employing low-frequency heterodyne acousto-optic modulators instead of complex electro-mechanical scanning devices. The optical element surface can be acquired directly by synchronously detecting the received signal phase at each pixel, because standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer. Instead of the traditional four-step phase-shifting calculation, a Fourier spectral analysis method is used for phase extraction, which brings lower sensitivity to sources of uncertainty and higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described and their advantages and disadvantages are specified. A heterodyne interferometer has to combine two beams of different frequencies to produce interference, which introduces a variety of optical heterodyne frequency errors. Frequency mixing error and beat frequency error are two different kinds of unavoidable heterodyne frequency errors. In this paper, the effects of frequency mixing error on surface measurement are derived, and the relationship between phase extraction accuracy and these errors is calculated. The tolerances on the extinction ratio of the polarization splitting prism and on the signal-to-noise ratio of stray light are given. The error in phase extraction by Fourier analysis caused by beat frequency shifting is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window is used to correct the heterodyne signal phase extraction. The simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
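
    As context for the Fourier-analysis phase extraction described above, the sketch below shows the basic per-pixel operation only: take the temporal FFT of each pixel's heterodyne signal and read the phase of the beat-frequency bin. All parameters are made up, and the paper's Hanning-window spectrum-correction refinement is not reproduced.

    ```python
    # Per-pixel phase extraction from a full-field heterodyne signal
    # by picking the beat-frequency bin of the temporal FFT.
    import numpy as np

    fs, f_beat, n_frames = 1024.0, 64.0, 256       # frame rate, beat frequency (Hz), frames
    t = np.arange(n_frames) / fs

    # Synthetic camera stack: 8x8 pixels, each with its own surface phase.
    true_phase = np.random.uniform(-np.pi, np.pi, size=(8, 8))
    frames = 1.0 + 0.5 * np.cos(2 * np.pi * f_beat * t[:, None, None] + true_phase)

    spectrum = np.fft.rfft(frames, axis=0)
    k = int(round(f_beat * n_frames / fs))         # index of the beat-frequency bin
    est_phase = np.angle(spectrum[k])              # per-pixel phase estimate

    # Residual (wrapped) phase error should be ~0 for this ideal signal.
    print(np.max(np.abs(np.angle(np.exp(1j * (est_phase - true_phase))))))
    ```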

  9. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  10. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits.

    PubMed

    Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M

    2015-04-29

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.

  11. Data and results of a laboratory investigation of microprocessor upset caused by simulated lightning-induced analog transients

    NASA Technical Reports Server (NTRS)

    Belcastro, C. M.

    1984-01-01

Advanced composite aircraft designs include fault-tolerant computer-based digital control systems with high reliability requirements for adverse as well as optimum operating environments. Since aircraft penetrate intense electromagnetic fields during thunderstorms, onboard computer systems may be subjected to field-induced transient voltages and currents, resulting in functional error modes which are collectively referred to as digital system upset. A methodology was developed for assessing the upset susceptibility of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied via tests which involved the random input of analog transients, which model lightning-induced signals, onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The application of Markov modeling to upset susceptibility estimation is discussed and a stochastic model is developed.

  12. Measurement and analysis of operating system fault tolerance

    NASA Technical Reports Server (NTRS)

    Lee, I.; Tang, D.; Iyer, R. K.

    1992-01-01

    This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.

  13. [The medicine use pathway in paediatrics].

    PubMed

    Didelot, Nicolas

    2016-01-01

The medicine use pathway is a process which is constantly evolving in order to comply with inviolable rules. As in other therapeutic fields, the drug regimen in paediatrics must tolerate no error and must be able to detect all warning signs, however minor, in order to optimise this approach. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  14. Design of a fiber-optic multiphoton microscopy handheld probe

    PubMed Central

    Zhao, Yuan; Sheng, Mingyu; Huang, Lin; Tang, Shuo

    2016-01-01

    We have developed a fiber-optic multiphoton microscopy (MPM) system with handheld probe using femtosecond fiber laser. Here we present the detailed optical design and analysis of the handheld probe. The optical systems using Lightpath 352140 and 352150 as objective lens were analyzed. A custom objective module that includes Lightpath 355392 and two customized corrective lenses was designed. Their performances were compared by wavefront error, field curvature, astigmatism, F-θ error, and tolerance in Zemax simulation. Tolerance analysis predicted the focal spot size to be 1.13, 1.19 and 0.83 µm, respectively. Lightpath 352140 and 352150 were implemented in experiment and the measured lateral resolution was 1.22 and 1.3 µm, respectively, which matched with the prediction. MPM imaging by the handheld probe were conducted on leaf, fish scale and rat tail tendon. The MPM resolution can potentially be improved by the custom objective module. PMID:27699109

  15. Design of a fiber-optic multiphoton microscopy handheld probe.

    PubMed

    Zhao, Yuan; Sheng, Mingyu; Huang, Lin; Tang, Shuo

    2016-09-01

    We have developed a fiber-optic multiphoton microscopy (MPM) system with handheld probe using femtosecond fiber laser. Here we present the detailed optical design and analysis of the handheld probe. The optical systems using Lightpath 352140 and 352150 as objective lens were analyzed. A custom objective module that includes Lightpath 355392 and two customized corrective lenses was designed. Their performances were compared by wavefront error, field curvature, astigmatism, F-θ error, and tolerance in Zemax simulation. Tolerance analysis predicted the focal spot size to be 1.13, 1.19 and 0.83 µm, respectively. Lightpath 352140 and 352150 were implemented in experiment and the measured lateral resolution was 1.22 and 1.3 µm, respectively, which matched with the prediction. MPM imaging by the handheld probe were conducted on leaf, fish scale and rat tail tendon. The MPM resolution can potentially be improved by the custom objective module.

  16. Fault Tolerance Middleware for a Multi-Core System

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.

    2012-01-01

    Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. This provides additional information to the introspection module that it can use in generating its next response. For example, if the responder triggers an application rollback and errors are still occurring, the introspection module may decide to recommend an application restart.
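
    A minimal, hypothetical sketch of the introspection flow the description above walks through: an error notification becomes an error object with a severity, a small knowledge base recommends a response, and responses are fed back so repeated faults on the same node escalate. None of the class or response names below come from the actual FTM.

    ```python
    # Illustrative sketch of an introspection-style fault manager
    # (error factory -> severity -> knowledge-base response -> feedback).
    from dataclasses import dataclass, field

    @dataclass
    class Error:
        node: int
        kind: str            # e.g. "seu", "timeout"
        severity: int        # assigned by the "error factory"

    @dataclass
    class Introspector:
        history: list = field(default_factory=list)

        def recommend(self, err: Error) -> str:
            repeats = sum(1 for e in self.history if e.node == err.node)
            self.history.append(err)
            if err.severity >= 3:
                return "restart_application"
            if repeats >= 2:
                return "swap_node"            # escalate: same node keeps failing
            if err.severity == 2:
                return "rollback_to_checkpoint"
            return "log_only"

    fm = Introspector()
    for e in [Error(4, "seu", 1), Error(4, "seu", 2), Error(4, "seu", 2)]:
        print(fm.recommend(e))   # log_only, rollback_to_checkpoint, swap_node
    ```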

  17. Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.

    PubMed

    Menicucci, Nicolas C

    2014-03-28

    A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.

  18. Loss Tolerance in One-Way Quantum Computation via Counterfactual Error Correction

    NASA Astrophysics Data System (ADS)

    Varnava, Michael; Browne, Daniel E.; Rudolph, Terry

    2006-09-01

    We introduce a scheme for fault tolerantly dealing with losses (or other “leakage” errors) in cluster state computation that can tolerate up to 50% qubit loss. This is achieved passively using an adaptive strategy of measurement—no coherent measurements or coherent correction is required. Since the scheme relies on inferring information about what would have been the outcome of a measurement had one been able to carry it out, we call this counterfactual error correction.

  19. The Essential Component in DNA-Based Information Storage System: Robust Error-Tolerating Module

    PubMed Central

    Yim, Aldrin Kay-Yuen; Yu, Allen Chi-Shing; Li, Jing-Woei; Wong, Ada In-Chun; Loo, Jacky F. C.; Chan, King Ming; Kong, S. K.; Yip, Kevin Y.; Chan, Ting-Fung

    2014-01-01

    The size of digital data is ever increasing and is expected to grow to 40,000 EB by 2020, yet the estimated global information storage capacity in 2011 is <300 EB, indicating that most of the data are transient. DNA, as a very stable nano-molecule, is an ideal massive storage device for long-term data archive. The two most notable illustrations are from Church et al. and Goldman et al., whose approaches are well-optimized for most sequencing platforms – short synthesized DNA fragments without homopolymer. Here, we suggested improvements on error handling methodology that could enable the integration of DNA-based computational process, e.g., algorithms based on self-assembly of DNA. As a proof of concept, a picture of size 438 bytes was encoded to DNA with low-density parity-check error-correction code. We salvaged a significant portion of sequencing reads with mutations generated during DNA synthesis and sequencing and successfully reconstructed the entire picture. A modular-based programing framework – DNAcodec with an eXtensible Markup Language-based data format was also introduced. Our experiments demonstrated the practicability of long DNA message recovery with high error tolerance, which opens the field to biocomputing and synthetic biology. PMID:25414846

  20. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. The Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to determine a 3-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A 3-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.

  1. Tolerance and UQ4SIM: Nimble Uncertainty Documentation and Analysis Software

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2008-01-01

Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and variabilities is a necessary first step toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. The basic premise of uncertainty markup is to craft a tolerance and tagging mini-language that offers a natural, unobtrusive presentation and does not depend on parsing each type of input file format. Each file is marked up with tolerances and, optionally, associated tags that serve to label the parameters and their uncertainties. The evolution of such a language, often called a Domain Specific Language or DSL, is given in [1], but in final form it parallels tolerances specified on an engineering drawing, e.g., 1 +/- 0.5, 5 +/- 10%, 2 +/- 1o, where % signifies percent and o signifies order of magnitude. Tags, necessary for error propagation, can be added by placing a quotation-mark-delimited tag after the tolerance, e.g., 0.7 +/- 20% 'T_effective'. In addition, tolerances might have different underlying distributions, e.g., Uniform, Normal, or Triangular, or the tolerances may merely be intervals due to lack of knowledge (uncertainty). Finally, to address pragmatic considerations such as older models that require specific number-field formats, C-style format specifiers can be appended to the tolerance like so, 1.35 +/- 10U_3.2f. As an example of use, consider figure 1, where a chemical reaction input file has been marked up to include tolerances and tags per table 1. Not only does the technique provide a natural method of specifying tolerances, but it also serves as in situ documentation of model uncertainties. This tolerance language comes with a utility to strip the tolerances (and tags), to provide a path to the nominal model parameter file. And, as shown in [1], having the ability to quickly mark and identify model parameter uncertainties facilitates error propagation, which in turn yields output uncertainties.
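
    A rough sketch of what a parser for tolerance annotations in the quoted style might look like. The real DSL grammar is given in the cited reference [1], so the regex, the tag handling, and especially the order-of-magnitude interpretation below are assumptions, not the actual tool.

    ```python
    # Approximate parser for "value +/- tol[%|o] ['tag']" annotations.
    import re

    PATTERN = re.compile(
        r"""(?P<value>[-+0-9.eE]+)\s*        # nominal value
            \+/-\s*
            (?P<tol>[0-9.eE]+)\s*
            (?P<unit>[%o])?\s*               # percent or order-of-magnitude suffix
            (?:'(?P<tag>[^']*)')?            # optional tag for error propagation
        """, re.VERBOSE)

    def parse_tolerance(text):
        m = PATTERN.search(text)
        if not m:
            return None
        value, tol = float(m.group("value")), float(m.group("tol"))
        if m.group("unit") == "%":
            tol = abs(value) * tol / 100.0        # relative -> absolute
        elif m.group("unit") == "o":
            tol = abs(value) * (10.0 ** tol - 1)  # order-of-magnitude bound (assumed meaning)
        return {"value": value, "tol": tol, "tag": m.group("tag")}

    print(parse_tolerance("0.7 +/- 20% 'T_effective'"))
    # -> value 0.7, absolute tolerance ~0.14, tag 'T_effective'
    ```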

  2. Radiation-Tolerant Intelligent Memory Stack - RTIMS

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong; Herath, Jeffrey A.

    2011-01-01

    This innovation provides reconfigurable circuitry and 2 Gb of error-corrected or 1 Gb of triple-redundant digital memory in a small package. RTIMS uses circuit stacking of heterogeneous components and radiation shielding technologies. A reprogrammable field-programmable gate array (FPGA), six synchronous dynamic random access memories, a linear regulator, and the radiation mitigation circuits are stacked into a module of 42.7 × 42.7 × 13 mm. Triple module redundancy, current limiting, configuration scrubbing, and single-event functional interrupt detection are employed to mitigate radiation effects. The novel self-scrubbing and single event functional interrupt (SEFI) detection allows a relatively soft FPGA to become radiation tolerant without external scrubbing and monitoring hardware.

  3. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU decomposition with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
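
    As a hedged illustration of the general idea (the classical row/column checksum construction rather than the particular codes identified in the paper), the sketch below carries checksum rows and columns through a floating-point matrix product and verifies the result afterwards; the function name and tolerance are invented.

```python
import numpy as np

def checksum_matmul(A, B, tol=1e-9):
    """Compute A @ B and verify the product with row/column checksums."""
    Ac = np.vstack([A, A.sum(axis=0)])                  # append column-checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # append row-checksum column
    Cfull = Ac @ Br
    C = Cfull[:-1, :-1]
    row_ok = np.allclose(Cfull[-1, :-1], C.sum(axis=0), atol=tol)
    col_ok = np.allclose(Cfull[:-1, -1], C.sum(axis=1), atol=tol)
    return C, row_ok and col_ok

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
C, consistent = checksum_matmul(A, B)
print(consistent)   # True in the absence of injected faults
```

    The roundoff issue mentioned in the abstract shows up in the choice of tol: too tight a tolerance flags numerical noise as a fault, while too loose a tolerance misses genuine errors.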

  4. Method and apparatus for faulty memory utilization

    DOEpatents

    Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2016-04-19

    A method for faulty memory utilization in a memory system includes: obtaining information regarding memory health status of at least one memory page in the memory system; determining an error tolerance of the memory page when the information regarding memory health status indicates that a failure is predicted to occur in an area of the memory system affecting the memory page; initiating a migration of data stored in the memory page when it is determined that the data stored in the memory page is non-error-tolerant; notifying at least one application regarding a predicted operating system failure and/or a predicted application failure when it is determined that data stored in the memory page is non-error-tolerant and cannot be migrated; and notifying at least one application regarding the memory failure predicted to occur when it is determined that data stored in the memory page is error-tolerant.

  5. Evaluation of a new photomask CD metrology tool

    NASA Astrophysics Data System (ADS)

    Dubuque, Leonard F.; Doe, Nicholas G.; St. Cin, Patrick

    1996-12-01

    In the integrated circuit (IC) photomask industry today, dense IC patterns, sub-micron critical dimensions (CD), and narrow tolerances for 64 M technologies and beyond are driving increased demands to minimize and characterize all components of photomask CD variation. This places strict requirements on photomask CD metrology in order to accurately characterize the mask CD error distribution. According to the gauge-maker's rule, measurement error must not exceed 30% of the tolerance on the product dimension measured, or the gauge is not considered capable. Traditional single-point repeatability tests are a poor measure of overall measurement system error in a dynamic, leading-edge technology environment. In such an environment, measurements may be taken at different points in the field-of-view due to stage inaccuracy, pattern recognition requirements, and throughput considerations. With this in mind, a set of experiments was designed to thoroughly characterize the metrology tool's repeatability and systematic error. The original experiments provided inconclusive results and had to be extended to obtain a full characterization of the system. Tests demonstrated a performance of better than 15 nm total CD error. Using this test as a tool for further development, the authors were able to determine the effects of various system components and measure the improvement with changes in optics, electronics, and software. Optimization of the optical path, electronics, and system software has yielded a new instrument with a total system error of better than 8 nm. Good collaboration between the photomask manufacturer and the equipment supplier has led to a realistic test of system performance and an improved CD measurement instrument.

  6. Checking Questionable Entry of Personally Identifiable Information Encrypted by One-Way Hash Transformation

    PubMed Central

    Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David

    2017-01-01

    Background As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. Methods According to the principle of GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly-planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registering system of study subjects. Results There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the previous one and remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The possibility that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can give a hint of questionable PII entry and perform as an effective tool. Conclusions The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. PMID:28213343
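
    A minimal sketch of the underlying idea follows: one-way hash codes are computed for combinations of PII fields, and set operations on the stored versus newly entered hash codes narrow down which field is questionable. The field list, the pairwise combination rule, and the function name are hypothetical illustrations, not the actual GUID system specification.

```python
import hashlib
from itertools import combinations

FIELDS = ["first_name", "last_name", "dob", "zip"]   # hypothetical PII fields

def hash_codes(record):
    """One-way hashes for every pair of PII fields (one possible pattern set)."""
    codes = set()
    for f1, f2 in combinations(FIELDS, 2):
        payload = f"{f1}={record[f1]}|{f2}={record[f2]}".lower().encode()
        codes.add((f1, f2, hashlib.sha256(payload).hexdigest()))
    return codes

stored = hash_codes({"first_name": "Ann", "last_name": "Lee",
                     "dob": "1980-01-02", "zip": "55901"})
entered = hash_codes({"first_name": "Ann", "last_name": "Lee",
                      "dob": "1980-01-02", "zip": "55091"})   # typo in zip

mismatched = {f for f1, f2, h in stored - entered for f in (f1, f2)}
matched = {f for f1, f2, h in stored & entered for f in (f1, f2)}
print(mismatched - matched)   # {'zip'} -- the questionable field
```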

  7. Checking Questionable Entry of Personally Identifiable Information Encrypted by One-Way Hash Transformation.

    PubMed

    Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David; Yang, Rong

    2017-02-17

    As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. According to the principle of GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly-planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registering system of study subjects. There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the previous one and remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The possibility that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can give a hint of questionable PII entry and perform as an effective tool. The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. ©Xianlai Chen, Yang C Fann, Matthew McAuliffe, David Vismer, Rong Yang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 17.02.2017.

  8. Fault Tolerance for VLSI Multicomputers

    DTIC Science & Technology

    1985-08-01

    that consists of hundreds or thousands of VLSI computation nodes interconnected by dedicated links. Some important applications of high-end computers...technology, and intended applications. A proposed fault tolerance scheme combines hardware that performs error detection and system-level protocols for...order to recover from the error and resume correct operation, a valid system state must be restored. A low-overhead, application-transparent error

  9. Step-by-step magic state encoding for efficient fault-tolerant quantum computation

    PubMed Central

    Goto, Hayato

    2014-01-01

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387

  10. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    PubMed

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  11. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy in the design and manufacturing stage, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis, and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy of the end-effector. Based on the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  12. Overview of error-tolerant cockpit research

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objectives of research in intelligent cockpit aids and intelligent error-tolerant systems are stated. In intelligent cockpit aids research, the objective is to provide increased aid and support to the flight crew of civil transport aircraft through the use of artificial intelligence techniques combined with traditional automation. In intelligent error-tolerant systems, the objective is to develop and evaluate cockpit systems that provide flight crews with safe and effective ways and means to manage aircraft systems, plan and replan flights, and respond to contingencies. A subsystems fault management functional diagram is given. All information is in viewgraph form.

  13. Simulation of aspheric tolerance with polynomial fitting

    NASA Astrophysics Data System (ADS)

    Li, Jing; Cen, Zhaofeng; Li, Xiaotong

    2018-01-01

    The shape of an aspheric lens changes because of machining errors, which alters the optical transfer function and affects image quality. At present, there is no universally recognized tolerance criterion for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, tolerances obtained from polynomial fitting are allocated to the aspheric surface, and imaging simulation is carried out in optical imaging software. The analysis is based on a set of aspheric imaging systems. An error is generated within the range of a certain PV value and expressed in the form of a Zernike polynomial, which is added to the aspheric surface as a tolerance term. Through optical software analysis, the MTF of the optical system is obtained and used as the main evaluation index to judge whether the effect of the added error on the system MTF meets the requirements at the current PV value. The PV value is then changed and the operation repeated until the maximum acceptable PV value is obtained. In accordance with the actual processing technology, errors of various shapes, such as M-type, W-type, and random errors, are considered. The new method provides a reference for the development of actual freeform surface processing technology.

  14. MO-FG-202-01: A Fast Yet Sensitive EPID-Based Real-Time Treatment Verification System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, M; Nourzadeh, H; Neal, B

    2016-06-15

    Purpose: To create a real-time EPID-based treatment verification system which robustly detects treatment delivery and patient attenuation variations. Methods: Treatment plan DICOM files sent to the record-and-verify system are captured and utilized to predict EPID images for each planned control point using a modified GPU-based digitally reconstructed radiograph algorithm which accounts for the patient attenuation, source energy fluence, source size effects, and MLC attenuation. The DICOM and predicted images are utilized by our C++ treatment verification software, which compares EPID-acquired 1024×768 resolution frames acquired at ∼8.5 Hz from a Varian Truebeam™ system. To maximize detection sensitivity, image comparisons determine (1) if radiation exists outside of the desired treatment field; (2) if radiation is lacking inside the treatment field; (3) if translations, rotations, and magnifications of the image are within tolerance. Acquisition was tested with known test fields and prior patient fields. Error detection was tested in real-time and utilizing images acquired during treatment with another system. Results: The computational time of the prediction algorithms, for a patient plan with 350 control points and 60×60×42 cm^3 CT volume, is 2–3 minutes on CPU and <27 seconds on GPU for 1024×768 images. The verification software requires a maximum of ∼9 ms and ∼19 ms for 512×384 and 1024×768 resolution images, respectively, to perform image analysis and dosimetric validations. Typical variations in geometric parameters between reference and the measured images are 0.32° for gantry rotation, 1.006 for scaling factor, and 0.67 mm for translation. For excess out-of-field/missing in-field fluence, with masks extending 1 mm (at isocenter) from the detected aperture edge, the average total in-field area missing EPID fluence was 1.5 mm^2 and the out-of-field excess EPID fluence was 8 mm^2, both below error tolerances. Conclusion: A real-time verification software, with an EPID image prediction algorithm, was developed. The system is capable of performing verifications between frame acquisitions and identifying the source(s) of any out-of-tolerance variations. This work was supported in part by Varian Medical Systems.

  15. A methodology for testing fault-tolerant software

    NASA Technical Reports Server (NTRS)

    Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.

    1985-01-01

    A methodology for testing fault-tolerant software is presented. There are problems associated with testing fault-tolerant software because many errors are masked or corrected by voters, limiters, or automatic channel synchronization. This methodology illustrates how the same strategies used for testing fault-tolerant hardware can be applied to testing fault-tolerant software. For example, one strategy used in testing fault-tolerant hardware is to disable the redundancy during testing. A similar testing strategy is proposed for software, namely, to move the major emphasis of testing earlier in the development cycle (before the redundancy is in place), thus reducing the possibility that undetected errors will be masked when limiters and voters are added.

  16. Characterization of the International Linear Collider damping ring optics

    NASA Astrophysics Data System (ADS)

    Shanks, J.; Rubin, D. L.; Sagan, D.

    2014-10-01

    A method is presented for characterizing the emittance dilution and dynamic aperture for an arbitrary closed lattice that includes guide field magnet errors, multipole errors and misalignments. This method, developed and tested at the Cornell Electron Storage Ring Test Accelerator (CesrTA), has been applied to the damping ring lattice for the International Linear Collider (ILC). The effectiveness of beam based emittance tuning is limited by beam position monitor (BPM) measurement errors, number of corrector magnets and their placement, and correction algorithm. The specifications for damping ring magnet alignment, multipole errors, number of BPMs, and precision in BPM measurements are shown to be consistent with the required emittances and dynamic aperture. The methodology is then used to determine the minimum number of position monitors that is required to achieve the emittance targets, and how that minimum depends on the location of the BPMs. Similarly, the maximum tolerable multipole errors are evaluated. Finally, the robustness of each BPM configuration with respect to random failures is explored.

  17. Fault-tolerant quantum error detection.

    PubMed

    Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher

    2017-10-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.

  18. A framework for software fault tolerance in real-time systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems is presented. The technique requires that the process structure of a system be represented by a synchronization graph, which is used by an executive as a specification of the relative times at which the processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.

  19. A new measurement method of profile tolerance for the LAMOST focal plane

    NASA Astrophysics Data System (ADS)

    Zhou, Zengxiang; Jin, Yi; Zhai, Chao; Xing, Xiaozheng

    2008-07-01

    Several methods have been used to measure the profile tolerance of the LAMOST Focal Plane Plate. One of them was to use a CMM (Coordinate Measuring Machine) to measure points on the small Focal Plane Plate and to determine whether the points lie within the tolerance zone. This process has some shortcomings: the measuring point positions on the Focal Plane Plate are not the actual installation locations of the optical fiber positioning system. To eliminate these systematic errors, a measuring mandrel is inserted into the unit holes, with the fit between the mandrel and the hole controlled to high precision. The center of a precise target ball placed on the measuring mandrel is then measured with the CMM. Finally, a spherical surface is fitted to the measured center points of the target ball, and the profile tolerance of the Focal Plane Plate is analyzed. This procedure corresponds more closely to the actual installation location of the optical fiber positioning system, and judging the profile tolerance in this way also provides reference data for repairing out-of-tolerance unit holes on the Focal Plane Plate. However, when the measuring mandrel is inserted into a unit hole, manufacturing errors of the mandrel and the target ball as well as assembly errors are introduced, all of which influence the measurement. In this paper, the influence of these errors on the intermediate steps is evaluated through experiments. The experimental results show that using the target ball and the measuring mandrel has little influence on the profile tolerance measurement; instead, the method offers advantages over many of the measurement methods used in the past.
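
    As a minimal sketch of the fitting step described above (with invented geometry and numbers), a sphere can be fitted to the measured target-ball centres by linear least squares, and each point's radial deviation then serves as a profile-error estimate.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit. points: (N, 3) array. Returns (centre, radius)."""
    P = np.asarray(points, dtype=float)
    # |p - c|^2 = r^2 rewritten as 2 p.c + (r^2 - |c|^2) = |p|^2, linear in the unknowns
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# Synthetic check: points on a shallow cap of a 20 m sphere centred at (0, 0, -19.5)
rng = np.random.default_rng(0)
theta, phi = rng.uniform(0, 0.2, 200), rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[20 * np.sin(theta) * np.cos(phi),
            20 * np.sin(theta) * np.sin(phi),
            -19.5 + 20 * np.cos(theta)]
centre, radius = fit_sphere(pts)
deviation = np.abs(np.linalg.norm(pts - centre, axis=1) - radius)
print(radius, deviation.max())   # ~20.0 and ~0 for noise-free data
```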

  20. A Swiss cheese error detection method for real-time EPID-based quality assurance and error prevention.

    PubMed

    Passarge, Michelle; Fix, Michael K; Manser, Peter; Stampanoni, Marco F M; Siebers, Jeffrey V

    2017-04-01

    To develop a robust and efficient process that detects relevant dose errors (dose errors of ≥5%) in external beam radiation therapy and directly indicates the origin of the error. The process is illustrated in the context of electronic portal imaging device (EPID)-based angle-resolved volumetric-modulated arc therapy (VMAT) quality assurance (QA), particularly as would be implemented in a real-time monitoring program. A Swiss cheese error detection (SCED) method was created as a paradigm for a cine EPID-based during-treatment QA. For VMAT, the method compares a treatment plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The process utilizes a sequence of independent consecutively executed error detection tests: an aperture check that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment check to examine if rotation, scaling, and translation are within tolerances; pixel intensity check containing the standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each check were determined. To test the SCED method, 12 different types of errors were selected to modify the original plan. A series of angle-resolved predicted EPID images were artificially generated for each test case, resulting in a sequence of precalculated frames for each modified treatment plan. The SCED method was applied multiple times for each test case to assess the ability to detect introduced plan variations. To compare the performance of the SCED process with that of a standard gamma analysis, both error detection methods were applied to the generated test cases with realistic noise variations. Averaged over ten test runs, 95.1% of all plan variations that resulted in relevant patient dose errors were detected within 2° and 100% within 14° (<4% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 89.1% were detected by the SCED method within 2°. Based on the type of check that detected the error, determination of error sources was achieved. With noise ranging from no random noise to four times the established noise value, the averaged relevant dose error detection rate of the SCED method was between 94.0% and 95.8% and that of gamma between 82.8% and 89.8%. An EPID-frame-based error detection process for VMAT deliveries was successfully designed and tested via simulations. The SCED method was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of relevant dose errors. Compared to a typical (3%, 3 mm) gamma analysis, the SCED method produced a higher detection rate for all introduced dose errors, identified errors in an earlier stage, displayed a higher robustness to noise variations, and indicated the error source. © 2017 American Association of Physicists in Medicine.
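
    A hedged sketch of the sequential ("Swiss cheese") idea follows: independent checks are applied in a fixed order and the first failure names the likely error source. The masks, check definitions, and thresholds below are invented placeholders, not the published SCED tolerances or image model.

```python
import numpy as np

def sced(frame, reference, field_mask, out_tol=0.02, intensity_tol=0.03):
    """Return the name of the first failing check, or 'pass'."""
    checks = [
        ("out-of-field radiation",
         frame[~field_mask].max() <= 0.05 * reference.max()),
        ("missing in-field radiation",
         frame[field_mask].min() >= 0.5 * reference[field_mask].min()),
        ("output normalisation",
         abs(frame[field_mask].sum() / reference[field_mask].sum() - 1) <= out_tol),
        ("pixel intensity",
         np.mean(np.abs(frame - reference)[field_mask]) / reference.max() <= intensity_tol),
    ]
    for name, passed in checks:
        if not passed:
            return name          # the first failing slice points at the error source
    return "pass"

ref = np.ones((64, 64)); mask = np.zeros((64, 64), bool); mask[16:48, 16:48] = True
ref[~mask] = 0.0
print(sced(ref.copy(), ref, mask))      # "pass"
bad = ref.copy(); bad[mask] *= 1.10     # simulate a 10% output error
print(sced(bad, ref, mask))             # "output normalisation"
```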

  1. Error rates and resource overheads of encoded three-qubit gates

    NASA Astrophysics Data System (ADS)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  2. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems.

    PubMed

    Li, Ying

    2016-09-16

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  3. Fault-tolerant, high-level quantum circuits: form, compilation and description

    NASA Astrophysics Data System (ADS)

    Paler, Alexandru; Polian, Ilia; Nemoto, Kae; Devitt, Simon J.

    2017-06-01

    Fault-tolerant quantum error correction is a necessity for any quantum architecture destined to tackle interesting, large-scale problems. Its theoretical formalism has been well founded for nearly two decades. However, we still do not have an appropriate compiler to produce a fault-tolerant, error-corrected description from a higher-level quantum circuit for state-of-the-art hardware models. There are many technical hurdles, including dynamic circuit constructions that occur when constructing fault-tolerant circuits with commonly used error correcting codes. We introduce a package that converts high-level quantum circuits consisting of commonly used gates into a form employing all decompositions and ancillary protocols needed for fault-tolerant error correction. We call this form the (I)nitialisation, (C)NOT, (M)easurement form (ICM); it consists of an initialisation layer of qubits into one of four distinct states, a massive, deterministic array of CNOT operations, and a series of time-ordered X- or Z-basis measurements. The form allows a more flexible approach towards circuit optimisation. At the same time, the package outputs a standard circuit or a canonical geometric description, which is a necessity for operating current state-of-the-art hardware architectures using topological quantum codes.

  4. Fault-tolerant quantum error detection

    PubMed Central

    Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher

    2017-01-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889

  5. Analysis of a hardware and software fault tolerant processor for critical applications

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne B.

    1993-01-01

    Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.

  6. Microwave power transmission system studies. Volume 2: Introduction, organization, environmental and spaceborne systems analyses

    NASA Technical Reports Server (NTRS)

    Maynard, O. E.; Brown, W. C.; Edwards, A.; Haley, J. T.; Meltz, G.; Howell, J. M.; Nathan, A.

    1975-01-01

    Introduction, organization, analyses, conclusions, and recommendations for each of the spaceborne subsystems are presented. Environmental effects - propagation analyses are presented with appendices covering radio wave diffraction by random ionospheric irregularities, self-focusing plasma instabilities and ohmic heating of the D-region. Analyses of dc to rf conversion subsystems and system considerations for both the amplitron and the klystron are included with appendices for the klystron covering cavity circuit calculations, output power of the solenoid-focused klystron, thermal control system, and confined flow focusing of a relativistic beam. The photovoltaic power source characteristics are discussed as they apply to interfacing with the power distribution flow paths, magnetic field interaction, dc to rf converter protection, power distribution including estimates for the power budget, weights, and costs. Analyses for the transmitting antenna consider the aperture illumination and size, with associated efficiencies and ground power distributions. Analyses of subarray types and dimensions, attitude error, flatness, phase error, subarray layout, frequency tolerance, attenuation, waveguide dimensional tolerances, mechanical including thermal considerations are included. Implications associated with transportation, assembly and packaging, attitude control and alignment are discussed. The phase front control subsystem, including both ground based pilot signal driven adaptive and ground command approaches with their associated phase errors, are analyzed.

  7. Imperfect construction of microclusters

    NASA Astrophysics Data System (ADS)

    Schneider, E.; Zhou, K.; Gilbert, G.; Weinstein, Y. S.

    2014-01-01

    Microclusters are the basic building blocks used to construct cluster states capable of supporting fault-tolerant quantum computation. In this paper, we explore the consequences of errors on microcluster construction using two error models. To quantify the effect of the errors we calculate the fidelity of the constructed microclusters and the fidelity with which two such microclusters can be fused together. Such simulations are vital for gauging the capability of an experimental system to achieve fault tolerance.

  8. Imputing Risk Tolerance From Survey Responses

    PubMed Central

    Kimball, Miles S.; Sahm, Claudia R.; Shapiro, Matthew D.

    2010-01-01

    Economic theory assigns a central role to risk preferences. This article develops a measure of relative risk tolerance using responses to hypothetical income gambles in the Health and Retirement Study. In contrast to most survey measures that produce an ordinal metric, this article shows how to construct a cardinal proxy for the risk tolerance of each survey respondent. The article also shows how to account for measurement error in estimating this proxy and how to obtain consistent regression estimates despite the measurement error. The risk tolerance proxy is shown to explain differences in asset allocation across households. PMID:20407599

  9. Propagation of angular errors in two-axis rotation systems

    NASA Astrophysics Data System (ADS)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
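
    The sketch below illustrates the root-sum-of-squares budgeting described above; the error sources, tolerances, and sensitivity (weighting) factors are invented numbers for illustration, not the values tabulated in the paper.

```python
import numpy as np

# name: (tolerance in arcsec from the drawings, sensitivity/weighting factor)
error_sources = {
    "axis wobble":       (5.0, 1.00),
    "orthogonality":     (8.0, 0.71),
    "encoder scale":     (3.0, 1.00),
    "zeroing residual":  (4.0, 0.50),   # partially correctable term, down-weighted
}

rss = np.sqrt(sum((tol * w) ** 2 for tol, w in error_sources.values()))
print(f"combined pointing error: {rss:.2f} arcsec (k=1), {2 * rss:.2f} arcsec (k=2)")
```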

  10. Topographical optimization of structures for use in musical instruments and other applications

    NASA Astrophysics Data System (ADS)

    Kirkland, William Brandon

    Mallet percussion instruments such as the xylophone, marimba, and vibraphone have been produced and tuned since their inception by arduously grinding the keys to achieve harmonic ratios between their 1st, 2nd, and 3rd transverse modes. In consideration of this, it would be preferable to have defined mathematical models such that the keys of these instruments can be produced quickly and reliably. Additionally, physical modeling of these keys or beams provides a useful application of non-uniform beam vibrations as studied by Euler-Bernoulli and Timoshenko beam theories. This thesis work presents a literature review of previous studies regarding mallet percussion instrument design and optimization of non-uniform keys. The progression of previous research from strictly mathematical approaches to finite element methods is shown, ultimately arriving at the most current optimization techniques used by other authors. However, previous research varies slightly in the relative degree of accuracy to which a non-uniform beam can be modeled. Typically, accuracies are reported in the literature as 1% to 2% error. While this seems attractive, musical tolerances require 0.25% error or less, and beams outside this tolerance are unsuitable. This research seeks to build on the previous field research by optimizing beam topology and machining keys within tolerances such that no further tuning is required. The optimization methods relied on finite element analysis and used harmonic modal frequencies as constraints rather than as arguments of an error function to be optimized. Instead, the beam mass was minimized while the modal frequency constraints were required to be satisfied within 0.25% tolerance. The final optimized and machined keys of an A4 vibraphone were shown to be accurate within the required musical tolerances, with strong resonance at the designed frequencies. The findings solidify a systematic method for designing musical structures for accuracy and repeatability upon manufacture.

  11. Analysis of field errors for LARP Nb 3Sn HQ03 quadrupole magnet

    DOE PAGES

    Wang, Xiaorong; Ambrosio, Giorgio; Chlachidze, Guram; ...

    2016-12-01

    The U.S. LHC Accelerator Research Program, in close collaboration with CERN, has developed three generations of high-gradient quadrupole (HQ) Nb3Sn model magnets to support the development of the 150 mm aperture Nb3Sn quadrupole magnets for the High-Luminosity LHC. The latest generation, HQ03, featured coils with better uniformity of coil dimensions and properties than the earlier generations. We tested the HQ03 magnet at FNAL, including a field quality study. The profiles of low-order harmonics along the magnet aperture observed at 15 kA, 1.9 K can be traced back to the assembled coil pack before the magnet assembly. Based on the measured harmonics in the magnet center region, the coil block positioning tolerance was analyzed and compared with the earlier HQ01 and HQ02 magnets to correlate with coil and magnet fabrication. To study the capability of correcting the low-order non-allowed field errors, magnetic shims were installed in HQ03; the measured shim contribution agreed well with the calculation. For the persistent-current effect, the measured a4 can be related to a strand magnetization 4% higher in one coil with respect to the other three coils. Lastly, we compare the field errors due to the inter-strand coupling currents between HQ03 and HQ02.

  12. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  13. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boehnke, E McKenzie; DeMarco, J; Steers, J

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.

  14. On the representation and estimation of spatial uncertainty. [for mobile robot

    NASA Technical Reports Server (NTRS)

    Smith, Randall C.; Cheeseman, Peter

    1987-01-01

    This paper describes a general method for estimating the nominal relationship and expected error (covariance) between coordinate frames representing the relative locations of objects. The frames may be known only indirectly through a series of spatial relationships, each with its associated error, arising from diverse causes, including positioning errors, measurement errors, or tolerances in part dimensions. This estimation method can be used to answer such questions as whether a camera attached to a robot is likely to have a particular reference object in its field of view. The calculated estimates agree well with those from an independent Monte Carlo simulation. The method makes it possible to decide in advance whether an uncertain relationship is known accurately enough for some task and, if not, how much of an improvement in locational knowledge a proposed sensor will provide. The method presented can be generalized to six degrees of freedom and provides a practical means of estimating the relationships (position and orientation) among objects, as well as estimating the uncertainty associated with the relationships.
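
    A hedged, planar sketch of the compounding step follows: two uncertain frame-to-frame relationships are chained, and their covariances are propagated to first order through the Jacobians of the compounding function. The numbers are invented, and the full method generalizes this to six degrees of freedom.

```python
import numpy as np

def compound(pose1, cov1, pose2, cov2):
    """Chain two planar relationships. pose = (x, y, theta); cov = 3x3 covariance."""
    x1, y1, t1 = pose1
    x2, y2, t2 = pose2
    c, s = np.cos(t1), np.sin(t1)
    pose3 = (x1 + c * x2 - s * y2, y1 + s * x2 + c * y2, t1 + t2)
    J1 = np.array([[1, 0, -s * x2 - c * y2],     # sensitivity to the first relationship
                   [0, 1,  c * x2 - s * y2],
                   [0, 0,  1]])
    J2 = np.array([[c, -s, 0],                   # sensitivity to the second relationship
                   [s,  c, 0],
                   [0,  0, 1]])
    cov3 = J1 @ cov1 @ J1.T + J2 @ cov2 @ J2.T   # first-order covariance propagation
    return pose3, cov3

robot = ((1.0, 0.5, np.pi / 4), np.diag([0.01, 0.01, 0.001]))    # robot in world frame
camera = ((0.2, 0.0, 0.0), np.diag([1e-4, 1e-4, 5e-4]))          # camera in robot frame
pose, cov = compound(*robot, *camera)
print(pose, np.sqrt(np.diag(cov)))   # nominal camera pose in the world and 1-sigma errors
```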

  15. Assessing tropical rainforest growth traits: Data - Model fusion in the Congo basin and beyond

    NASA Astrophysics Data System (ADS)

    Pietsch, Stephan

    2017-04-01

    Virgin forest ecosystems represent the key reference level for natural tree growth dynamics. The mosaic cycle concept describes such dynamics as local disequilibria driven by patch-level succession cycles of breakdown, regeneration, juvenescence, and old growth. These cycles, however, may involve different traits of light-demanding and shade-tolerant species assemblies. In this work a data-model fusion concept is introduced to assess the differences in growth dynamics of the mosaic cycle of the Western Congolian Lowland Rainforest ecosystem. Field data from 34 forest patches located in an ice-age forest refuge, recently pinpointed on the ground and still devoid of direct human impact up to today, form the data base. A 3D error assessment procedure versus BGC model simulations for the 34 patches revealed two different growth dynamics, consistent with observed growth traits of pioneer and late-succession species assemblies of the Western Congolian Lowland rainforest. An application of the same procedure to Central American Pacific rainforests confirms the strength of the 3D error field data-model fusion concept for assessing different growth traits of the mosaic cycle of natural forest dynamics.

  16. Fault tolerant architectures for integrated aircraft electronics systems

    NASA Technical Reports Server (NTRS)

    Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.

    1983-01-01

    Work into possible architectures for future flight control computer systems is described. Ada for Fault-Tolerant Systems, the NETS Network Error-Tolerant System architecture, and voting in asynchronous systems are covered.

  17. Software fault tolerance for real-time avionics systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    Avionics systems have very high reliability requirements and are therefore prime candidates for the inclusion of fault tolerance techniques. In order to provide tolerance to software faults, some form of state restoration is usually advocated as a means of recovery. State restoration can be very expensive for systems which utilize concurrent processes. The concurrency present in most avionics systems and the further difficulties introduced by timing constraints imply that providing tolerance for software faults may be inordinately expensive or complex. A straightforward pragmatic approach to software fault tolerance which is believed to be applicable to many real-time avionics systems is proposed. A classification system for software errors is presented together with approaches to recovery and continued service for each error type.

  18. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    NASA Astrophysics Data System (ADS)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration by selecting the size of each step so that some measure of the local error is ≃δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tn a new step-size hn+1 = h(tn; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property under a discontinuous step-size policy, which does not allow the size of the step to change if the step-size ratio between two consecutive steps is close to unity, is carried out. This theory is applied to obtain global error estimations in a few problems that have been solved with the code Gauss2 [S. Gonzalez-Pinto, R. Rojas-Bello, Gauss2, a Fortran 90 code for second order initial value problems, ], based on an adaptive two-stage Runge-Kutta-Gauss method with this discontinuous step-size policy.
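
    The sketch below is a minimal illustration, not the Gauss2 code or the discontinuous policy studied in the paper, of the local-error step-size control this work builds on: an embedded pair advances only when the estimated local error is at most the tolerance δ and rescales the step size accordingly, so the resulting global error shrinks with δ (tolerance proportionality).

```python
import numpy as np

def rk_adaptive(f, t0, y0, t_end, delta):
    """Advance y' = f(t, y) with an Euler/Heun embedded pair and local error <= delta."""
    t, y, h = t0, np.asarray(y0, float), (t_end - t0) * 0.01
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                       # 1st-order solution
        y_high = y + 0.5 * h * (k1 + k2)         # 2nd-order solution
        err = np.max(np.abs(y_high - y_low))     # local error estimate
        if err <= delta:                         # accept the step
            t, y = t + h, y_high
        h *= min(2.0, max(0.2, 0.9 * (delta / max(err, 1e-16)) ** 0.5))
    return y

# y' = -y, y(0) = 1 on [0, 1]; the global error at t = 1 tracks the tolerance
for delta in (1e-3, 1e-5, 1e-7):
    y1 = rk_adaptive(lambda t, y: -y, 0.0, [1.0], 1.0, delta)
    print(delta, abs(y1[0] - np.exp(-1)))
```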

  19. Fault tolerance with noisy and slow measurements and preparation.

    PubMed

    Paz-Silva, Gerardo A; Brennen, Gavin K; Twamley, Jason

    2010-09-03

    It is not so well known that measurement-free quantum error correction protocols can be designed to achieve fault-tolerant quantum computing. Despite their potential advantages in terms of the relaxation of accuracy, speed, and addressing requirements, they have usually been overlooked since they are expected to yield a very bad threshold. We show that this is not the case. We design fault-tolerant circuits for the 9-qubit Bacon-Shor code and find an error threshold for unitary gates and preparation of p_thresh^(p,g) = 3.76×10^-5 (30% of the best known result for the same code using measurement) while admitting up to 1/3 error rates for measurements and allocating no constraints on measurement speed. We further show that demanding gate error rates sufficiently below the threshold pushes the preparation threshold up to p_thresh^(p) = 1/3.

  20. Healing assessment of tile sets for error tolerance in DNA self-assembly.

    PubMed

    Hashempour, M; Mashreghian Arani, Z; Lombardi, F

    2008-12-01

    An assessment of the effectiveness of healing for error tolerance in DNA self-assembly tile sets for algorithmic/nano-manufacturing applications is presented. Initially, the conditions for correct binding of a tile to an existing aggregate are analysed using a Markovian approach; based on this analysis, it is proved that correct aggregation (as identified with a so-called ideal tile set) is not always met for the existing tile sets for nano-manufacturing. A metric for assessing tile sets for healing by utilising punctures is proposed. Tile sets are investigated and assessed with respect to features such as error (mismatched tile) movement, punctured area and bond types. Subsequently, it is shown that the proposed metric can comprehensively assess the healing effectiveness of a puncture type for a tile set and its capability to attain error tolerance for the desired pattern. Extensive simulation results are provided.

  1. Adherence to balance tolerance limits at the Upper Mississippi Science Center, La Crosse, Wisconsin.

    USGS Publications Warehouse

    Myers, C.T.; Kennedy, D.M.

    1998-01-01

    Verification of balance accuracy entails applying a series of standard masses to a balance prior to use and recording the measured values. The recorded values for each standard should have lower and upper weight limits or tolerances that are accepted as verification of balance accuracy under normal operating conditions. Balance logbooks for seven analytical balances at the Upper Mississippi Science Center were checked over a 3.5-year period to determine if the recorded weights were within the established tolerance limits. A total of 9435 measurements were checked. There were 14 instances in which the balance malfunctioned and operators recorded a rationale in the balance logbook. Sixty-three recording errors were found. Twenty-eight operators were responsible for two types of recording errors: Measurements of weights were recorded outside of the tolerance limit but not acknowledged as an error by the operator (n = 40); and measurements were recorded with the wrong number of decimal places (n = 23). The adherence rate for following tolerance limits was 99.3%. To ensure the continued adherence to tolerance limits, the quality-assurance unit revised standard operating procedures to require more frequent review of balance logbooks.

  2. Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials

    NASA Astrophysics Data System (ADS)

    Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.

    2007-03-01

    In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher resolution imaging capabilities. A radiology core doing clinical trials has been analyzing more treatment methods, and there is a growing quantity of metadata that needs to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata with one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage that has fault tolerance and dynamic integration for imaging-based clinical trials.

  3. A robust pseudo-inverse spectral filter applied to the Earth Radiation Budget Experiment (ERBE) scanning channels

    NASA Technical Reports Server (NTRS)

    Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.

    1984-01-01

    Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudo-inverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for a given Sun-scene-spacecraft geometry. It is found that the pseudo-inverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum variance and essentially unbiased radiance estimates.
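
    A minimal numerical sketch of the pseudo-inverse idea described above (not the actual ERBE algorithm or coefficients): the three channel measurements are modeled as a linear, slightly non-ideal response to the two unknown band radiances, and the two radiances are recovered with the Moore-Penrose pseudo-inverse. The response matrix, radiances, and noise level are all invented.

    ```python
    import numpy as np

    # Assumed (invented) response of the shortwave, longwave and total channels
    # to the true shortwave and longwave radiance components.
    A = np.array([[0.96, 0.03],    # SW channel: mostly SW, slight LW leakage
                  [0.02, 0.98],    # LW channel: mostly LW, slight SW leakage
                  [0.99, 0.97]])   # "total" channel responds to both bands

    true_radiance = np.array([120.0, 80.0])          # [SW, LW], W m^-2 sr^-1
    rng = np.random.default_rng(0)
    measurements = A @ true_radiance + rng.normal(0.0, 0.5, size=3)  # channel noise

    # Pseudo-inverse (least-squares) estimate of the two band radiances.
    estimate = np.linalg.pinv(A) @ measurements
    print("estimated SW, LW radiance:", estimate)
    print("errors:", estimate - true_radiance)
    ```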

  4. Fast and error-resilient coherent control in an atomic vapor

    NASA Astrophysics Data System (ADS)

    He, Yizun; Wang, Mengbing; Zhao, Jian; Qiu, Liyang; Wang, Yuzhuo; Fang, Yami; Zhao, Kaifeng; Wu, Saijun

    2017-04-01

    Nanosecond chirped pulses from an optical arbitrary waveform generator are applied to both invert and coherently split the D1 line population of potassium vapor within a laser focal volume of 2×10^5 μm^3. The inversion fidelity of f > 96%, mainly limited by spontaneous emission during the nanosecond pulse, is inferred from both probe light transmission and superfluorescence emission. The nearly perfect inversion is uniformly achieved for laser intensity varying over an order of magnitude, and is tolerant to detuning errors of more than 1000 times the D1 transition linewidth. We further demonstrate enhanced intensity error resilience with multiple chirped pulses and "universal composite pulses". This fast and robust coherent control technique should find wide applications in the fields of quantum optics, laser cooling, and atom interferometry. This work is supported by the National Key Research Program of China under Grant No. 2016YFA0302000, and NNSFC under Grant No. 11574053.

  5. Frequency analysis of DC tolerant current transformers

    NASA Astrophysics Data System (ADS)

    Mlejnek, P.; Kaspar, P.

    2013-09-01

    This article deals with the wide-frequency-range behaviour of DC tolerant current transformers that are usually used in modern static energy meters. In this application, current transformers must comply with European and International Standards in their accuracy and DC tolerance. Therefore, linear DC tolerant current transformers and double-core current transformers are used in this field. More details about the problems of these particular types of transformers can be found in our previous works. Although these transformers are designed mainly for the power distribution network frequency (50/60 Hz), it is interesting to understand their behaviour over a wider frequency range. Based on this knowledge, new generations of energy meters capable of measuring the quality of electric energy will be produced. This solution brings better measurement of the consumption of nonlinear loads and of non-sinusoidal voltage and current sources such as solar cells or fuel cells. The determination of actual power consumption in such energy meters is done using particular harmonic components of current and voltage. To characterize several samples of current transformers of both types, we measured the phase and ratio errors, which are the most important parameters of current transformers.

  6. Performance of various branch-point tolerant phase reconstructors with finite time delays and measurement noise

    NASA Astrophysics Data System (ADS)

    Zetterlind, Virgil E., III; Magee, Eric P.

    2002-06-01

    This study extends branch point tolerant phase reconstructor research to examine the effect of finite time delays and measurement error on system performance. Branch point tolerant phase reconstruction is particularly applicable to atmospheric laser weapon and communication systems, which operate in extended turbulence. We examine the relative performance of a least squares reconstructor, least squares plus hidden phase reconstructor, and a Goldstein branch point reconstructor for various correction time-delays and measurement noise scenarios. Performance is evaluated using a wave-optics simulation that models a 100km atmospheric propagation of a point source beacon to a transmit/receive aperture. Phase-only corrections are then calculated using the various reconstructor algorithms and applied to an outgoing uniform field. Point Strehl is used as the performance metric. Results indicate that while time delays and measurement noise reduce the performance of branch point tolerant reconstructors, these reconstructors can still outperform least squares implementations in many cases. We also show that branch point detection becomes the limiting factor in measurement noise corrupted scenarios.

  7. MO-F-CAMPUS-T-03: Data Driven Approaches for Determination of Treatment Table Tolerance Values for Record and Verification Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, N; DiCostanzo, D; Fullenkamp, M

    2015-06-15

    Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral and vertical directions for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population, and for some disease sites, were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations corresponding to setup uncertainties on non-imaging days and to SRS/SRT/SBRT patients, as well as some large deviations which were spot checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical, for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used. Conclusion: With the use of automation, auto-setup and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments but flag setup errors that need to be reassessed before treatment.
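
    A minimal sketch of the data-driven tolerance calculation described above: for each plan, the standard deviation of the daily couch deviations from the verification-simulation value is computed for one axis, and the tolerance is taken as the median of those standard deviations plus one standard error across the population. The array shape and the numbers are illustrative, not the clinical data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Illustrative data: daily couch deviations (cm) from the verification-simulation
    # position, one row per plan, one column per treated fraction, single axis.
    deviations = rng.normal(0.0, 0.3, size=(343, 30))

    per_plan_sd = deviations.std(axis=1, ddof=1)           # setup spread per plan
    median_sd = np.median(per_plan_sd)
    standard_error = per_plan_sd.std(ddof=1) / np.sqrt(per_plan_sd.size)

    tolerance = median_sd + standard_error                  # median + 1 standard error
    within = np.mean(np.abs(deviations) <= tolerance)       # fraction of fractions passing
    print(f"tolerance: {tolerance:.2f} cm, fractions within tolerance: {100*within:.1f}%")
    ```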

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balderson, Michael, E-mail: michael.balderson@rmp.uhn.ca; Brown, Derek; Johnson, Patricia

    The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experiences a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15 mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from those values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant to geometric miss errors than VMAT.

  9. Constitutive parameter measurements of lossy materials

    NASA Technical Reports Server (NTRS)

    Dominek, A.; Park, A.

    1989-01-01

    The electrical constitutive parameters of lossy materials are considered. A discussion of the NRL arch for lossy coatings is presented involving analytical analyses of the reflected field using the geometrical theory of diffraction (GTD) and physical optics (PO). The actual values for these parameters can be obtained through a traditional transmission technique which is examined from an error analysis standpoint. Alternate sample geometries are suggested for this technique to reduce sample tolerance requirements for accurate parameter determination. The performance for one alternate geometry is given.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bahrdt, J.; Frentrup, W.; Gaupp, A.

    BESSY plans to build a SASE-FEL facility for the energy range from 20 eV to 1000 eV. The energy range will be covered by three APPLE II type undulators with a magnetic length of about 60 m each. This paper summarizes the basic parameters of the FEL-undulators. The magnetic design will be presented. A modified APPLE II design will be discussed which provides higher fields at the expense of reduced horizontal access. GENESIS simulations give an estimate on the tolerances for the beam wander and for gap errors.

  11. Fault Tolerant Signal Processing Using Finite Fields and Error-Correcting Codes.

    DTIC Science & Technology

    1983-06-01

    Component polynomials may be transformed to an equivalent series of multiplications of the related transform coefficients.

  12. Universal fault-tolerant quantum computation with only transversal gates and error correction.

    PubMed

    Paetznick, Adam; Reichardt, Ben W

    2013-08-30

    Transversal implementations of encoded unitary gates are highly desirable for fault-tolerant quantum computation. Though transversal gates alone cannot be computationally universal, they can be combined with specially distilled resource states in order to achieve universality. We show that "triorthogonal" stabilizer codes, introduced for state distillation by Bravyi and Haah [Phys. Rev. A 86, 052329 (2012)], admit transversal implementation of the controlled-controlled-Z gate. We then construct a universal set of fault-tolerant gates without state distillation by using only transversal controlled-controlled-Z, transversal Hadamard, and fault-tolerant error correction. We also adapt the distillation procedure of Bravyi and Haah to Toffoli gates, improving on existing Toffoli distillation schemes.

  13. Fault-Tolerant Computing: An Overview

    DTIC Science & Technology

    1991-06-01

    ... applicable to bit-sliced organizations of hardware. In the first time step, the normal computation is performed on the operands and the results ... for error detection and fault tolerance in parallel processor systems while performing specific computation-intensive applications [11].

  14. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE PAGES

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...

    2017-02-15

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10^-4).

  15. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    PubMed Central

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter

    2017-01-01

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466

  16. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10^-4).

  17. Using concatenated quantum codes for universal fault-tolerant quantum gates.

    PubMed

    Jochym-O'Connor, Tomas; Laflamme, Raymond

    2014-01-10

    We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.

  18. A fault-tolerant information processing concept for space vehicles.

    NASA Technical Reports Server (NTRS)

    Hopkins, A. L., Jr.

    1971-01-01

    A distributed fault-tolerant information processing system is proposed, comprising a central multiprocessor, dedicated local processors, and multiplexed input-output buses connecting them together. The processors in the multiprocessor are duplicated for error detection, which is felt to be less expensive than using coded redundancy of comparable effectiveness. Error recovery is made possible by a triplicated scratchpad memory in each processor. The main multiprocessor memory uses replicated memory for error detection and correction. Local processors use any of three conventional redundancy techniques: voting, duplex pairs with backup, and duplex pairs in independent subsystems.

  19. Load Sharing Behavior of Star Gearing Reducer for Geared Turbofan Engine

    NASA Astrophysics Data System (ADS)

    Mo, Shuai; Zhang, Yidu; Wu, Qiong; Wang, Feiming; Matsumura, Shigeki; Houjoh, Haruo

    2017-07-01

    Load sharing behavior is very important for power-split gearing systems; the star gearing reducer, as a new and special type of transmission system, can be used in many industrial fields. However, there is little literature on the key multiple-split load sharing issue in the main gearbox used in the new type of geared turbofan engine. A further mechanism analysis is made of the load sharing behavior among the star gears of a star gearing reducer for a geared turbofan engine. Comprehensive meshing error analyses are conducted for the eccentricity error, gear thickness error, base pitch error, assembly error, and bearing error of the star gearing reducer, respectively. The floating meshing error resulting from meshing clearance variation caused by the simultaneous floating of the sun gear and annular gear is taken into account. A refined mathematical model for load sharing coefficient calculation is established in consideration of the different meshing stiffnesses and supporting stiffnesses of the components. The regular curves of the load sharing coefficient under the influence of interactions, single action, and single variation of the various component errors are obtained. The sensitivity of the load sharing coefficient to the different errors is determined. The load sharing coefficient of the star gearing reducer is 1.033 and the maximum meshing force on a gear tooth is about 3010 N. This paper provides theoretical evidence for optimal parameter design and proper tolerance distribution in the development and manufacturing process, so as to achieve optimal results in economy and technology.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G.

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2^2 to 15^2 cm^2. Two nominal spot sizes, 4.0 and 6.6 mm of 1 σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5 σ to 1.5 σ, which is 2-6 mm for the small spot size and 3.3-9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and of systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of the percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1 σ resulted in a 2% maximum dose error; a SS larger than 1.25 σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed for multiple spot errors (both systematic and random). Systematic PE can lead to noticeable hot spots along the field edges, which may be near critical structures. However, random PE showed minimal dose error. Conclusions: The dependence of dose error on PE was quantitatively and systematically characterized, and an analytic tool was built to simulate systematic and random errors for patient-specific IMPT. This information facilitates the determination of facility-specific spot position error thresholds.
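
    A minimal one-dimensional sketch of the effect described above: a near-uniform dose is built from equally weighted Gaussian spots on a regular grid, one spot is shifted by a positioning error, and the resulting hot/cold spot pair is quantified as a percent dose error relative to the unperturbed maximum dose. The spot size, spacings, and the 1-D geometry are illustrative simplifications and will not reproduce the quoted percentages.

    ```python
    import numpy as np

    def dose(x, spot_positions, sigma):
        """Total dose from equally weighted Gaussian spots (1-D)."""
        return sum(np.exp(-0.5 * ((x - p) / sigma) ** 2) for p in spot_positions)

    sigma = 4.0                      # spot size (mm, 1 sigma), illustrative
    x = np.linspace(-40.0, 40.0, 2001)

    for spacing in (0.5 * sigma, 1.0 * sigma, 1.25 * sigma, 1.5 * sigma):
        spots = np.arange(-30.0, 30.0 + 1e-9, spacing)
        nominal = dose(x, spots, sigma)
        shifted = spots.copy()
        shifted[len(shifted) // 2] += 1.2            # 1.2 mm error on the central spot
        perturbed = dose(x, shifted, sigma)
        pde = 100.0 * (perturbed - nominal) / nominal.max()   # hot/cold spot pair
        print(f"spacing {spacing:4.1f} mm: dose error {pde.max():+.2f}% / {pde.min():+.2f}%")
    ```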

  1. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
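
    A minimal sketch of the two round-off control devices mentioned above: computed values smaller in magnitude than a tolerance factor are rounded to zero, and an approximate inverse is improved by a few steps of an iterative refinement scheme. The Newton-Schulz iteration used here is a standard choice, not necessarily the reinversion procedure used in the report.

    ```python
    import numpy as np

    TOL = 1e-13   # tolerance factor reflecting the digits carried (cf. 0.1 x 10^-12 above)

    def clean(M, tol=TOL):
        """Round values that should be exact zeros (but are only round-off) to zero."""
        M = np.asarray(M, dtype=float).copy()
        M[np.abs(M) < tol] = 0.0
        return M

    def refine_inverse(A, X):
        """One Newton-Schulz step: improves an approximate inverse X of A."""
        return X @ (2.0 * np.eye(A.shape[0]) - A @ X)

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    X = np.linalg.inv(A) + 1e-6 * np.ones_like(A)   # deliberately perturbed inverse
    for _ in range(3):
        X = refine_inverse(A, X)
    print("residual ||AX - I||:", np.linalg.norm(A @ X - np.eye(3)))
    print(clean(A @ X - np.eye(3)))                 # residual rounded to exact zeros
    ```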

  2. 77 FR 72984 - Buprofezin Pesticide Tolerances; Technical Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-07

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 180 [EPA-HQ-OPP-2011-0759; FRL-9371-3] Buprofezin..., 2012, concerning buprofezin pesticide tolerances. This document corrects a typographical error. DATES...: Sec. 180.511 Buprofezin; tolerances for residues. (a) * * * Commodity ... Parts per million...

  3. Error Tolerant Plan Recognition: An Empirical Investigation

    DTIC Science & Technology

    2015-05-01

    structure can differ drastically in semantics. For instance, a plan to travel to a grocery store to buy milk might coincidentally be structurally ... algorithm for its ability to tolerate input errors, and that storing and leveraging state information in its plan representation substantially ... proposed a novel representation for storing and organizing plans in a plan library, based on action-state pairs and abstract states. It counts the

  4. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.

  5. Implementation of an experimental fault-tolerant memory system

    NASA Technical Reports Server (NTRS)

    Carter, W. C.; Mccarthy, C. E.

    1976-01-01

    The experimental fault-tolerant memory system described in this paper has been designed to enable the modular addition of spares, to validate the theoretical fault-secure and self-testing properties of the translator/corrector, to provide a basis for experiments using the new testing and correction processes for recovery, and to determine the practicality of such systems. The hardware design and implementation are described, together with methods of fault insertion. The hardware/software interface, including a restricted single error correction/double error detection (SEC/DED) code, is specified. Procedures are carefully described which, (1) test for specified physical faults, (2) ensure that single error corrections are not miscorrections due to triple faults, and (3) enable recovery from double errors.

  6. Enhanced fault-tolerant quantum computing in d-level systems.

    PubMed

    Campbell, Earl T

    2014-12-05

    Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.

  7. Improvements to the design process for a real-time passive millimeter-wave imager to be used for base security and helicopter navigation in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Anderton, Rupert N.; Cameron, Colin D.; Burnett, James G.; Güell, Jeff J.; Sanders-Reed, John N.

    2014-06-01

    This paper discusses the design of an improved passive millimeter wave imaging system intended to be used for base security in degraded visual environments. The discussion starts with the selection of the optimum frequency band. The trade-offs between requirements on detection, recognition and identification ranges and optical aperture are discussed with reference to the Johnson Criteria. It is shown that these requirements also affect image sampling, receiver numbers and noise temperature, frame rate, field of view, focusing requirements and mechanisms, and tolerance budgets. The effect of image quality degradation is evaluated and a single testable metric is derived that best describes the effects of degradation on meeting the requirements. The discussion is extended to tolerance budgeting constraints if significant degradation is to be avoided, including surface roughness, receiver position errors and scan conversion errors. Although the reflective twist-polarization imager design proposed is potentially relatively low cost and high performance, there is a significant problem with obscuration of the beam by the receiver array. Methods of modeling this accurately and thus designing for best performance are given.

  8. SU-F-T-313: Clinical Results of a New Customer Acceptance Test for Elekta VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rusk, B; Fontenot, J

    Purpose: To report the results of a customer acceptance test (CAT) for VMAT treatments for two matched Elekta linear accelerators. Methods: The CAT tests were performed on two clinically matched Elekta linear accelerators equipped with a 160-leaf MLC. Functional tests included performance checks of the control system during dynamic movements of the diaphragms, MLC, and gantry. Dosimetric tests included MLC picket fence tests at static and variable dose rates and a diaphragm alignment test, all performed using the on-board EPID. Additionally, beam symmetry during arc delivery was measured at the four cardinal angles for high and low dose rate modes using a 2D detector array. Results of the dosimetric tests were analyzed using the VMAT CAT analysis tool. Results: Linear accelerator 1 (LN1) met all stated CAT tolerances. Linear accelerator 2 (LN2) passed the geometric, beam symmetry, and MLC position error tests but failed the relative dose average test for the diaphragm abutment and all three picket fence fields. Though peak doses in the abutment regions were consistent, the average dose was below the stated tolerance, corresponding to a leaf junction that was too narrow. Despite this, no significant differences in patient-specific VMAT quality assurance measurements were observed between the accelerators, and both passed monthly MLC quality assurance performed with the Hancock test. Conclusion: Results from the CAT showed LN2 with relative dose averages in the abutment regions of the diaphragm and MLC tests outside the tolerances, resulting from differences in leaf gap distances. Tolerances of the dose average tests from the CAT may be small enough to detect MLC errors which do not significantly affect patient QA or the routine MLC tests.

  9. Review of Pre-Analytical Errors in Oral Glucose Tolerance Testing in a Tertiary Care Hospital.

    PubMed

    Nanda, Rachita; Patel, Suprava; Sahoo, Sibashish; Mohapatra, Eli

    2018-03-13

    The pre-pre-analytical and pre-analytical phases account for a major share of the errors in a laboratory. This study considered a very common procedure, the oral glucose tolerance test, to identify pre-pre-analytical errors. Quality indicators provide evidence of quality, support accountability and help in the decision making of laboratory personnel. The aim of this research was to evaluate the pre-analytical performance of the oral glucose tolerance test procedure. An observational study was conducted over a period of three months in the phlebotomy and accessioning unit of our laboratory, using a questionnaire that examined the pre-pre-analytical errors through a scoring system. The pre-analytical phase was analyzed for each sample collected as per seven quality indicators. About 25% of the population gave a wrong answer to the question that tested knowledge of patient preparation. QI-1, the appropriateness of the test result, had the most errors. Although QI-5, for sample collection, had a low error rate, it is a very important indicator as any wrongly collected sample can alter the test result. Evaluating the pre-analytical and pre-pre-analytical phases is essential and must be conducted routinely, on a yearly basis, to identify errors, take corrective action and facilitate their gradual introduction into routine practice.

  10. Tolerance assignment in optical design

    NASA Astrophysics Data System (ADS)

    Youngworth, Richard Neil

    2002-09-01

    Tolerance assignment is necessary in any engineering endeavor because fabricated systems---due to the stochastic nature of manufacturing and assembly processes---necessarily deviate from the nominal design. This thesis addresses the problem of optical tolerancing. The work can logically be split into three different components that all play an essential role. The first part addresses the modeling of manufacturing errors in contemporary fabrication and assembly methods. The second component is derived from the design aspect---the development of a cost-based tolerancing procedure. The third part addresses the modeling of image quality in an efficient manner that is conducive to the tolerance assignment process. The purpose of the first component, modeling manufacturing errors, is twofold---to determine the most critical tolerancing parameters and to understand better the effects of fabrication errors. Specifically, mid-spatial-frequency errors, typically introduced in sub-aperture grinding and polishing fabrication processes, are modeled. The implication is that improving process control and understanding better the effects of the errors makes the task of tolerance assignment more manageable. Conventional tolerancing methods do not directly incorporate cost. Consequently, tolerancing approaches tend to focus more on image quality. The goal of the second part of the thesis is to develop cost-based tolerancing procedures that facilitate optimum system fabrication by generating the loosest acceptable tolerances. This work has the potential to impact a wide range of optical designs. The third element, efficient modeling of image quality, is directly related to the cost-based optical tolerancing method. Cost-based tolerancing requires efficient and accurate modeling of the effects of errors on the performance of optical systems. Thus it is important to be able to compute the gradient and the Hessian, with respect to the parameters that need to be toleranced, of the figure of merit that measures the image quality of a system. An algebraic method for computing the gradient and the Hessian is developed using perturbation theory.

  11. Influence of ECG measurement accuracy on ECG diagnostic statements.

    PubMed

    Zywietz, C; Celikag, D; Joseph, G

    1996-01-01

    Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, neither error limits for ECG measurements have been specified nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually result in an increase of false positive or false negative statements since they cause a shift of the working point on the receiver operating characteristics curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and, therefore, usually increases the overlap between discriminative parameters. This results in a flattening of the receiver operating characteristics curve and an increase of false positive and false negative classifications. The method developed has been applied to ECG conduction defect diagnoses by using the proposed International Electrotechnical Commission interval measurement tolerance limits. These limits appear too large because more than 30% false positive atrial conduction defect statements and 10-18% false intraventricular conduction defect statements could be expected due to tolerated measurement errors. To assure long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits obtained on a defined test set.
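
    A minimal numerical sketch of the effect described above, using invented Gaussian distributions for a single discriminative interval measurement in "normal" and "conduction defect" groups: a systematic offset shifts the working point on the ROC curve, while added measurement dispersion broadens both distributions and increases their overlap, so false-positive and false-negative rates rise for a fixed decision threshold. All numbers are illustrative and are not the IEC tolerance limits.

    ```python
    from math import sqrt
    from statistics import NormalDist

    # Invented population distributions of a discriminative measurement (ms).
    normal_group = (95.0, 8.0)      # mean, SD for normals
    defect_group = (125.0, 12.0)    # mean, SD for conduction defects
    threshold = 110.0               # decision threshold: above -> "defect"

    def rates(offset=0.0, extra_sd=0.0):
        """False-positive / false-negative rates for a systematic measurement
        offset and an additional random measurement error dispersion."""
        mu_n, sd_n = normal_group
        mu_d, sd_d = defect_group
        n = NormalDist(mu_n + offset, sqrt(sd_n**2 + extra_sd**2))
        d = NormalDist(mu_d + offset, sqrt(sd_d**2 + extra_sd**2))
        fp = 1.0 - n.cdf(threshold)    # normals called "defect"
        fn = d.cdf(threshold)          # defects called "normal"
        return fp, fn

    print("no measurement error:     FP=%.3f FN=%.3f" % rates())
    print("systematic +6 ms offset:  FP=%.3f FN=%.3f" % rates(offset=6.0))
    print("random error SD 10 ms:    FP=%.3f FN=%.3f" % rates(extra_sd=10.0))
    ```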

  12. LANDSAT-4/5 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Malaret, E.; Bartolucci, L. A.; Lozano, D. F.; Anuta, P. E.; Mcgillem, C. D.

    1984-01-01

    A LANDSAT Thematic Mapper (TM) quality evaluation study was conducted to identify geometric and radiometric sensor errors in the post-launch environment. The study began with the launch of LANDSAT-4. Several error conditions were found, including band-to-band misregistration and detector-to-detector radiometric calibration errors. A similar analysis was made for the LANDSAT-5 Thematic Mapper and compared with the results for LANDSAT-4. Remaining band-to-band misregistration was found to be within tolerances, and detector-to-detector calibration errors were not severe. More coherent noise signals were observed in TM-5 than in TM-4, although the amplitude was generally less. The scan direction differences observed in TM-4 were still evident in TM-5. The largest effect was in Band 4, where nearly a one digital count difference was observed. Resolution estimation was carried out using roads in TM-5 for the primary focal plane bands rather than field edges as in TM-4. Estimates using roads gave better resolution. Thermal IR band calibration studies were conducted and new nonlinear calibration procedures were defined for TM-5. The overall conclusion is that there are no first order errors in TM-5 and any remaining problems are second or third order.

  13. Fault tolerant software modules for SIFT

    NASA Technical Reports Server (NTRS)

    Hecht, M.; Hecht, H.

    1982-01-01

    The implementation of software fault tolerance is investigated for critical modules of the Software Implemented Fault Tolerance (SIFT) operating system to support the computational and reliability requirements of advanced fly-by-wire transport aircraft. Fault-tolerant designs generated for the error reporting and global executive modules are examined. A description of the alternate routines, implementation requirements, and software validation is included.

  14. Center-to-Limb Variation of Deprojection Errors in SDO/HMI Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Falconer, David; Moore, Ronald; Barghouty, Nasser; Tiwari, Sanjiv K.; Khazanov, Igor

    2015-04-01

    For use in investigating the magnetic causes of coronal heating in active regions and in forecasting an active region's productivity of major CME/flare eruptions, we have evaluated various sunspot-active-region magnetic measures (e.g., total magnetic flux, free-magnetic-energy proxies, magnetic twist measures) from HMI Active Region Patches (HARPs) after the HARP has been deprojected to disk center. From a few tens of thousands of HARP vector magnetograms (of a few hundred sunspot active regions) that have been deprojected to disk center, we have determined that the errors in the whole-HARP magnetic measures from deprojection are negligibly small for HARPs deprojected from distances out to 45 heliocentric degrees. For some purposes the errors from deprojection are tolerable out to 60 degrees. We obtained this result by the following process. For each whole-HARP magnetic measure: 1) for each HARP disk passage, normalize the measured values by the measured value for that HARP at central meridian; 2) then for each 0.05 Rs annulus, average the values from all the HARPs in the annulus. This results in an average normalized value as a function of radius for each measure. Assuming no deprojection errors and that, among a large set of HARPs, the measure is as likely to decrease as to increase with HARP distance from disk center, the average of each annulus is expected to be unity, and, for a statistically large sample, the amount of deviation of the average from unity estimates the error from deprojection effects. The deprojection errors arise from 1) errors in the transverse field being deprojected into the vertical field for HARPs observed at large distances from disk center, 2) increasingly larger foreshortening at larger distances from disk center, and 3) possible errors in transverse-field-direction ambiguity resolution. From the compiled set of measured values of whole-HARP magnetic nonpotentiality parameters measured from deprojected HARPs, we have examined the relation between each nonpotentiality parameter and the speed of CMEs from the measured active regions. For several different nonpotentiality parameters we find there is an upper limit to the CME speed, the limit increasing as the value of the parameter increases.
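
    A minimal sketch of the averaging procedure described above: each whole-HARP measure is normalized by its value at central meridian for that disk passage, the normalized values are binned in 0.05 Rs annuli of distance from disk center, and the deviation of each annulus average from unity estimates the deprojection error. The data structures, argument names, and the toy data are assumptions.

    ```python
    import numpy as np

    def deprojection_error_profile(values, distances, cm_values, bin_width=0.05):
        """values: a measure from each deprojected magnetogram; distances: HARP
        distance from disk center (solar radii); cm_values: the same measure for
        the same HARP at central meridian. Returns annulus centers and mean
        normalized values (deviation from 1 ~ deprojection error)."""
        normalized = np.asarray(values) / np.asarray(cm_values)
        r = np.asarray(distances)
        edges = np.arange(0.0, 1.0 + bin_width, bin_width)
        centers, means = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (r >= lo) & (r < hi)
            if in_bin.any():
                centers.append(0.5 * (lo + hi))
                means.append(normalized[in_bin].mean())
        return np.array(centers), np.array(means)

    # Toy example: a measure with a mild, distance-dependent bias of a few percent.
    rng = np.random.default_rng(2)
    r = rng.uniform(0.0, 0.9, 5000)
    cm = rng.uniform(1.0, 5.0, 5000)                 # central-meridian values
    vals = cm * (1.0 + 0.05 * r**2) * rng.lognormal(0.0, 0.1, 5000)
    centers, means = deprojection_error_profile(vals, r, cm)
    print(np.round(means - 1.0, 3))                  # deviation from unity per annulus
    ```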

  15. Advanced Information Processing System - Fault detection and error handling

    NASA Technical Reports Server (NTRS)

    Lala, J. H.

    1985-01-01

    The Advanced Information Processing System (AIPS) is designed to provide a fault tolerant and damage tolerant data processing architecture for a broad range of aerospace vehicles, including tactical and transport aircraft, and manned and autonomous spacecraft. A proof-of-concept (POC) system is now in the detailed design and fabrication phase. This paper gives an overview of a preliminary fault detection and error handling philosophy in AIPS.

  16. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1

  17. Wavefront error budget and optical manufacturing tolerance analysis for 1.8m telescope system

    NASA Astrophysics Data System (ADS)

    Wei, Kai; Zhang, Xuejun; Xian, Hao; Rao, Changhui; Zhang, Yudong

    2010-05-01

    We present the wavefront error budget and optical manufacturing tolerance analysis for the 1.8 m telescope. The error budget accounts for aberrations induced by optical design residual, manufacturing error, mounting effects, and misalignments. The initial error budget has been generated from the top down. There will also be an ongoing effort to track the errors from the bottom up. This will aid in identifying critical areas of concern. The resolution of conflicts will involve a continual process of review and comparison of the top-down and bottom-up approaches, modifying both as needed to meet the top-level requirements in the end. As is well known, the adaptive optical system will correct for some of the telescope system imperfections, but it cannot be assumed that all errors will be corrected. Therefore, two kinds of error budgets will be presented: one is the non-AO (top-down) error budget and the other is the with-AO system error budget. The main advantage of the method is that it describes the final performance of the telescope while giving the optical manufacturer the maximum freedom to define and possibly modify its own manufacturing error budget.
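
    A common way to roll a bottom-up budget into a single top-level number is to root-sum-square the independent contributors and compare the total against the top-level requirement. The record does not state the combination rule actually used, so the RSS below is an assumption, and all the contributor values are invented for illustration.

    ```python
    from math import sqrt

    # Invented wavefront-error contributors (nm RMS), assumed statistically independent.
    contributors = {
        "design residual": 20.0,
        "primary mirror figure": 35.0,
        "secondary mirror figure": 25.0,
        "mounting distortion": 15.0,
        "alignment": 30.0,
    }
    top_level_requirement = 70.0   # nm RMS, invented top-level requirement

    total = sqrt(sum(v ** 2 for v in contributors.values()))   # root-sum-square roll-up
    status = "meets" if total <= top_level_requirement else "exceeds"
    print(f"RSS total: {total:.1f} nm RMS ({status} the {top_level_requirement:.0f} nm requirement)")
    ```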

  18. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    NASA Astrophysics Data System (ADS)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  19. Experimental Demonstration of Fault-Tolerant State Preparation with Superconducting Qubits.

    PubMed

    Takita, Maika; Cross, Andrew W; Córcoles, A D; Chow, Jerry M; Gambetta, Jay M

    2017-11-03

    Robust quantum computation requires encoding delicate quantum information into degrees of freedom that are hard for the environment to change. Quantum encodings have been demonstrated in many physical systems by observing and correcting storage errors, but applications require not just storing information; we must accurately compute even with faulty operations. The theory of fault-tolerant quantum computing illuminates a way forward by providing a foundation and collection of techniques for limiting the spread of errors. Here we implement one of the smallest quantum codes in a five-qubit superconducting transmon device and demonstrate fault-tolerant state preparation. We characterize the resulting code words through quantum process tomography and study the free evolution of the logical observables. Our results are consistent with fault-tolerant state preparation in a protected qubit subspace.

  20. Integrated Data and Control Level Fault Tolerance Techniques for Signal Processing Computer Design

    DTIC Science & Technology

    1990-09-01

    High-speed signal processing is an important application of ... techniques and mathematical approaches will be expanded later to the situation where hardware errors and roundoff and quantization noise affect all ... detect errors equal in number to the degree of g(X), the maximum permitted by the Singleton bound [13]. Real cyclic codes, primarily applicable to

  1. Fault-tolerant simple quantum-bit commitment unbreakable by individual attacks

    NASA Astrophysics Data System (ADS)

    Shimizu, Kaoru; Imoto, Nobuyuki

    2002-03-01

    This paper proposes a simple scheme for quantum-bit commitment that is secure against individual particle attacks, where a sender is unable to use quantum logical operations to manipulate multiparticle entanglement for performing quantum collective and coherent attacks. Our scheme employs a cryptographic quantum communication channel defined in a four-dimensional Hilbert space and can be implemented by using single-photon interference. For an ideal case of zero-loss and noiseless quantum channels, our basic scheme relies only on the physical features of quantum states. Moreover, as long as the bit-flip error rates are sufficiently small (less than a few percent), we can improve our scheme and make it fault tolerant by adopting simple error-correcting codes with a short length. Compared with the well-known Brassard-Crepeau-Jozsa-Langlois 1993 (BCJL93) protocol, our scheme is mathematically far simpler, more efficient in terms of transmitted photon number, and better tolerant of bit-flip errors.

  2. 77 FR 32400 - Fenamidone; Pesticide Tolerance; Technical Amendment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ...., of the preamble, the text correctly listed the tolerance level for the commodity "strawberry" at 0... the tolerance level for "strawberry" at 0.15. This technical amendment corrects that error. III. Why... revising the entry for "Strawberry" in paragraph (d) to read as follows: Sec. 180.579 Fenamidone...

  3. Design and tolerance analysis of a transmission sphere by interferometer model

    NASA Astrophysics Data System (ADS)

    Peng, Wei-Jei; Ho, Cheng-Fong; Lin, Wen-Lung; Yu, Zong-Ru; Huang, Chien-Yao; Hsu, Wei-Yao

    2015-09-01

    The design of a 6-in, f/2.2 transmission sphere for Fizeau interferometry is presented in this paper. To predict the actual performance during the design phase, we build an interferometer model combined with tolerance analysis in Zemax. Evaluating focus imaging is not enough for a double-pass optical system. Thus, we study an interferometer model that includes the system error and the wavefronts reflected from the reference surface and the tested surface. Firstly, we generate a deformation map of the tested surface. Using multiple configurations in Zemax, we can obtain the test wavefront and the reference wavefront reflected from the tested surface and the reference surface of the transmission sphere, respectively. According to the theory of interferometry, we subtract the two wavefronts to acquire the phase of the tested surface. A Zernike polynomial fit is applied to convert the map from phase to sag and to remove piston, tilt and power. The restored map is the same as the original map because no system error exists. Secondly, perturbed tolerances including the fabrication of the lenses and the assembly are considered. A system error occurs because the test and reference beams are no longer perfectly common path. The restored map is inaccurate when the system error is added. Although the system error can be subtracted by calibration, it should still be controlled within a small range to avoid calibration error. Generally the reference wavefront error, including the system error and the irregularity of the reference surface of the 6-in transmission sphere, is specified to be within peak-to-valley (PV) 0.1 λ (λ = 0.6328 μm), which is not easy to achieve. Consequently, it is necessary to predict the value of the system error before manufacture. Finally, a prototype is developed and tested against a reference surface with PV 0.1 λ irregularity.

  4. Under conditions of large geometric miss, tumor control probability can be higher for static gantry intensity-modulated radiation therapy compared to volume-modulated arc therapy for prostate cancer.

    PubMed

    Balderson, Michael; Brown, Derek; Johnson, Patricia; Kirkby, Charles

    2016-01-01

    The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experiences a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from those values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant to geometric miss errors than VMAT. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
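
    A minimal sketch of the kind of Monte Carlo comparison described above (not the authors' actual model, dose distributions, or parameter values): for each simulated patient a radiosensitivity alpha is drawn from a population distribution, the clinical target volume is shifted systematically relative to a simplified 1-D planned dose profile, and a Poisson/linear-quadratic TCP is computed; averaging over patients gives the mean TCP for a given shift. Random shifts are omitted for brevity, and every number below is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def planned_dose(x, ptv_half=12.0, penumbra=4.0, d_rx=78.0):
        """Simplified 1-D profile: full prescription dose inside the PTV,
        Gaussian fall-off outside (positions in mm, dose in Gy)."""
        outside = np.clip(np.abs(x) - ptv_half, 0.0, None)
        return d_rx * np.exp(-0.5 * (outside / penumbra) ** 2)

    def mean_tcp(shift_mm, n_patients=2000, ctv_half=5.0, n_frac=39,
                 clonogens=1e7, alpha_mu=0.15, alpha_sd=0.02, ab_ratio=3.0):
        """Population-mean TCP when the CTV is systematically shifted by shift_mm."""
        x = np.linspace(-ctv_half, ctv_half, 51) + shift_mm    # CTV voxel positions
        d = planned_dose(x) / n_frac                           # dose per fraction per voxel
        alphas = rng.normal(alpha_mu, alpha_sd, n_patients).clip(min=1e-3)
        tcps = []
        for a in alphas:                                       # patient radiosensitivity
            sf = np.exp(-n_frac * (a * d + (a / ab_ratio) * d ** 2))   # LQ survival per voxel
            tcps.append(np.exp(-clonogens * sf.mean()))        # Poisson TCP, uniform density
        return float(np.mean(tcps))

    for shift in (0.0, 5.0, 10.0, 15.0):
        print(f"systematic shift {shift:4.1f} mm: mean TCP = {mean_tcp(shift):.2f}")
    ```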

  5. Modelling exoplanet detection with the LUVOIR Coronagraph: aberration sensitivity and error tolerances

    NASA Astrophysics Data System (ADS)

    Juanola-Parramon, Roser; Zimmerman, Neil; Bolcar, Matthew R.; Rizzo, Maxime; Roberge, Aki

    2018-01-01

    The Coronagraph is a key instrument on the Large UV-Optical-Infrared (LUVOIR) Surveyor mission concept. The Apodized Pupil Lyot Coronagraph (APLC) is one of the baselined mask technologies to enable 1E10 contrast observations in the habitable zones of nearby stars. Both LUVOIR architectures A and B present a segmented aperture as the input pupil, introducing a set of random tip/tilt and piston errors, among others, that greatly affect the performance of the coronagraph instrument by increasing the wavefront errors and hence reducing the instrument sensitivity. In this poster we present the latest results of the simulation of these effects for different working-angle regions and discuss the achieved contrast for exoplanet detection and characterization, including simulated observations under these circumstances, setting boundaries for the tolerance of such errors.

  6. Experimental magic state distillation for fault-tolerant quantum computing.

    PubMed

    Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond

    2011-01-25

    Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.

  7. Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping

    NASA Astrophysics Data System (ADS)

    Piedrafita, Álvaro; Renes, Joseph M.

    2017-12-01

    We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.

  8. Tracking of the magnet system geometry during Wendelstein 7-X construction to achieve the designed magnetic field

    NASA Astrophysics Data System (ADS)

    Andreeva, T.; Bräuer, T.; Bykov, V.; Egorov, K.; Endler, M.; Fellinger, J.; Kißlinger, J.; Köppen, M.; Schauer, F.

    2015-06-01

    Wendelstein 7-X, currently under commissioning at the Max-Planck-Institut für Plasmaphysik in Greifswald, Germany, is a modular advanced stellarator, combining the modular coil concept with optimized properties of the plasma. Most of the envisaged magnetic configurations of the machine are rather sensitive to symmetry breaking perturbations which are the consequence of unavoidable manufacturing and assembly tolerances. This overview describes the successive tracking of the Wendelstein 7-X magnet system geometry starting from the manufacturing of the winding packs up to the modelling of the influence of operation loads. The deviations found were used to calculate the resulting error fields and to compare them with the compensation capacity of the trim coils.

  9. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.

  10. Near-field Testing of the 15-meter Model of the Hoop Column Antenna

    NASA Technical Reports Server (NTRS)

    Hoover, J.; Kefauver, N.; Cencich, T.; Osborn, J.; Osmanski, J.

    1986-01-01

    The technical results from near-field testing of the 15-meter model of the hoop column antenna at the Martin Marietta Denver Aerospace facility are documented. The antenna consists of a deployable central column and a 15 meter hoop, stiffened by cables into a structure with a high-tolerance repeatable surface and offset feed location. The surface has been configured to have four offset parabolic apertures, each about 6 meters in diameter, and is made of gold plated molybdenum wire mesh. Pattern measurements were made with feed systems radiating at frequencies of 7.73, 11.60, 2.27, 2.225, and 4.26 GHz. This report (Volume 1) covers the testing from an overall viewpoint and contains information of generalized interest for testing large antennas. This volume discusses the deployment of the antenna in the Martin facility and the measurements to determine mechanical stability and trueness of the reflector surface, gives the test program outline, and gives a synopsis of antenna electromagnetic performance. Three techniques for measuring surface mechanical tolerances were used (theodolites, metric cameras, and near-field phase), but only the near-field phase approach is included. The report also includes an error analysis. A detailed listing of the antenna patterns is provided for the 2.225 GHz feed in Volume 3 of this report, and for all other feeds in Volume 2.

  11. Sensor Data Distribution With Robustness and Reliability: Toward Distributed Components Model

    NASA Technical Reports Server (NTRS)

    Alena, Richard L.; Lee, Charles

    2005-01-01

    In planetary surface exploration missions, sensor data distribution is required in many aspects, for example, in navigation, scheduling, planning, monitoring, diagnostics, and automation of field tasks. The challenge is to distribute such data in a robust and reliable way so that errors caused by miscalculations and misjudgments based on erroneous input data are minimized. The ad-hoc wireless network on a planetary surface is not constantly connected because of the rough terrain and the lack of permanent installations on the surface. There are disconnected periods during which the computation nodes re-associate with different repeaters or access points until connections are reestablished. This nature requires the sensor data distribution software to be robust and reliable, with the ability to tolerate disconnected periods. This paper presents a distributed components model as a framework to accomplish such tasks. The software is written in Java and utilizes the available Java Message Service schema and the Boss implementation. The results of field experimentation show that the model is very effective in completing the tasks.

  12. Radiation Tolerant Intelligent Memory Stack (RTIMS)

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong; Herath, Jeffrey A.

    2006-01-01

    The Radiation Tolerant Intelligent Memory Stack (RTIMS), suitable for both geostationary and low earth orbit missions, has been developed. The memory module is fully functional and undergoing environmental and radiation characterization. A self-contained flight-like module is expected to be completed in 2006. RTIMS provides reconfigurable circuitry and 2 gigabits of error corrected or 1 gigabit of triple redundant digital memory in a small package. RTIMS utilizes circuit stacking of heterogeneous components and radiation shielding technologies. A reprogrammable field programmable gate array (FPGA), six synchronous dynamic random access memories, linear regulator, and the radiation mitigation circuitries are stacked into a module of 42.7mm x 42.7mm x 13.00mm. Triple module redundancy, current limiting, configuration scrubbing, and single event function interrupt detection are employed to mitigate radiation effects. The mitigation techniques significantly simplify system design. RTIMS is well suited for deployment in real-time data processing, reconfigurable computing, and memory intensive applications.

  13. Undulator performance on PEP storage ring with different optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shenoy, G.K.; Viccaro, P.J.; Alp, E.E.

    1987-12-01

    Our ability to profitably utilize the radiation from undulators proposed for PEP is determined by their performance in the different operating modes and by whether the design tolerance required for acceptable operation of the device can be met with available technology. The purpose of this paper is to provide spectral characteristics for some typical devices calculated using a Monte-Carlo algorithm in which the Lienard-Wiechert potential is integrated over the trajectory of the charged particle along the undulator length. The actual emittance of the particle beam for the various PEP operating modes is included explicitly in the simulations. In addition, we have carried out a partial analysis of the effects of undulator magnetic field errors on the spectral properties in order to estimate the design tolerance requirements necessary for devices which have been proposed for PEP. 6 figs.

  14. Software fault tolerance in computer operating systems

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  15. SU-F-J-55: Feasibility of Supraclavicular Field Treatment by Investigating Variation of Junction Position Between Breast Tangential and Supraclavicular Fields for Deep Inspiration Breath Hold (DIBH) Left Breast Radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, H; Sarkar, V; Paxton, A

    Purpose: To explore the feasibility of supraclavicular field treatment by investigating the variation of junction position between tangential and supraclavicular fields during left breast radiation using the DIBH technique. Methods: Six patients with left breast cancer treated using the DIBH technique were included in this study. The AlignRT system was used to track the patient's breast surface. During daily treatment, when the patient's DIBH reached the preset AlignRT tolerance of ±3mm in all principal directions (vertical, longitudinal, and lateral), the remaining longitudinal offset was recorded. The average with standard deviation and the range of the daily longitudinal offset over the entire treatment course were calculated for all six patients (93 fractions in total). The ranges of average ±1σ and ±2σ were calculated; these represent the longitudinal field edge error at confidence levels of 68% and 95%. Based on these longitudinal errors, the dose at the junction between the breast tangential and supraclavicular fields with variable gap/overlap sizes was calculated as a percentage of prescription (on a representative patient treatment plan). Results: The average longitudinal offset for all patients is 0.16±1.32mm, and the range of longitudinal offsets is −2.6 to 2.6mm. The range of longitudinal field edge error at the 68% confidence level is −1.48 to 1.16mm, and at the 95% confidence level is −2.80 to 2.48mm. With a 5mm and 1mm gap, the junction dose could be as low as 37.5% and 84.9% of the prescription dose; with a 5mm and 1mm overlap, the junction dose could be as high as 169.3% and 117.6%. Conclusion: We observed a longitudinal field edge error at the 95% confidence level of about ±2.5mm, and the junction dose could deviate by as much as 70% (hot or cold) between different DIBHs. However, over the entire course of treatment, the average junction variation for all patients is within 0.2mm. The results of our study show that it is potentially feasible to treat the supraclavicular field with breast tangents.
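
    A minimal sketch of the offset statistics quoted above is given below: from a set of daily longitudinal offsets it computes the mean, the ±1σ and ±2σ ranges and the overall range. The synthetic offsets are placeholders with roughly the quoted mean and spread, not the study data.

```python
import numpy as np

# Placeholder offsets for 93 fractions (assumption, mm)
offsets = np.random.default_rng(1).normal(0.16, 1.32, size=93)

mean, sd = offsets.mean(), offsets.std(ddof=1)
print(f"mean ± 1σ (≈68% confidence): {mean - sd:.2f} to {mean + sd:.2f} mm")
print(f"mean ± 2σ (≈95% confidence): {mean - 2*sd:.2f} to {mean + 2*sd:.2f} mm")
print(f"range: {offsets.min():.2f} to {offsets.max():.2f} mm")
```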

  16. An automated, quantitative, and case-specific evaluation of deformable image registration in computed tomography images

    NASA Astrophysics Data System (ADS)

    Kierkels, R. G. J.; den Otter, L. A.; Korevaar, E. W.; Langendijk, J. A.; van der Schaaf, A.; Knopf, A. C.; Sijtsema, N. M.

    2018-02-01

    A prerequisite for adaptive dose-tracking in radiotherapy is the assessment of the deformable image registration (DIR) quality. In this work, various metrics that quantify DIR uncertainties are investigated using realistic deformation fields of 26 head and neck and 12 lung cancer patients. Metrics related to the physiological feasibility of the deformation (the Jacobian determinant, harmonic energy (HE), and octahedral shear strain (OSS)) and to its numerical robustness (the inverse consistency error (ICE), transitivity error (TE), and distance discordance metric (DDM)) were investigated. The deformable registrations were performed using a B-spline transformation model. The DIR error metrics were log-transformed and correlated (Pearson) against the log-transformed ground-truth error on a voxel level. Correlations of r ⩾ 0.5 were found for the DDM and HE. Given a DIR tolerance threshold of 2.0 mm and a negative predictive value of 0.90, the DDM and HE thresholds were 0.49 mm and 0.014, respectively. In conclusion, the log-transformed DDM and HE can be used to identify voxels at risk for large DIR errors with a large negative predictive value. The HE and/or DDM can therefore be used to perform automated quality assurance of each CT-based DIR for head and neck and lung cancer patients.
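
    One of the physiological-feasibility metrics named above, the Jacobian determinant of the deformation vector field, can be computed with a few lines of NumPy. This is a generic sketch (the array layout and the synthetic stretch field are assumptions), not the authors' pipeline.

```python
import numpy as np

# dvf has shape (nz, ny, nx, 3) holding displacements in voxel units (assumption).
def jacobian_determinant(dvf):
    grads = [np.gradient(dvf[..., c]) for c in range(3)]  # d(u_c)/d(z, y, x)
    nz, ny, nx = dvf.shape[:3]
    J = np.zeros((nz, ny, nx, 3, 3))
    for c in range(3):          # displacement component
        for d in range(3):      # spatial derivative axis
            J[..., c, d] = grads[c][d]
    # Total transform is identity + displacement gradient
    J += np.eye(3)
    return np.linalg.det(J)

# Simple 10% stretch along the first axis as a sanity check
dvf = np.zeros((20, 20, 20, 3))
dvf[..., 0] = 0.1 * np.arange(20)[:, None, None]
detj = jacobian_determinant(dvf)
print(detj.mean())  # ≈ 1.1 for this stretch
```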

  17. Fault-tolerance thresholds for the surface code with fabrication errors

    NASA Astrophysics Data System (ADS)

    Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.

    2017-10-01

    The construction of topological error correction codes requires the ability to fabricate a lattice of physical qubits embedded on a manifold with a nontrivial topology such that the quantum information is encoded in the global degrees of freedom (i.e., the topology) of the manifold. However, the manufacturing of large-scale topological devices will undoubtedly suffer from fabrication errors—permanent faulty components such as missing physical qubits or failed entangling gates—introducing permanent defects into the topology of the lattice and hence significantly reducing the distance of the code and the quality of the encoded logical qubits. In this work we investigate how fabrication errors affect the performance of topological codes, using the surface code as the test bed. A known approach to mitigate defective lattices involves the use of primitive swap gates in a long sequence of syndrome extraction circuits. Instead, we show that in the presence of fabrication errors the syndrome can be determined using the supercheck operator approach and the outcome of the defective gauge stabilizer generators without any additional computational overhead or use of swap gates. We report numerical fault-tolerance thresholds in the presence of both qubit fabrication and gate fabrication errors using a circuit-based noise model and the minimum-weight perfect-matching decoder. Our numerical analysis is most applicable to two-dimensional chip-based technologies, but the techniques presented here can be readily extended to other topological architectures. We find that in the presence of 8% qubit fabrication errors, the surface code can still tolerate a computational error rate of up to 0.1%.

  18. A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R

    The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. MPI does not provide standardized fault-tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault-tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault-tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiCostanzo, D; Ayan, A; Woollard, J

    Purpose: To automate the daily verification of each patient's treatment by utilizing the trajectory log files (TLs) written by the Varian TrueBeam linear accelerator, while reducing the number of false positives, including jaw and gantry positioning errors, displayed in the Treatment History tab of Varian's Chart QA module. Methods: Small deviations in treatment parameters are difficult to detect in weekly chart checks, but may be significant in reducing delivery errors, and would be critical if detected daily. Software was developed in house to read TLs. Multiple functions were implemented within the software that allow it to operate via a GUI to analyze TLs, or as a script to run on a regular basis. In order to determine tolerance levels for the scripted analysis, 15,241 TLs from seven TrueBeams were analyzed. The maximum error of each axis for each TL was written to a CSV file and statistically analyzed to determine the tolerance for each axis accessible in the TLs to flag for manual review. The software/scripts developed were tested by varying the tolerance values to ensure veracity. After tolerances were determined, multiple weeks of manual chart checks were performed simultaneously with the automated analysis to ensure validity. Results: The tolerance values for the major axes were determined to be 0.025 degrees for the collimator, 1.0 degree for the gantry, 0.002cm for the y-jaws, 0.01cm for the x-jaws, and 0.5MU for the MU. The automated verification of treatment parameters has been in clinical use for 4 months. During that time, no errors in machine delivery of the patient treatments were found. Conclusion: The process detailed here is a viable and effective alternative to manually checking treatment parameters during weekly chart checks.
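
    The scripted tolerance check described above reduces to comparing each axis's maximum deviation against the values listed in the Results; a hedged sketch is given below. The trajectory-log parsing itself is not shown (the per-axis maxima are assumed to have been extracted already by whatever reader is used), and the axis names are illustrative.

```python
# Tolerances quoted in the abstract, keyed by illustrative axis names.
TOLERANCES = {
    "collimator_deg": 0.025,
    "gantry_deg": 1.0,
    "jaw_y_cm": 0.002,
    "jaw_x_cm": 0.01,
    "mu": 0.5,
}

def flag_for_review(max_errors: dict) -> list:
    """Return the axes whose maximum deviation exceeds its tolerance."""
    return [axis for axis, tol in TOLERANCES.items()
            if abs(max_errors.get(axis, 0.0)) > tol]

# Hypothetical per-fraction maxima from one trajectory log
example = {"collimator_deg": 0.01, "gantry_deg": 1.4, "jaw_y_cm": 0.001,
           "jaw_x_cm": 0.004, "mu": 0.2}
print(flag_for_review(example))   # ['gantry_deg']
```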

  20. Extreme Universe Space Observatory (EUSO) Optics Module

    NASA Technical Reports Server (NTRS)

    Young, Roy; Christl, Mark

    2008-01-01

    A demonstration part will be manufactured in Japan on one of the large Toshiba machines with a diameter of 2.5 meters. This will be a flat PMMA disk that is cut between 0.5 and 1.25 meters radius. The cut should demonstrate manufacturing the most difficult parts of the 2.5 meter Fresnel pattern and the blazed grating on the diffractive surface. Optical simulations, validated with the subscale prototype, will be used to determine the limits on manufacturing errors (tolerances) that will result in optics that meet EUSO's requirements. There will be limits on surface roughness (or errors at high spatial frequency); radial and azimuthal slope errors (at lower spatial frequencies); and plunge cut depth errors in the blazed grating. The demonstration part will be measured to determine whether it was made within the allowable tolerances.

  1. Fault tolerance in an inner-outer solver: A GVR-enabled case study

    DOE PAGES

    Zhang, Ziming; Chien, Andrew A.; Teranishi, Keita

    2015-04-18

    Resilience is a major challenge for large-scale systems. It is particularly important for iterative linear solvers, since they take much of the time of many scientific applications. We show that single bit flip errors in the Flexible GMRES iterative linear solver can lead to high computational overhead or even failure to converge to the right answer. Informed by these results, we design and evaluate several strategies for fault tolerance in both inner and outer solvers appropriate across a range of error rates. We implement them, extending Trilinos' solver library with the Global View Resilience (GVR) programming model, which provides multi-stream snapshots and multi-version data structures with portable and rich error checking/recovery. Lastly, experimental results validate correct execution with low performance overhead under varied error conditions.

  2. Accurate recapture identification for genetic mark–recapture studies with error-tolerant likelihood-based match calling and sample clustering

    USGS Publications Warehouse

    Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.

    2016-01-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.

  3. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure.

    PubMed

    Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz

    2018-01-01

    Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant to uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of tolerance for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants performed an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink in which the word was written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  4. Improved acid tolerance of Lactobacillus pentosus by error-prone whole genome amplification.

    PubMed

    Ye, Lidan; Zhao, Hua; Li, Zhi; Wu, Jin Chuan

    2013-05-01

    Acid tolerance of Lactobacillus pentosus ATCC 8041 was improved by error-prone amplification of its genomic DNA using random primers and Taq DNA polymerase. The resulting amplification products were transferred into wild-type L. pentosus by electroporation and the transformants were screened for growth on low-pH agar plates. After only one round of mutation, one mutant (MT3) was identified that was able to completely consume 20 g/L of glucose to produce lactic acid at a yield of 95% in 1L MRS medium at pH 3.8 within 36 h, whereas no growth or lactic acid production was observed for the wild-type strain under the same conditions. The acid tolerance of mutant MT3 remained genetically stable for at least 25 subcultures. Therefore, the error-prone whole genome amplification technique is a very powerful tool for improving phenotypes of this lactic acid bacterium and may also be applicable for other microorganisms. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. SU-F-J-25: Position Monitoring for Intracranial SRS Using BrainLAB ExacTrac Snap Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, S; McCaw, T; Huq, M

    2016-06-15

    Purpose: To determine the accuracy of position monitoring with BrainLAB ExacTrac snap verification following couch rotations during intracranial SRS. Methods: A CT scan of an anthropomorphic head phantom was acquired using 1.25mm slices. The isocenter was positioned near the centroid of the frontal lobe. The head phantom was initially aligned on the treatment couch using cone-beam CT, then repositioned using ExacTrac x-ray verification with residual errors less than 0.2mm and 0.2°. Snap verification was performed over the full range of couch angles in 15° increments with known positioning offsets of 0–3mm applied to the phantom along each axis. At each couch angle, the smallest tolerance was determined for which no positioning deviation was detected. Results: For couch angles 30°–60° from the center position, where the longitudinal axis of the phantom is approximately aligned with the beam axis of one x-ray tube, snap verification consistently detected positioning errors exceeding the maximum 8mm tolerance. Defining localization error as the difference between the known offset and the minimum tolerance for which no deviation was detected, the RMS error is mostly less than 1mm outside of couch angles 30°–60° from the central couch position. Given separate measurements of patient position from the two imagers, whether to proceed with treatment can be determined by the criterion of a reading within tolerance from just one (OR criterion) or both (AND criterion) imagers. Using a positioning tolerance of 1.5mm, snap verification has sensitivity and specificity of 94% and 75%, respectively, with the AND criterion, and 67% and 93%, respectively, with the OR criterion. If readings exceeding the maximum tolerance are excluded, the sensitivity and specificity are 88% and 86%, respectively, with the AND criterion. Conclusion: With a positioning tolerance of 1.5mm, ExacTrac snap verification can be used during intracranial SRS with sensitivity and specificity between 85% and 90%.
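
    The AND/OR decision criteria described above can be made concrete with a small simulation: flag a deviation if either imager reads out of tolerance (AND criterion for proceeding) or only if both do (OR criterion), then tabulate sensitivity and specificity. The measurement-noise model and offset distribution below are assumptions for illustration only, so the numbers will not reproduce the study's.

```python
import numpy as np

rng = np.random.default_rng(2)
tol = 1.5          # mm positioning tolerance
n = 10000

# Hypothetical measurement model: each imager reports the true offset
# plus independent noise (values are assumptions, not the study's data).
true_offset = np.where(rng.random(n) < 0.5, 0.0, 2.5)   # half the cases truly off
m1 = true_offset + rng.normal(0, 0.7, n)
m2 = true_offset + rng.normal(0, 0.7, n)

exceed1, exceed2 = np.abs(m1) > tol, np.abs(m2) > tol
truly_off = true_offset > tol

for name, flagged in [("AND criterion (proceed only if both within tolerance)",
                       exceed1 | exceed2),
                      ("OR criterion (proceed if either within tolerance)",
                       exceed1 & exceed2)]:
    sens = np.mean(flagged[truly_off])
    spec = np.mean(~flagged[~truly_off])
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```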

  6. Nonadiabatic conditional geometric phase shift with NMR.

    PubMed

    Xiang-Bin, W; Keiji, M

    2001-08-27

    A conditional geometric phase shift gate, which is fault tolerant to certain types of errors due to its geometric nature, was realized recently via nuclear magnetic resonance (NMR) under adiabatic conditions. However, in quantum computation, everything must be completed within the decoherence time. The adiabatic condition makes any fast conditional Berry phase (cyclic adiabatic geometric phase) shift gate impossible. Here we show that by using a newly designed sequence of simple operations with an additional vertical magnetic field, the conditional geometric phase shift gate can be run nonadiabatically. Therefore geometric quantum computation can be done at the same rate as usual quantum computation.

  7. A new approach to implement absorbing boundary condition in biomolecular electrostatics.

    PubMed

    Goni, Md Osman

    2013-01-01

    This paper discusses a novel approach to employing the absorbing boundary condition in conjunction with the finite-element method (FEM) in biomolecular electrostatics. Bayliss-Turkel absorbing boundary operators have been applied to electromagnetic scattering problems by a few researchers; however, in the area of biomolecular electrostatics, this boundary condition has not yet been investigated. The objective of this paper is twofold: first, to solve the nonlinear Poisson-Boltzmann equation using Newton's method, and second, to find an efficient and acceptable solution with a minimum number of unknowns. In this work, a Galerkin finite-element formulation is used along with a Bayliss-Turkel absorbing boundary operator that explicitly accounts for the open field problem by mapping the Sommerfeld radiation condition from the far field to the near field. While the Bayliss-Turkel condition works well when the artificial boundary is far from the scatterer, an acceptable error tolerance can be achieved with the second-order operator. Numerical results on a test case with a simple sphere show that the treatment is able to reach the same level of accuracy achieved by the analytical method while using a lower grid density. The Bayliss-Turkel absorbing boundary condition (BTABC) combined with the FEM converges to the exact solution of scattering problems to within the discretization error.

  8. Tolerance analysis of null lenses using an end-use system performance criterion

    NASA Astrophysics Data System (ADS)

    Rodgers, J. Michael

    2000-07-01

    An effective method of assigning tolerances to a null lens is to determine the effects of null-lens fabrication and alignment errors on the end-use system itself, not simply the null lens. This paper describes a method to assign null- lens tolerances based on their effect on any performance parameter of the end-use system.

  9. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  10. Furfural-tolerant Zymomonas mobilis derived from error-prone PCR-based whole genome shuffling and their tolerant mechanism.

    PubMed

    Huang, Suzhen; Xue, Tingli; Wang, Zhiquan; Ma, Yuanyuan; He, Xueting; Hong, Jiefang; Zou, Shaolan; Song, Hao; Zhang, Minhua

    2018-04-01

    A furfural-tolerant strain is essential for the fermentative production of biofuels or chemicals from lignocellulosic biomass. In this study, Zymomonas mobilis CP4 was for the first time subjected to error-prone PCR-based whole genome shuffling, and the resulting mutants F211 and F27, which could tolerate 3 g/L furfural, were obtained. Under various furfural stress conditions, the mutant F211 grew rapidly once the furfural concentration dropped to 1 g/L. The two mutants also showed higher tolerance to high glucose concentrations than the control strain CP4. Genome resequencing revealed that F211 and F27 carried 12 and 13 single-nucleotide polymorphisms, respectively. Activity assays demonstrated that the activity of NADH-dependent furfural reductase increased under furfural stress in both mutant F211 and CP4, and that the activity peaked earlier in the mutant than in the control. The furfural level in the culture of F211 also decreased more rapidly. These results indicate that the increased furfural tolerance of the mutants may result from enhanced NADH-dependent furfural reductase activity during the early log phase, which could lead to an accelerated furfural detoxification process in the mutants. In summary, we obtained Z. mobilis mutants with enhanced tolerance to furfural and to high glucose concentrations, and provided valuable clues to the mechanism of furfural tolerance and to strain development.

  11. Achievable flatness in a large microwave power transmitting antenna

    NASA Technical Reports Server (NTRS)

    Ried, R. C.

    1980-01-01

    A dual reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures, and distortions were calculated for a variety of operational SPS conditions, along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy, and the resulting line-of-sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials measurement, manufacturing, assembly, and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to the rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variation in material properties, particularly the part-to-part variation in coefficient of thermal expansion, is more significant than the actual value.

  12. Markov chain algorithms: a template for building future robust low-power systems

    PubMed Central

    Deka, Biplab; Birklykke, Alex A.; Duwe, Henry; Mansinghka, Vikash K.; Kumar, Rakesh

    2014-01-01

    Although computational systems are looking towards post CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs) as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems. PMID:24842030
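
    As a toy version of the paper's argument, the sketch below runs a random-walk (Markov-chain) solver on a tiny satisfiability instance while randomly corrupting a fraction of its transitions; because a corrupted step simply moves the chain to another reachable state, the walk still finds a satisfying assignment. The instance, error rate and step budget are illustrative assumptions, not the paper's benchmarks.

```python
import random

clauses = [(1, -2, 3), (-1, 2, 3), (1, 2, -3), (-1, -2, -3)]   # small satisfiable 3-SAT instance
n_vars = 3

def unsatisfied(assign):
    """Clauses not satisfied by the current assignment."""
    return [c for c in clauses
            if not any((lit > 0) == assign[abs(lit)] for lit in c)]

def faulty_random_walk(error_rate=0.2, max_steps=10000, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for step in range(max_steps):
        unsat = unsatisfied(assign)
        if not unsat:
            return assign, step
        var = abs(rng.choice(rng.choice(unsat)))   # intended transition
        if rng.random() < error_rate:              # injected transition error:
            var = rng.randrange(1, n_vars + 1)     # flip an arbitrary variable instead
        assign[var] = not assign[var]
    return None, max_steps

solution, steps = faulty_random_walk()
print(f"satisfying assignment found after {steps} steps: {solution}")
```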

  13. Coherent Oscillations inside a Quantum Manifold Stabilized by Dissipation

    NASA Astrophysics Data System (ADS)

    Touzard, S.; Grimm, A.; Leghtas, Z.; Mundhada, S. O.; Reinhold, P.; Axline, C.; Reagor, M.; Chou, K.; Blumoff, J.; Sliwa, K. M.; Shankar, S.; Frunzio, L.; Schoelkopf, R. J.; Mirrahimi, M.; Devoret, M. H.

    2018-04-01

    Manipulating the state of a logical quantum bit (qubit) usually comes at the expense of exposing it to decoherence. Fault-tolerant quantum computing tackles this problem by manipulating quantum information within a stable manifold of a larger Hilbert space, whose symmetries restrict the number of independent errors. The remaining errors do not affect the quantum computation and are correctable after the fact. Here we implement the autonomous stabilization of an encoding manifold spanned by Schrödinger cat states in a superconducting cavity. We show Zeno-driven coherent oscillations between these states analogous to the Rabi rotation of a qubit protected against phase flips. Such gates are compatible with quantum error correction and hence are crucial for fault-tolerant logical qubits.

  14. Zero tolerance prescribing: a strategy to reduce prescribing errors on the paediatric intensive care unit.

    PubMed

    Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark

    2012-11-01

    To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95% confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6% clinical, 44% non-clinical and 30.4% infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95% CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5% (95% CI 40.8-48.0%). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40% reduction within 5 years.
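
    The rates quoted above are expressed per 1,000 occupied bed days; the few lines below show the arithmetic behind the rate and the absolute risk reduction. The error counts and bed-day denominator are assumptions chosen only to match the quoted rates, not the study's raw data.

```python
# Back-of-envelope check of the figures quoted above.
def rate_per_1000_obd(n_errors, obd):
    return 1000.0 * n_errors / obd

obd = 1000                       # hypothetical denominator
baseline = rate_per_1000_obd(892, obd)
post = rate_per_1000_obd(447, obd)
arr_percent = (baseline - post) / 1000.0 * 100.0
print(f"baseline {baseline:.0f}, post-intervention {post:.0f} per 1,000 OBDs")
print(f"absolute risk reduction ≈ {arr_percent:.1f} percentage points")   # ≈ 44.5
```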

  15. Study of fault tolerant software technology for dynamic systems

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Zacharias, G. L.

    1985-01-01

    The major aim of this study is to investigate the feasibility of using systems-based failure detection isolation and compensation (FDIC) techniques in building fault-tolerant software and extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. Finally, the feasibility of using fault-tolerant software in flight software is investigated. In particular, possible system and version instabilities, and functional performance degradation that may occur in N-Version programming applications to flight software are illustrated. Finally, a comparative analysis of N-Version and recovery block techniques in the context of generic blocks in flight software is presented.

  16. Design and implementation of a system for treating paediatric patients with stereotactically-guided conformal radiotherapy.

    PubMed

    Adams, E J; Suter, B L; Warrington, A P; Black, P; Saran, F; Brada, M

    2001-09-01

    Stereotactically-guided conformal radiotherapy (SCRT) allows the delivery of highly conformal dose distributions to localised brain tumours. This is of particular importance for children, whose often excellent long-term prognosis should be accompanied by low toxicity. The commercial immobilisation system in use at our hospital for adults was felt to be too heavy for children, and precluded the use of anaesthesia, which is sometimes required for paediatric patients. This paper therefore describes the design and implementation of a system for treating children with SCRT. This system needed to be well tolerated by patients, with good access for treating typical childhood malignancies. A lightweight frame was developed for immobilisation, with a shell-based alternative for patients requiring general anaesthetic. Procedures were set up to introduce the patients to the frame system in order to maximise patient co-operation and comfort. Film measurements were made to assess the impact of the frame on transmission and surface dose. The reproducibility of the systems was assessed using electronic portal images. Both frame and shell systems are in clinical use. The frame weighs 0.6 kg and is well tolerated. It has a transmission of 92-96%, and fields which pass through it deliver surface doses of 58-82% of the dose at d(max), compared to 18% when no frame is present. However, the frame is constructed to maximise the availability of unobstructed beam directions. Reproducibility measurements for the frame showed a mean random error of 1.0+/-0.2mm in two dimensions (2D) and 1.4+/-0.7 mm in 3D. The mean systematic error in 3D was 2.2mm, and 90% of all overall 3D errors were less than 3.4mm. For the shell system, the mean 2D random error was 1.5+/-0.2mm. Two well-tolerated immobilisation devices have been developed for fractionated SCRT treatment of paediatric patients. A lightweight frame system gives a wide range of possible unobstructed beam directions, although beams that intersect the frame are not precluded, provided that output corrections are applied. A shell system allows the use of general anaesthesia. Both systems give reproducible immobilisation to complement the high-precision treatment delivery.

  17. Alcohol-impaired speed and accuracy of cognitive functions: a review of acute tolerance and recovery of cognitive performance.

    PubMed

    Schweizer, Tom A; Vogel-Sprott, Muriel

    2008-06-01

    Much research on the effects of a dose of alcohol has shown that motor skills recover from impairment as blood alcohol concentrations (BACs) decline and that acute tolerance to alcohol impairment can develop during the course of the dose. Comparable alcohol research on cognitive performance is sparse but has increased with the development of computerized cognitive tasks. This article reviews the results of recent research using these tasks to test the development of acute tolerance in cognitive performance and recovery from impairment during declining BACs. Results show that speed and accuracy do not necessarily agree in detecting cognitive impairment, and this mismatch most frequently occurs during declining BACs. Speed of cognitive performance usually recovers from impairment to drug-free levels during declining BACs, whereas alcohol-increased errors fail to diminish. As a consequence, speed of cognitive processing tends to develop acute tolerance, but no such tendency is shown in accuracy. This "acute protracted error" phenomenon has not previously been documented. The findings pose a challenge to the theory of alcohol tolerance on the basis of physiological adaptation and raise new research questions concerning the independence of speed and accuracy of cognitive processes, as well as hemispheric lateralization of alcohol effects. The occurrence of alcohol-induced protracted cognitive errors long after speed returned to normal is identified as a potential threat to the safety of social drinkers that requires urgent investigation.

  18. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    PubMed

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

    Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced, as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated-)errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
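
    For reference, the sketch below computes the basic Bland-Altman quantities (bias and 95% limits of agreement) from paired measurements on synthetic data. The paper's contributions (correlated-errors-in-variables regression, tolerance and predictive intervals) go beyond this elementary construction, which is shown only as a starting point.

```python
import numpy as np

# Synthetic paired measurements: two methods with noise, one with a fixed offset.
rng = np.random.default_rng(3)
truth = rng.uniform(10, 100, 50)
method_x = truth + rng.normal(0, 2.0, 50)
method_y = truth + 1.0 + rng.normal(0, 2.0, 50)   # method Y reads 1 unit high

mean_xy = (method_x + method_y) / 2               # x-axis of the Bland-Altman plot
diff = method_y - method_x                        # y-axis of the Bland-Altman plot
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"bias = {bias:.2f}, 95% limits of agreement = {loa[0]:.2f} to {loa[1]:.2f}")
```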

  19. Teaching Tolerance Magazine, 2003.

    ERIC Educational Resources Information Center

    Carnes, Jim, Ed.

    2003-01-01

    This magazine provides teachers with classroom learning materials to help children learn to be tolerant with others. Articles in the magazine are: "A Standard to Sustain" (Mary M. Harrison); "Let's Just Play" (Janet Schmidt); "Who's Helen Keller?" (Ruth Shagoury Hubbard); "Margins of Error" (Joe Parsons);…

  20. The ground-truth problem for satellite estimates of rain rate

    NASA Technical Reports Server (NTRS)

    North, Gerald R.; Valdes, Juan B.; Eunho, HA; Shen, Samuel S. P.

    1994-01-01

    In this paper a scheme is proposed for comparing point raingage measurements with contemporaneous single-field-of-view (FOV) rain-rate estimates from a satellite remote sensor such as a microwave radiometer. Even in the ideal case the measurements differ, because one is a point value and the other is an area average over the field of view. In addition, the point gage will be located randomly inside the field of view on different overpasses. A space-time spectral formalism is combined with a simple stochastic rain field to find the mean-square deviation between the two systems. It is found that by combining about 60 visits of the satellite to the ground-truth site, the expected error can be reduced to about 10% of the standard deviation of the fluctuations of the systems alone. This seems to be a useful level of tolerance in terms of isolating and evaluating typical biases that might be contaminating retrieval algorithms.
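
    The averaging argument above has a simple independent-sample limit: if each overpass yields a gauge-minus-satellite difference with standard deviation σ, the error of the mean over N visits shrinks roughly as σ/√N. The few lines below show only this idealized scaling; the paper's spectral formalism is what handles the space-time correlation of real rain fields.

```python
import math

sigma = 1.0   # standard deviation of a single-overpass difference (arbitrary units)
for n_visits in (10, 30, 60, 100):
    print(f"N = {n_visits:3d}: error of mean ≈ {sigma / math.sqrt(n_visits):.3f} * sigma")
```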

  1. Novel spot size converter for coupling standard single mode fibers to SOI waveguides

    NASA Astrophysics Data System (ADS)

    Sisto, Marco Michele; Fisette, Bruno; Paultre, Jacques-Edmond; Paquet, Alex; Desroches, Yan

    2016-03-01

    We have designed and numerically simulated a novel spot size converter for coupling standard single mode fibers with 10.4μm mode field diameter to 500nm × 220nm SOI waveguides. Simulations based on the eigenmode expansion method show a coupling loss of 0.4dB at 1550nm for the TE mode at perfect alignment. The alignment tolerance on the plane normal to the fiber axis is evaluated at +/-2.2μm for <=1dB excess loss, which is comparable to the alignment tolerance between two butt-coupled standard single mode fibers. The converter is based on a cross-like arrangement of SiOxNy waveguides immersed in a 12μm-thick SiO2 cladding region deposited on top of the SOI chip. The waveguides are designed to collectively support a single degenerate mode for TE and TM polarizations. This guided mode features a large overlap to the LP01 mode of standard telecom fibers. Along the spot size converter length (450μm), the mode is first gradually confined in a single SiOxNy waveguide by tapering its width. Then, the mode is adiabatically coupled to a SOI waveguide underneath the structure through a SOI inverted taper. The shapes of SiOxNy and SOI tapers are optimized to minimize coupling loss and structure length, and to ensure adiabatic mode evolution along the structure, thus improving the design robustness to fabrication process errors. A tolerance analysis based on conservative microfabrication capabilities suggests that coupling loss penalty from fabrication errors can be maintained below 0.3dB. The proposed spot size converter is fully compliant to industry standard microfabrication processes available at INO.

  2. Analog-digital simulation of transient-induced logic errors and upset susceptibility of an advanced control system

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Choi, G.; Iyer, R. K.

    1990-01-01

    A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.

  3. Simulation of Dose to Surrounding Normal Structures in Tangential Breast Radiotherapy Due to Setup Error

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhakar, Ramachandran; Department of Nuclear Medicine, All India Institute of Medical Sciences, New Delhi; Department of Radiology, All India Institute of Medical Sciences, New Delhi

    Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures has been studied, and the maximum tolerable setup error giving minimal complications to the surrounding critical structures and an acceptable tumor control probability is determined. Twelve patients were selected for this study after breast conservation surgery; 8 patients were right-sided and 4 were left-sided breast. Tangential fields were placed on the 3-dimensional computed tomography (3D-CT) dataset using an isocentric technique, and the doses to the PTV, ipsilateral lung (IL), contralateral lung (CLL), contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted by 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for the PTV according to ICRU 50 guidelines: the mean doses to the PTV, IL, CLL, heart, CLB, and liver; the percentage of lung volume that received a dose of 20 Gy or more (V20); the percentage of heart volume that received a dose of 30 Gy or more (V30); and the volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of the different isocenter shifts in all 3 directions showed that isocenter shifts along the posterior direction had the most significant effect on the dose to the heart, IL, CLL, and CLB, followed by shifts in the lateral direction. The setup error in the isocenter should be kept strictly below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues.

  4. Practical considerations for a second-order directional hearing aid microphone system

    NASA Astrophysics Data System (ADS)

    Thompson, Stephen C.

    2003-04-01

    First-order directional microphone systems for hearing aids have been available for several years. Such a system uses two microphones and has a theoretical maximum free-field directivity index (DI) of 6.0 dB. A second-order microphone system using three microphones could provide a theoretical increase in free-field DI to 9.5 dB. These theoretical maximum DI values assume that the microphones have exactly matched sensitivities at all frequencies of interest. In practice, the individual microphones in the hearing aid always have slightly different sensitivities. For the small microphone separation necessary to fit in a hearing aid, these sensitivity matching errors degrade the directivity from the theoretical values, especially at low frequencies. This paper shows that, for first-order systems the directivity degradation due to sensitivity errors is relatively small. However, for second-order systems with practical microphone sensitivity matching specifications, the directivity degradation below 1 kHz is not tolerable. A hybrid order directive system is proposed that uses first-order processing at low frequencies and second-order directive processing at higher frequencies. This hybrid system is suggested as an alternative that could provide improved directivity index in the frequency regions that are important to speech intelligibility.

  5. Assessing tropical rainforest growth traits: Data - Model fusion in the Congo basin and beyond.

    NASA Astrophysics Data System (ADS)

    Pietsch, S.

    2016-12-01

    Virgin forest ecosystems represent the key reference level for natural tree growth dynamics. The mosaic cycle concept describes such dynamics as local disequilibria driven by patch-level succession cycles of breakdown, regeneration, juvenescence, and old growth. These cycles, however, may involve different traits of light-demanding and shade-tolerant species assemblies. In this work a data-model fusion concept is introduced to assess the differences in growth dynamics of the mosaic cycle of the Western Congolian Lowland Rainforest ecosystem. Field data from 34 forest patches located in an ice age forest refuge, recently pinpointed on the ground and still devoid of direct human impact, constitute the database. A 3D error assessment procedure comparing BGC model simulations against the 34 patches revealed two different growth dynamics, consistent with observed growth traits of pioneer and late-succession species assemblies of the Western Congolian Lowland rainforest. An application of the same procedure to Central American Pacific rainforests confirms the strength of the 3D error field data-model fusion concept for assessing different growth traits of the mosaic cycle of natural forest dynamics.

  6. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs other than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.

  7. KAPAO Prime: Design and Simulation

    NASA Astrophysics Data System (ADS)

    McGonigle, Lorcan

    2012-11-01

    KAPAO (KAPAO A Pomona Adaptive Optics instrument) is a dual-band natural guide star adaptive optics system designed to measure and remove atmospheric aberration from Pomona College's telescope atop Table Mountain. We present here the final optical system, referred to as Prime, designed in Zemax Optical Design Software. Prime is characterized by diffraction limited imaging over the full 73'' field of view of our Andor Camera at f/33 as well as for our NIR Xenics camera at f/50. In Zemax, tolerances of 1% on OAP focal length and off-axis distance were shown to contribute an additional 4 nm of wavefront error (98% confidence) over the field of view of the Andor camera; the contribution from surface irregularity was determined analytically to be 40 nm for OAPs specified to λ/10 surface irregularity. Modeling of the temperature deformation of the breadboard in SolidWorks revealed 70 micron contractions along the edges of the board for a decrease of 75 °F; when applied to OAP positions, such displacements from the optimal layout are predicted to contribute an additional 20 nanometers of wavefront error. Flexure modeling of the breadboard due to gravity is on-going. We hope to begin alignment and testing of ``Prime'' in Q1 2013.
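
    For a rough sense of how the quoted contributions stack up, the short Python sketch below combines them as a root-sum-square budget; the assumption that the three terms are independent and add in quadrature is ours, not stated in the record.

        import math

        # The record quotes three wavefront-error contributions; whether and how they
        # combine is not stated, so adding them in quadrature (RSS) is an assumption.
        contributions_nm = {
            "OAP focal length / off-axis distance tolerances (1%)": 4.0,
            "OAP surface irregularity (lambda/10 specification)": 40.0,
            "thermal contraction of the breadboard (75 deg F drop)": 20.0,
        }

        rss_nm = math.sqrt(sum(v ** 2 for v in contributions_nm.values()))
        for name, value in contributions_nm.items():
            print(f"{name:55s} {value:5.1f} nm")
        print(f"{'combined (RSS, assuming independent terms)':55s} {rss_nm:5.1f} nm")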

  8. Periodic Application of Concurrent Error Detection in Processor Array Architectures. PhD. Thesis -

    NASA Technical Reports Server (NTRS)

    Chen, Paul Peichuan

    1993-01-01

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.

  9. Linewidth-tolerant 10-Gbit/s 16-QAM transmission using a pilot-carrier based phase-noise cancelling technique.

    PubMed

    Nakamura, Moriya; Kamio, Yukiyoshi; Miyazaki, Tetsuya

    2008-07-07

    We experimentally demonstrated linewidth-tolerant 10-Gbit/s (2.5-Gsymbol/s) 16-quadrature amplitude modulation (QAM) by using a distributed-feedback laser diode (DFB-LD) with a linewidth of 30 MHz. Error-free operation, i.e., a bit-error rate (BER) of <10⁻⁹, was achieved in transmission over 120 km of standard single mode fiber (SSMF) without any dispersion compensation. The phase-noise canceling capability provided by a pilot-carrier and standard electronic pre-equalization to suppress inter-symbol interference (ISI) gave clear 16-QAM constellations and floor-less BER characteristics. We evaluated the BER characteristics by real-time measurement of six (three different thresholds for each I- and Q-component) symbol error rates (SERs) with simultaneous constellation observation.

  10. Preservation of a T-Invariant Reductionist Scaffold in "effective" Intrinsically Irreversible Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Ne'Eman, Yuval

    2003-08-01

    The recently developed Irreversible Quantum Mechanics formalism describes physical reality both at the statistical and the particle levels and voices have been heard suggesting that it be used in fundamental physics. Two examples are sketched in which similar steps were taken and proved to be terrible errors: Aristotle's rejection of the vacuum because "nature does not tolerate it", replacing it by a law of force linear in velocity and Chew's rejection of Quantum Field Theory because "it is not unitary off-mass-shell". In Particle Physics, I suggest using the new representation as an "effective" picture without abandoning the canonical background.

  11. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  12. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response.

    PubMed

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

    2016-04-29

    In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system's response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor's optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantitation SNRQ, etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section.

  13. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response

    PubMed Central

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

    2016-01-01

    In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system’s response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor’s optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantitation SNRQ, etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section. PMID:27136562

  14. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    NASA Astrophysics Data System (ADS)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passarge, M; Fix, M K; Manser, P

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent consecutively executed error detection methods: a masking technique that verifies infield radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
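
    For readers unfamiliar with the gamma evaluation used as one of the checks above, the following Python sketch implements a simplified, brute-force global 2D gamma test at 3%/3 mm; it illustrates the standard criterion, not the authors' implementation, and the toy dose image is made up.

        import numpy as np

        def gamma_pass_rate(reference, evaluated, pixel_mm, dose_tol=0.03,
                            dta_mm=3.0, low_dose_cutoff=0.10):
            """Simplified global 2D gamma evaluation (default 3%/3 mm).

            Brute-force search within the distance-to-agreement radius; points
            below `low_dose_cutoff` of the reference maximum are ignored.
            """
            ref_max = reference.max()
            radius_px = int(np.ceil(dta_mm / pixel_mm))
            rows, cols = reference.shape
            passed = counted = 0
            for i in range(rows):
                for j in range(cols):
                    if reference[i, j] < low_dose_cutoff * ref_max:
                        continue
                    counted += 1
                    best = np.inf
                    for di in range(-radius_px, radius_px + 1):
                        for dj in range(-radius_px, radius_px + 1):
                            ii, jj = i + di, j + dj
                            if not (0 <= ii < rows and 0 <= jj < cols):
                                continue
                            dist2 = (di * di + dj * dj) * pixel_mm ** 2
                            if dist2 > dta_mm ** 2:
                                continue
                            dose_term = ((evaluated[ii, jj] - reference[i, j])
                                         / (dose_tol * ref_max)) ** 2
                            best = min(best, dist2 / dta_mm ** 2 + dose_term)
                    passed += best <= 1.0   # gamma^2 <= 1 is the pass criterion
            return 100.0 * passed / max(counted, 1)

        # Toy check: a uniform 1% dose scaling error still passes 3%/3 mm.
        ref = np.outer(np.hanning(40), np.hanning(40))
        print(gamma_pass_rate(ref, 1.01 * ref, pixel_mm=1.0))   # -> 100.0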

  16. Designing an algorithm to preserve privacy for medical record linkage with error-prone data.

    PubMed

    Pal, Doyel; Chen, Tingting; Zhong, Sheng; Khethavath, Praveen

    2014-01-20

    Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In records linkage, protecting the patients' privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are only restricted to textual data. To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, that allows the error-prone data to be correctly matched if the distance between the two records is below a threshold. We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers, which meets the regulation of Health Insurance Portability and Accountability Act. It does not require a third party and it is secure in that neither entity can learn the records in the other's database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm. We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other's database.
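
    Stripped of the cryptographic protocol (which is the paper's actual contribution and is not reproduced here), the underlying matching rule for error-prone numerical data can be sketched as follows in Python; the field names, weights, and threshold are hypothetical.

        from math import sqrt

        def linkable(record_a, record_b, weights, threshold):
            """Match two error-prone numerical records if their weighted
            Euclidean distance is below `threshold` (the plaintext rule; the
            paper wraps an equivalent test in a privacy-preserving protocol,
            which is not reproduced here)."""
            d2 = sum(w * (record_a[k] - record_b[k]) ** 2 for k, w in weights.items())
            return sqrt(d2) < threshold

        # Hypothetical numeric attributes; record_b carries a typo-like error.
        record_a = {"birth_year": 1974, "zip_prefix": 142, "visit_code": 310}
        record_b = {"birth_year": 1947, "zip_prefix": 142, "visit_code": 310}
        weights = {"birth_year": 1.0, "zip_prefix": 1.0, "visit_code": 1.0}

        print(linkable(record_a, record_b, weights, threshold=30.0))   # True
        print(linkable(record_a, record_b, weights, threshold=10.0))   # False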

  17. Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors

    DTIC Science & Technology

    2017-05-01

    Yarnall, Timothy M.; Geisler, David J.; Schieler, Curt M. (Cambridge, MA, USA). Abstract (fragment): Free-space coupling to photonic lanterns is more tolerant to tilt errors and F-number mismatch than ... these errors. Photonic lanterns provide a means for transitioning from the free-space regime to the single-mode fiber (SMF) regime by ...

  18. 77 FR 60917 - Trinexapac-ethyl; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... ``hog, meat by-products'' in order to correct inadvertent errors in the final rule tolerance table for...'' is revised to ``hog, meat by-products.'' V. Statutory and Executive Order Reviews This final rule... alphabetical order an entry for ``Hog, meat by-products''. 0 iii. Revising the entries for ``Wheat, forage...

  19. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  20. Feasibility study on the verification of actual beam delivery in a treatment room using EPID transit dosimetry.

    PubMed

    Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun

    2014-12-04

    The aim of this study is to evaluate the ability of transit dosimetry using commercial treatment planning system (TPS) and an electronic portal imaging device (EPID) with simple calibration method to verify the beam delivery based on detection of large errors in treatment room. Twenty four fields of intensity modulated radiotherapy (IMRT) plans were selected from four lung cancer patients and used in the irradiation of an anthropomorphic phantom. The proposed method was evaluated by comparing the calculated dose map from TPS and EPID measurement on the same plane using a gamma index method with a 3% dose and 3 mm distance-to-dose agreement tolerance limit. In a simulation using a homogeneous plastic water phantom, performed to verify the effectiveness of the proposed method, the average passing rate of the transit dose based on gamma index was high enough, averaging 94.2% when there was no error during beam delivery. The passing rate of the transit dose for 24 IMRT fields was lower with the anthropomorphic phantom, averaging 86.8% ± 3.8%, a reduction partially due to the inaccuracy of TPS calculations for inhomogeneity. Compared with the TPS, the absolute value of the transit dose at the beam center differed by -0.38% ± 2.1%. The simulation study indicated that the passing rate of the gamma index was significantly reduced, to less than 40%, when a wrong field was erroneously irradiated to patient in the treatment room. This feasibility study suggested that transit dosimetry based on the calculation with commercial TPS and EPID measurement with simple calibration can provide information about large errors for treatment beam delivery.

  1. Reliability and coverage analysis of non-repairable fault-tolerant memory systems

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Carroll, B. D.

    1976-01-01

    A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were developed. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.

  2. Analysis of typical fault-tolerant architectures using HARP

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Bechta Dugan, Joanne; Trivedi, Kishor S.; Rothmann, Elizabeth M.; Smith, W. Earl

    1987-01-01

    Difficulties encountered in the modeling of fault-tolerant systems are discussed. The Hybrid Automated Reliability Predictor (HARP) approach to modeling fault-tolerant systems is described. The HARP is written in FORTRAN, consists of nearly 30,000 lines of codes and comments, and is based on behavioral decomposition. Using the behavioral decomposition, the dependability model is divided into fault-occurrence/repair and fault/error-handling models; the characteristics and combining of these two models are examined. Examples in which the HARP is applied to the modeling of some typical fault-tolerant systems, including a local-area network, two fault-tolerant computer systems, and a flight control system, are presented.

  3. Automated error correction in IBM quantum computer and explicit generalization

    NASA Astrophysics Data System (ADS)

    Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.

    2018-06-01

    Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states on the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with a high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which can both detect and automatically correct any arbitrary phase-change error, phase-flip error, or bit-flip error, or a combination of these error types.
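
    As background, the bit-flip protection that underlies such codes can be illustrated classically. The Python sketch below simulates the 3-qubit repetition (bit-flip) code with majority-vote correction under independent flip errors; it is a pedagogical stand-in, not the GHZ-based circuit run on the IBM device.

        import random

        def encode(bit):
            """Three-bit repetition encoding of a classical bit."""
            return [bit, bit, bit]

        def correct(block):
            """Majority vote recovers the logical bit when at most one flip occurred."""
            return 1 if sum(block) >= 2 else 0

        random.seed(0)
        p_flip = 0.05            # independent bit-flip probability per physical bit
        trials = 10000
        failures = 0
        for _ in range(trials):
            logical = random.randint(0, 1)
            block = [b ^ (random.random() < p_flip) for b in encode(logical)]
            failures += correct(block) != logical

        # Unprotected error rate would be ~p_flip; the code fails only when two or
        # more flips occur, i.e. with probability 3p^2(1 - p) + p^3 ~ 0.007 here.
        print(f"logical error rate: {failures / trials:.4f}")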

  4. Alignment control study for the solar optical telescope

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Analysis of the alignment and focus errors that can be tolerated, methods of sensing such errors, and mechanisms to make the necessary corrections were addressed. Alternate approaches and their relative merits were considered. The results of this study indicate that adequate alignment control can be achieved.

  5. Medical errors and uncertainty in primary healthcare: A comparative study of coping strategies among young and experienced GPs

    PubMed Central

    Kuikka, Liisa; Pitkälä, Kaisu

    2014-01-01

    Abstract Objective. To study coping differences between young and experienced GPs in primary care who experience medical errors and uncertainty. Design. Questionnaire-based survey (self-assessment) conducted in 2011. Setting. Finnish primary practice offices in Southern Finland. Subjects. Finnish GPs engaged in primary health care from two different respondent groups: young (working experience ≤ 5 years, n = 85) and experienced (working experience > 5 years, n = 80). Main outcome measures. Outcome measures included experiences and attitudes expressed by the included participants towards medical errors and tolerance of uncertainty, their coping strategies, and factors that may influence (positively or negatively) sources of errors. Results. In total, 165/244 GPs responded (response rate: 68%). Young GPs expressed significantly more often fear of committing a medical error (70.2% vs. 48.1%, p = 0.004) and admitted more often than experienced GPs that they had committed a medical error during the past year (83.5% vs. 68.8%, p = 0.026). Young GPs were less prone to apologize to a patient for an error (44.7% vs. 65.0%, p = 0.009) and found, more often than their more experienced colleagues, on-site consultations and electronic databases useful for avoiding mistakes. Conclusion. Experienced GPs seem to better tolerate uncertainty and also seem to fear medical errors less than their young colleagues. Young and more experienced GPs use different coping strategies for dealing with medical errors. Implications. When GPs become more experienced, they seem to get better at coping with medical errors. Means to support these skills should be studied in future research. PMID:24914458

  6. Determining the refractive index and thickness of thin films from prism coupler measurements

    NASA Technical Reports Server (NTRS)

    Kirsch, S. T.

    1981-01-01

    A simple method of determining thin film parameters from mode indices measured using a prism coupler is described. The problem is reduced to doing two least-squares straight-line fits through measured mode indices vs effective mode number. The slope and y-intercept of the line are simply related to the thickness and refractive index of the film, respectively. The approach takes into account the correlation between, as well as the uncertainty in, the individual measurements from all sources of error to give precise error tolerances on the best-fit values. Due to the precision of the tolerances, anisotropic films can be identified and characterized.
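
    The straight-line fitting step can be sketched as follows in Python; the mode-index data and their uncertainties are invented, and the mapping of the fitted slope and intercept to film thickness and refractive index (which depends on the waveguide dispersion relation) is not reproduced here.

        import numpy as np

        # Hypothetical prism-coupler data: measured mode index for each guided mode.
        mode_number = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        mode_index = np.array([1.642, 1.628, 1.614, 1.601, 1.587])
        sigma = np.full_like(mode_index, 5e-4)          # assumed 1-sigma errors

        # Weighted least-squares straight line: mode_index ~ b0 + b1 * mode_number.
        A = np.vstack([np.ones_like(mode_number), mode_number]).T / sigma[:, None]
        coef, *_ = np.linalg.lstsq(A, mode_index / sigma, rcond=None)
        cov = np.linalg.inv(A.T @ A)                    # parameter covariance matrix
        b0, b1 = coef
        print(f"y-intercept = {b0:.4f} +/- {np.sqrt(cov[0, 0]):.4f}")
        print(f"slope       = {b1:.5f} +/- {np.sqrt(cov[1, 1]):.5f}")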

  7. Evaluation of the Sparton tight-tolerance AXBT

    NASA Technical Reports Server (NTRS)

    Boyd, Janice D.; Linzell, Robert S.

    1993-01-01

    Forty-six near-simultaneous pairs of conductivity - temperature - depth (CTD) and Sparton 'tight tolerance' air expendable bathythermograph (AXBT) temperature profiles were obtained in summer 1991 from a location in the Sargasso Sea. The data were analyzed to assess the temperature and depth accuracies of the Sparton AXBTs. The tight-tolerance criterion was not achieved using the manufacturer's equations but may have been achieved using customized equations computed from the CTD data. The temperature data from the customized equations had a one standard deviation error of 0.13 °C. A customized elapsed fall time-to-depth conversion equation was found to be z = 1.620t - 2.2384×10⁻⁴t² + 1.291×10⁻⁷t³, with z the depth in meters and t the elapsed fall time after probe release in seconds. The standard deviation of the depth error was about 5 m; a rule of thumb for estimating maximum bounds on the depth error below 100 m could be expressed as ±2% of depth or ±10 m, whichever is greater. This equation gave greater depth accuracy than either the manufacturer's supplied equation or the navy standard equation.
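
    The quoted conversion equation and depth-error rule of thumb are simple to apply; a small Python helper, using only the numbers given in the abstract, might look like this.

        def axbt_depth_m(t_seconds):
            """Customized fall-time-to-depth conversion quoted in the record:
            z = 1.620t - 2.2384e-4 t^2 + 1.291e-7 t^3 (z in meters, t in seconds)."""
            t = t_seconds
            return 1.620 * t - 2.2384e-4 * t ** 2 + 1.291e-7 * t ** 3

        def max_depth_error_m(depth_m):
            """Rule of thumb for the maximum depth error below 100 m:
            +/-2% of depth or +/-10 m, whichever is greater."""
            return max(0.02 * depth_m, 10.0)

        for t in (60.0, 180.0, 300.0):
            z = axbt_depth_m(t)
            print(f"t = {t:5.1f} s  ->  z = {z:6.1f} m  (max error ~ +/-{max_depth_error_m(z):.1f} m)")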

  8. Elucidation of cross-species proteomic effects in human and hominin bone proteome identification through a bioinformatics experiment.

    PubMed

    Welker, F

    2018-02-20

    The study of ancient protein sequences is increasingly focused on the analysis of older samples, including those of ancient hominins. The analysis of such ancient proteomes thereby potentially suffers from "cross-species proteomic effects": the loss of peptide and protein identifications at increased evolutionary distances due to a larger number of protein sequence differences between the database sequence and the analyzed organism. Error-tolerant proteomic search algorithms should theoretically overcome this problem at both the peptide and protein level; however, this has not been demonstrated. If error-tolerant searches do not overcome the cross-species proteomic issue then there might be inherent biases in the identified proteomes. Here, a bioinformatics experiment is performed to test this using a set of modern human bone proteomes and three independent searches against sequence databases at increasing evolutionary distances: the human (0 Ma), chimpanzee (6-8 Ma) and orangutan (16-17 Ma) reference proteomes, respectively. Incorrectly suggested amino acid substitutions are absent when employing adequate filtering criteria for mutable Peptide Spectrum Matches (PSMs), but roughly half of the mutable PSMs were not recovered. As a result, peptide and protein identification rates are higher in error-tolerant mode compared to non-error-tolerant searches but did not recover protein identifications completely. Data indicates that peptide length and the number of mutations between the target and database sequences are the main factors influencing mutable PSM identification. The error-tolerant results suggest that the cross-species proteomics problem is not overcome at increasing evolutionary distances, even at the protein level. Peptide and protein loss has the potential to significantly impact divergence dating and proteome comparisons when using ancient samples as there is a bias towards the identification of conserved sequences and proteins. Effects are minimized between moderately divergent proteomes, as indicated by almost complete recovery of informative positions in the search against the chimpanzee proteome (≈90%, 6-8 Ma). This provides a bioinformatic background to future phylogenetic and proteomic analysis of ancient hominin proteomes, including the future description of novel hominin amino acid sequences, but also has negative implications for the study of fast-evolving proteins in hominins, non-hominin animals, and ancient bacterial proteins in evolutionary contexts.

  9. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.

  10. High-Threshold Fault-Tolerant Quantum Computation with Analog Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Fukui, Kosuke; Tomita, Akihisa; Okamoto, Atsushi; Fujii, Keisuke

    2018-04-01

    To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. However, it is still challenging to experimentally generate the GKP qubit with the required squeezing level, 14.8 dB, of the existing fault-tolerant quantum computation. To reduce this requirement, we propose a high-threshold fault-tolerant quantum computation with GKP qubits using topologically protected measurement-based quantum computation with the surface code. By harnessing analog information contained in the GKP qubits, we apply analog quantum error correction to the surface code. Furthermore, we develop a method to prevent the squeezing level from decreasing during the construction of the large-scale cluster states for the topologically protected, measurement-based, quantum computation. We numerically show that the required squeezing level can be relaxed to less than 10 dB, which is within the reach of the current experimental technology. Hence, this work can considerably alleviate this experimental requirement and take a step closer to the realization of large-scale quantum computation.

  11. Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.

    2011-01-01

    Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.
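
    The paper itself analyzes several detection and recovery methods; as a generic illustration of error detection on such a link (not a method taken from the paper), the Python sketch below appends a CRC-16/CCITT frame check so the receiver can detect corruption and request retransmission.

        def crc16_ccitt(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
            """Bitwise CRC-16/CCITT-FALSE, a common frame check for serial links."""
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    if crc & 0x8000:
                        crc = ((crc << 1) ^ poly) & 0xFFFF
                    else:
                        crc = (crc << 1) & 0xFFFF
            return crc

        def make_frame(payload: bytes) -> bytes:
            """Append the CRC so the receiver can verify frame integrity."""
            return payload + crc16_ccitt(payload).to_bytes(2, "big")

        def check_frame(frame: bytes) -> bool:
            payload, received = frame[:-2], int.from_bytes(frame[-2:], "big")
            return crc16_ccitt(payload) == received

        frame = make_frame(b"sensor reading: 42")
        corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]    # single bit flip in transit
        print(check_frame(frame))        # True  -> accept the frame
        print(check_frame(corrupted))    # False -> detect the error, request resend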

  12. Advancing Technology for Starlight Suppression via an External Occulter

    NASA Technical Reports Server (NTRS)

    Kasdin, N. J.; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M.; Walkemeyer, P.; Bach, V.; Oakes, E.; Cady, E.; hide

    2011-01-01

    External occulters provide the starlight suppression needed for detecting and characterizing exoplanets with a much simpler telescope and instrument than is required for the equivalent performing coronagraph. In this paper we describe progress on our Technology Development for Exoplanet Missions project to design, manufacture, and measure a prototype occulter petal. We focus on the key requirement of manufacturing a precision petal while controlling its shape within precise tolerances. The required tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We discuss the deployable starshade design, representative error budget, thermal analysis, and prototype manufacturing. We also present our metrology system and methodology for verifying that the petal shape meets the contrast requirement. Finally, we summarize the progress to date building the prototype petal.

  13. Study of optical design of Blu-ray pickup head system with a liquid crystal element.

    PubMed

    Fang, Yi-Chin; Yen, Chih-Ta; Hsu, Jui-Hsin

    2014-10-10

    This paper proposes a newly developed optical design and an active compensation method for a Blu-ray pickup head system with a liquid crystal (LC) element. Different from traditional pickup lens design, this new optical design delivers performance as good as the conventional one but has more room for tolerance control, which plays a role in antishaking devices, such as portable Blu-ray players. A hole-pattern electrode and LC optics with external voltage input were employed to generate a symmetric nonuniform electrical field in the LC layer that directs LC molecules into the appropriate gradient refractive index distribution, resulting in the convergence or divergence of specific light beams. LC optics deliver fast and, most importantly, active compensation through optical design when errors occur. Simulations and tolerance analysis were conducted using Code V software, including various tolerance analyses, such as defocus, tilt, and decenter, and their related compensations. Two distinct Blu-ray pickup head system designs were examined in this study. In traditional Blu-ray pickup head system designs, the aperture stop is always set on objective lenses. In the study, the aperture stop is on the LC lens as a newly developed lens. The results revealed that an optical design with aperture stop set on the LC lens as an active compensation device successfully eliminated up to 57% of coma aberration compared with traditional optical designs so that this pickup head lens design will have more space for tolerance control.

  14. Designing an Algorithm to Preserve Privacy for Medical Record Linkage With Error-Prone Data

    PubMed Central

    Pal, Doyel; Chen, Tingting; Khethavath, Praveen

    2014-01-01

    Background Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In records linkage, protecting the patients’ privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are only restricted to textual data. Objective To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. Methods To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, that allows the error-prone data to be correctly matched if the distance between the two records is below a threshold. Results We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers, which meets the regulation of Health Insurance Portability and Accountability Act. It does not require a third party and it is secure in that neither entity can learn the records in the other’s database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm. We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Conclusions Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other’s database. PMID:25600786

  15. Fault tolerant architectures for integrated aircraft electronics systems, task 2

    NASA Technical Reports Server (NTRS)

    Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.

    1984-01-01

    The architectural basis for an advanced fault tolerant on-board computer to succeed the current generation of fault tolerant computers is examined. The network error tolerant system architecture is studied with particular attention to intercluster configurations and communication protocols, and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made is discussed. The analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand driven data-flow architecture, which appears to have possible application in fault tolerant systems is described and work investigating the feasibility of automatic generation of aircraft flight control programs from abstract specifications is reported.

  16. Combined methods of tolerance increasing for embedded SRAM

    NASA Astrophysics Data System (ADS)

    Shchigorev, L. A.; Shagurin, I. I.

    2016-10-01

    The combined use of different methods for increasing the fault tolerance of SRAM, such as error detection and correction codes, parity bits, and redundant elements, is considered. Area penalties due to using combinations of these methods are investigated. Estimates are made for different configurations of a 4K x 128 RAM memory block in a 28 nm manufacturing process. Evaluation of the effectiveness of the proposed combinations is also reported. The results of these investigations can be useful for designing fault-tolerant “systems on chip”.
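
    As a generic illustration of the error-correcting-code ingredient (not the specific scheme evaluated in the paper), the Python sketch below implements a Hamming(7,4) encoder/decoder that corrects any single bit flip in a stored nibble.

        def hamming74_encode(d):
            """Hamming(7,4) encoding; bit positions (1-based) are
            1=p1, 2=p2, 3=d1, 4=p3, 5=d2, 6=d3, 7=d4."""
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4
            p2 = d1 ^ d3 ^ d4
            p3 = d2 ^ d3 ^ d4
            return [p1, p2, d1, p3, d2, d3, d4]

        def hamming74_decode(code):
            """Return (corrected data bits, error position or 0 if none)."""
            c = list(code)
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]      # parity over positions 1,3,5,7
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]      # parity over positions 2,3,6,7
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]      # parity over positions 4,5,6,7
            syndrome = s1 + 2 * s2 + 4 * s3     # 1-based position of a single error
            if syndrome:
                c[syndrome - 1] ^= 1            # correct the flipped bit
            return [c[2], c[4], c[5], c[6]], syndrome

        word = [1, 0, 1, 1]
        stored = hamming74_encode(word)
        stored[5] ^= 1                          # single upset in one SRAM cell
        recovered, position = hamming74_decode(stored)
        print(recovered == word, f"(corrected bit position {position})")   # True (position 6)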

  17. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    NASA Astrophysics Data System (ADS)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d =3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  18. A Fortran 77 computer code for damped least-squares inversion of Slingram electromagnetic anomalies over thin tabular conductors

    NASA Astrophysics Data System (ADS)

    Dondurur, Derman; Sarı, Coşkun

    2004-07-01

    A FORTRAN 77 computer code is presented that permits the inversion of Slingram electromagnetic anomalies to an optimal conductor model. Damped least-squares inversion algorithm is used to estimate the anomalous body parameters, e.g. depth, dip and surface projection point of the target. Iteration progress is controlled by maximum relative error value and iteration continued until a tolerance value was satisfied, while the modification of Marquardt's parameter is controlled by sum of the squared errors value. In order to form the Jacobian matrix, the partial derivatives of theoretical anomaly expression with respect to the parameters being optimised are calculated by numerical differentiation by using first-order forward finite differences. A theoretical and two field anomalies are inserted to test the accuracy and applicability of the present inversion program. Inversion of the field data indicated that depth and the surface projection point parameters of the conductor are estimated correctly, however, considerable discrepancies appeared on the estimated dip angles. It is therefore concluded that the most important factor resulting in the misfit between observed and calculated data is due to the fact that the theory used for computing Slingram anomalies is valid for only thin conductors and this assumption might have caused incorrect dip estimates in the case of wide conductors.
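
    The ingredients described above (a forward-difference Jacobian, Marquardt damping adjusted according to the sum of squared errors, and a relative-error stopping tolerance) can be sketched generically in Python; this is not the FORTRAN 77 code itself, and the decaying-exponential test model is a made-up stand-in for the Slingram response.

        import numpy as np

        def forward_jacobian(f, params, x, eps=1e-6):
            """First-order forward finite-difference Jacobian of f(params, x)."""
            base = f(params, x)
            J = np.empty((base.size, params.size))
            for j in range(params.size):
                p = params.copy()
                p[j] += eps
                J[:, j] = (f(p, x) - base) / eps
            return J

        def damped_lsq(f, params, x, y, lam=1e-2, tol=1e-6, max_iter=100):
            """Marquardt-style damped least squares: the damping factor grows when
            the sum of squared errors increases and shrinks when it decreases."""
            params = np.asarray(params, dtype=float)
            sse = np.sum((y - f(params, x)) ** 2)
            for _ in range(max_iter):
                r = y - f(params, x)
                J = forward_jacobian(f, params, x)
                step = np.linalg.solve(J.T @ J + lam * np.eye(params.size), J.T @ r)
                trial = params + step
                sse_trial = np.sum((y - f(trial, x)) ** 2)
                if sse_trial >= sse:
                    lam *= 10.0              # reject the step, increase damping
                    continue
                params, sse, lam = trial, sse_trial, lam * 0.5
                if np.max(np.abs(step) / (np.abs(params) + 1e-12)) < tol:
                    break                    # maximum relative change below tolerance
            return params

        # Hypothetical decaying-exponential model, a stand-in for the anomaly expression.
        model = lambda p, x: p[0] * np.exp(-p[1] * x) + p[2]
        x = np.linspace(0.0, 5.0, 50)
        y = model(np.array([2.0, 1.3, 0.4]), x)
        y = y + 0.01 * np.random.default_rng(1).standard_normal(x.size)
        print(damped_lsq(model, [1.0, 1.0, 0.0], x, y))   # ~ [2.0, 1.3, 0.4]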

  19. Straightforward and accurate technique for post-coupler stabilization in drift tube linac structures

    NASA Astrophysics Data System (ADS)

    Khalvati, Mohammad Reza; Ramberger, Suitbert

    2016-04-01

    The axial electric field of Alvarez drift tube linacs (DTLs) is known to be susceptible to variations due to static and dynamic effects like manufacturing tolerances and beam loading. Post-couplers are used to stabilize the accelerating fields of DTLs against tuning errors. Tilt sensitivity and its slope have been introduced as measures for the stability right from the invention of post-couplers but since then the actual stabilization has mostly been done by tedious iteration. In the present article, the local tilt-sensitivity slope TSn' is established as the principal measure for stabilization instead of tilt sensitivity or some visual slope, and its significance is developed on the basis of an equivalent-circuit diagram of the DTL. Experimental and 3D simulation results are used to analyze its behavior and to define a technique for stabilization that allows finding the best post-coupler settings with just four tilt-sensitivity measurements. CERN's Linac4 DTL Tank 2 and Tank 3 have been stabilized successfully using this technique. The final tilt-sensitivity error has been reduced from ±100 %/MHz down to ±3 %/MHz for Tank 2 and down to ±1 %/MHz for Tank 3. Finally, an accurate procedure for tuning the structure using slug tuners is discussed.

  20. Consistent Tolerance Bounds for Statistical Distributions

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    Assumption that sample comes from population with particular distribution is made with confidence C if data lie between certain bounds. These "confidence bounds" depend on C and assumption about distribution of sampling errors around regression line. Graphical test criteria using tolerance bounds are applied in industry where statistical analysis influences product development and use. Applied to evaluate equipment life.

  1. Superconducting quantum circuits at the surface code threshold for fault tolerance.

    PubMed

    Barends, R; Kelly, J; Megrant, A; Veitia, A; Sank, D; Jeffrey, E; White, T C; Mutus, J; Fowler, A G; Campbell, B; Chen, Y; Chen, Z; Chiaro, B; Dunsworth, A; Neill, C; O'Malley, P; Roushan, P; Vainsencher, A; Wenner, J; Korotkov, A N; Cleland, A N; Martinis, John M

    2014-04-24

    A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.

  2. Towards fault tolerant adiabatic quantum computation.

    PubMed

    Lidar, Daniel A

    2008-04-25

    I show how to protect adiabatic quantum computation (AQC) against decoherence and certain control errors, using a hybrid methodology involving dynamical decoupling, subsystem and stabilizer codes, and energy gaps. Corresponding error bounds are derived. As an example, I show how to perform decoherence-protected AQC against local noise using at most two-body interactions.

  3. The Effects of Observation Errors on the Attack Vulnerability of Complex Networks

    DTIC Science & Technology

    2012-11-01

    ... to construct a true network we select a topology: Erdős–Rényi (Erdős & Rényi, 1959), scale-free (Barabási & Albert, 1999), or small-world ... Efficiency of Scale-Free Networks: Error and Attack Tolerance. Physica A, Volume 320, pp. 622-642. Erdős, P. & Rényi, A., 1959. On Random Graphs, I.

  4. Error threshold for color codes and random three-body Ising models.

    PubMed

    Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A

    2009-08-28

    We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p_c = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.

  5. Rainfall prediction with backpropagation method

    NASA Astrophysics Data System (ADS)

    Wahyuni, E. G.; Fauzan, L. M. F.; Abriyani, F.; Muchlis, N. F.; Ulfa, M.

    2018-03-01

    Rainfall is an important factor in many fields, such as aviation and agriculture. Although rainfall prediction is assisted by technology, its accuracy cannot reach 100% and there is still the possibility of error, even though up-to-date rainfall prediction information is needed in various fields, such as agriculture and aviation. In agriculture, to obtain abundant and high-quality yields, farmers are very dependent on weather conditions, especially rainfall. Rainfall is also one of the factors that affect the safety of aircraft. To overcome these problems, a system is required that can accurately predict rainfall. In this research, artificial neural network modeling is applied to rainfall prediction. The method used to model the artificial neural network is the backpropagation method. Backpropagation can improve performance through repeated training, meaning that the weights of the ANN interconnections can approach the values they should have. Another advantage of this method is its ability to learn adaptively; because the multilayer network adjusts its weights so as to minimize error, it also offers a degree of fault tolerance. Therefore, this method can provide good system resilience and work consistently well. The network is designed with 4 input variables, namely air temperature, air humidity, wind speed, and sunshine duration, and 3 output variables, i.e., low rainfall, medium rainfall, and high rainfall. Based on the research conducted, the network performs properly, as evidenced by predictions that agree with the results of manual calculations.
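
    A minimal NumPy sketch of such a backpropagation network, with the four inputs and three rainfall classes described above, is given below; the synthetic data, single hidden layer, and learning rate are illustrative assumptions rather than details from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-in data: 4 inputs (air temperature, humidity, wind speed,
        # sunshine duration) and 3 one-hot outputs (low / medium / high rainfall).
        X = rng.random((200, 4))
        labels = (X @ np.array([0.1, 0.6, 0.1, -0.4]) * 3 + 1.5).clip(0, 2).astype(int)
        Y = np.eye(3)[labels]

        # One hidden layer of sigmoid units, trained by plain backpropagation.
        W1, b1 = rng.normal(0.0, 0.5, (4, 8)), np.zeros(8)
        W2, b2 = rng.normal(0.0, 0.5, (8, 3)), np.zeros(3)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        lr = 0.5

        for epoch in range(2000):
            H = sigmoid(X @ W1 + b1)                 # forward pass
            O = sigmoid(H @ W2 + b2)
            dO = (O - Y) * O * (1 - O)               # backward pass (squared-error loss)
            dH = (dO @ W2.T) * H * (1 - H)
            W2 -= lr * H.T @ dO / len(X)
            b2 -= lr * dO.mean(axis=0)
            W1 -= lr * X.T @ dH / len(X)
            b1 -= lr * dH.mean(axis=0)

        print(f"training accuracy: {(O.argmax(axis=1) == labels).mean():.2f}")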

  6. Discovery of error-tolerant biclusters from noisy gene expression data.

    PubMed

    Gupta, Rohit; Rao, Navneet; Kumar, Vipin

    2011-11-24

    An important analysis performed on microarray gene-expression data is to discover biclusters, which denote groups of genes that are coherently expressed for a subset of conditions. Various biclustering algorithms have been proposed to find different types of biclusters from these real-valued gene-expression data sets. However, these algorithms suffer from several limitations such as inability to explicitly handle errors/noise in the data; difficulty in discovering small bicliusters due to their top-down approach; inability of some of the approaches to find overlapping biclusters, which is crucial as many genes participate in multiple biological processes. Association pattern mining also produce biclusters as their result and can naturally address some of these limitations. However, traditional association mining only finds exact biclusters, which limits its applicability in real-life data sets where the biclusters may be fragmented due to random noise/errors. Moreover, as they only work with binary or boolean attributes, their application on gene-expression data require transforming real-valued attributes to binary attributes, which often results in loss of information. Many past approaches have tried to address the issue of noise and handling real-valued attributes independently but there is no systematic approach that addresses both of these issues together. In this paper, we first propose a novel error-tolerant biclustering model, 'ET-bicluster', and then propose a bottom-up heuristic-based mining algorithm to sequentially discover error-tolerant biclusters directly from real-valued gene-expression data. The efficacy of our proposed approach is illustrated by comparing it with a recent approach RAP in the context of two biological problems: discovery of functional modules and discovery of biomarkers. For the first problem, two real-valued S.Cerevisiae microarray gene-expression data sets are used to demonstrate that the biclusters obtained from ET-bicluster approach not only recover larger set of genes as compared to those obtained from RAP approach but also have higher functional coherence as evaluated using the GO-based functional enrichment analysis. The statistical significance of the discovered error-tolerant biclusters as estimated by using two randomization tests, reveal that they are indeed biologically meaningful and statistically significant. For the second problem of biomarker discovery, we used four real-valued Breast Cancer microarray gene-expression data sets and evaluate the biomarkers obtained using MSigDB gene sets. The results obtained for both the problems: functional module discovery and biomarkers discovery, clearly signifies the usefulness of the proposed ET-bicluster approach and illustrate the importance of explicitly incorporating noise/errors in discovering coherent groups of genes from gene-expression data.
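
    The flavor of the error-tolerance requirement can be illustrated with a small checker in Python: under a simple weak model (our simplification, not the authors' exact formulation for real-valued data), a binary submatrix qualifies as an error-tolerant bicluster if no row or column contains more than a given fraction of zeros.

        import numpy as np

        def is_error_tolerant_bicluster(B, rows, cols, eps=0.2):
            """Weak error-tolerance test on a binary matrix: the submatrix
            B[rows][:, cols] qualifies if the fraction of zeros ("errors") in
            every row and in every column is at most eps."""
            sub = B[np.ix_(rows, cols)]
            row_ok = (1.0 - sub.mean(axis=1) <= eps).all()
            col_ok = (1.0 - sub.mean(axis=0) <= eps).all()
            return bool(row_ok and col_ok)

        # Toy binary gene-by-condition matrix with one noisy block of 1s.
        B = np.zeros((6, 6), dtype=int)
        B[1:5, 1:5] = 1
        B[2, 3] = 0          # a single dropped value inside the block
        print(is_error_tolerant_bicluster(B, [1, 2, 3, 4], [1, 2, 3, 4], eps=0.3))  # True
        print(is_error_tolerant_bicluster(B, [0, 1, 2, 3], [1, 2, 3, 4], eps=0.3))  # False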

  7. [Positional accuracy and quality assurance of Backup JAWs required for volumetric modulated arc therapy].

    PubMed

    Tatsumi, Daisaku; Nakada, Ryosei; Ienaga, Akinori; Yomoda, Akane; Inoue, Makoto; Ichida, Takao; Hosono, Masako

    2012-01-01

    The tolerance of the Backup diaphragm (Backup JAW) setting in Elekta linac was specified as 2 mm according to the AAPM TG-142 report. However, the tolerance and the quality assurance procedure for volumetric modulated arc therapy (VMAT) was not provided. This paper describes positional accuracy and quality assurance procedure of the Backup JAWs required for VMAT. It was found that a gap-width error of the Backup JAW by a sliding window test needed to be less than 1.5 mm for prostate VMAT delivery. It was also confirmed that the gap-widths had been maintained with an error of 0.2 mm during the past one year.

  8. Holonomic surface codes for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco

    2018-02-01

    Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.

  9. Effect of neoclassical toroidal viscosity on error-field penetration thresholds in tokamak plasmas.

    PubMed

    Cole, A J; Hegna, C C; Callen, J D

    2007-08-10

    A model for field-error penetration is developed that includes nonresonant as well as the usual resonant field-error effects. The nonresonant components cause a neoclassical toroidal viscous torque that keeps the plasma rotating at a rate comparable to the ion diamagnetic frequency. The new theory is used to examine resonant error-field penetration threshold scaling in Ohmic tokamak plasmas. Compared to previous theoretical results, we find the plasma is less susceptible to error-field penetration and locking, by a factor that depends on the nonresonant error-field amplitude.

  10. KAPAO Prime: Design and Simulation

    NASA Astrophysics Data System (ADS)

    McGonigle, Lorcan; Choi, P. I.; Severson, S. A.; Spjut, E.

    2013-01-01

    KAPAO (KAPAO A Pomona Adaptive Optics instrument) is a dual-band natural guide star adaptive optics system designed to measure and remove atmospheric aberration over UV-NIR wavelengths from Pomona College’s telescope atop Table Mountain. We present here the final optical system, KAPAO Prime, designed in the Zemax optical design software, which uses custom off-axis paraboloid mirrors (OAPs) to manipulate light appropriately for a Shack-Hartmann wavefront sensor, deformable mirror, and science cameras. KAPAO Prime is characterized by diffraction-limited imaging over the full 81” field of view of our optical camera at f/33 as well as over the smaller field of view of our NIR camera at f/50. In Zemax, tolerances of 1% on OAP focal length and off-axis distance were shown to contribute an additional 4 nm of wavefront error (98% confidence) over the field of view of our optical camera; the contribution from surface irregularity was determined analytically to be 40 nm for OAPs specified to λ/10 surface irregularity (632.8 nm). Modeling of the temperature deformation of the breadboard in SolidWorks revealed 70 micron contractions along the edges of the board for a decrease of 75°F; when applied to the OAP positions, such displacements from the optimal layout are predicted to contribute an additional 20 nanometers of wavefront error. Flexure modeling of the breadboard due to gravity is on-going. We hope to begin alignment and testing of KAPAO Prime in Q1 2013.

  11. An Efficient Silent Data Corruption Detection Method with Error-Feedback Control and Even Sampling for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Berrocal, Eduardo; Cappello, Franck

    The silent data corruption (SDC) problem is attracting more and more attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with the bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in the detection sensitivity.
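
    A minimal, offline sketch of prediction-based SDC detection follows: each value is compared against a one-step-ahead linear extrapolation and flagged when the prediction error is far outside the typical range. The robust median threshold below stands in for the paper's error-feedback control and even-sampling schemes, which are not reproduced; the parameter k and the injected fault are illustrative.

        import numpy as np

        def detect_sdc(series, k=6.0):
            # One-step-ahead linear extrapolation from the two previous samples.
            preds = 2.0 * series[1:-1] - series[:-2]
            errs = np.abs(series[2:] - preds)
            # Flag samples whose prediction error greatly exceeds the typical error.
            thresh = k * (np.median(errs) + 1e-12)
            return list(np.nonzero(errs > thresh)[0] + 2)

        data = np.sin(np.linspace(0.0, 3.0, 200))   # smooth "application state"
        data[120] += 0.5                            # injected bit-flip-like corruption
        print(detect_sdc(data))                     # reports indices at and just after 120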

  12. Induction and transmission of tolerance to the synthetic pesticide emamectin benzoate in field and laboratory populations of diamondback moth.

    PubMed

    Rahman, M M; Baker, G; Powis, K J; Roush, R T; Schmidt, O

    2010-08-01

    Field surveys of insect pest populations in agroecosystems reveal low but significant levels of tolerance to synthetic and biological pesticides but fail to uncover resistance alleles in test crosses. To study the potential of inducible mechanisms to generate tolerance to synthetic pesticides, we performed baseline susceptibility studies in field and laboratory populations of diamondback moth, Plutella xylostella (L.), to commercial formulations of emamectin benzoate. Pesticide exposure in the field caused elevated levels of tolerance, which decreased in field-collected populations after the insects were maintained on a pesticide-free diet in the laboratory. Because no significant resistance alleles were identified in back-crossed individuals, the observed increase in tolerance was probably not based on preexisting recessive resistance mechanisms in the population. Instead, the genetic analysis after five and 12 generations is compatible with a transient up-regulation of an immune and metabolic status in tolerant insects that can be transmitted to offspring by a maternal effect. Although these epigenetic effects contributed to incremental increases in tolerance in the first five generations, other resistance mechanisms that are transmitted genetically predominate after 12 generations of increased exposure to the pesticide.

  13. Impact of shorter wavelengths on optical quality for laws

    NASA Technical Reports Server (NTRS)

    Wissinger, Alan B.; Noll, Robert J.; Tsacoyeanes, James G.; Tausanovitch, Jeanette R.

    1993-01-01

    This study explores parametrically, as a function of wavelength, the degrading effects of several common optical aberrations (defocus, astigmatism, wavefront tilts, etc.), using the heterodyne mixing efficiency factor as the merit function. A 60 cm diameter aperture beam expander with an expansion ratio of 15:1 and a primary mirror focal ratio of f/2 was designed for the study. An HDOS copyrighted analysis program determined the value of the merit function for various optical misalignments. With sensitivities provided by the analysis, preliminary error budget and tolerance allocations were made for potential optical wavefront errors and boresight errors during laser shot transit time. These were compared with the baseline 1.5 m CO2 LAWS and the optical fabrication state of the art (SOA) as characterized by the Hubble Space Telescope. Reducing the wavelength and changing the optical design resulted in optical quality tolerances within the SOA both at 2 and 1 micrometers. However, advanced sensing and control devices would be necessary to maintain on-orbit alignment. Optical tolerances for maintaining boresight stability would have to be tightened by a factor of 1.8 for a 2 micrometer system and by 3.6 for a 1 micrometer system relative to the baseline CO2 LAWS. Available SOA components could be used for operation at 2 micrometers, but operation at 1 micrometer does not appear feasible.

  14. Fault-Tolerant Software-Defined Radio on Manycore

    NASA Technical Reports Server (NTRS)

    Ricketts, Scott

    2015-01-01

    Software-defined radio (SDR) platforms generally rely on field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), but such architectures require significant software development. In addition, application demands for radiation mitigation and fault tolerance exacerbate programming challenges. MaXentric Technologies, LLC, has developed a manycore-based SDR technology that provides 100 times the throughput of conventional radiation-hardened general-purpose processors. Manycore systems (30-100 cores and beyond) have the potential to provide high processing performance at error rates that are equivalent to current space-deployed uniprocessor systems. MaXentric's innovation is a highly flexible radio, providing over-the-air reconfiguration; adaptability; and uninterrupted, real-time, multimode operation. The technology is also compliant with NASA's Space Telecommunications Radio System (STRS) architecture. In addition to its many uses within NASA communications, the SDR can also serve as a highly programmable research-stage prototyping device for new waveforms and other communications technologies. It can also support noncommunication codes on its multicore processor, collocated with the communications workload, reducing the size, weight, and power of the overall system by aggregating processing jobs onto a single-board computer.

  15. Preliminary design of the redundant software experiment

    NASA Technical Reports Server (NTRS)

    Campbell, Roy; Deimel, Lionel; Eckhardt, Dave, Jr.; Kelly, John; Knight, John; Lauterbach, Linda; Lee, Larry; Mcallister, Dave; Mchugh, John

    1985-01-01

    The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates, constructed using techniques and environments which are similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault-tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed, randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain which can be achieved with fault tolerance. Data on the reliability gains realized, and the cost of the fault-tolerant configurations, can be used to design a companion experiment to determine the cost effectiveness of the fault-tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole because they will provide useful insight into the nature and cause of hard-to-find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.

  16. Lexical architecture based on a hierarchy of codes for high-speed string correction

    NASA Astrophysics Data System (ADS)

    de Bertrand de Beuvron, Francois; Trigano, Philippe

    1992-03-01

    AI systems for the general public have to be really tolerant to errors. These errors could be of several kinds: typographic, phonetic, grammatical, or semantic. A special lexical dictionary architecture has been designed to deal with the first two. It extends the hierarchical file method of E. Tanaka and Y. Kojima.

  17. A theoretical basis for the analysis of redundant software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques, known as fault-tolerant software, is an understanding of the impact of multiple joint occurrences of coincident errors. A theoretical basis for the study of redundant software is developed which provides a probabilistic framework for empirically evaluating the effectiveness of the general (N-Version) strategy when component versions are subject to coincident errors, and permits an analytical study of the effects of these errors. The basic assumptions of the model are: (1) independently designed software components are chosen in a random sample; and (2) in the user environment, the system is required to execute on a stationary input series. The intensity of coincident errors has a central role in the model. This function describes the propensity to introduce design faults in such a way that software components fail together when executing in the user environment. The model is used to give conditions under which an N-Version system is a better strategy for reducing system failure probability than relying on a single version of software. A condition which limits the effectiveness of a fault-tolerant strategy is studied, and the question is posed whether system failure probability varies monotonically with increasing N or whether an optimal choice of N exists.
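
    The effect the model captures can be illustrated with a toy Monte Carlo: when version failures are correlated through a shared per-input "difficulty" (a crude stand-in for the intensity of coincident errors), majority voting over N versions loses much of its benefit. All distributions and numbers below are invented for illustration and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def majority_failure_rate(n_versions=3, p_fail=0.05, rho=0.0, trials=100_000):
            # Each input has a latent difficulty that shifts every version's
            # failure probability together; rho controls how correlated they are.
            difficulty = rng.beta(1, 9, size=trials)
            p = np.clip((1 - rho) * p_fail + rho * difficulty, 0.0, 1.0)
            fails = rng.random((trials, n_versions)) < p[:, None]
            return (fails.sum(axis=1) > n_versions // 2).mean()

        print(majority_failure_rate(rho=0.0))  # near the independent-failure prediction (~0.007)
        print(majority_failure_rate(rho=0.8))  # coincident errors: noticeably worse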

  18. The use of automatic programming techniques for fault tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Wild, C.

    1985-01-01

    It is conjectured that the production of software for ultra-reliable computing systems such as those required by the Space Station, aircraft, nuclear power plants, and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault-tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection, as well as the automatic generation of assertions and test cases from abstract data type specifications, are outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge-based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.

  19. About problematic peculiarities of Fault Tolerance digital regulation organization

    NASA Astrophysics Data System (ADS)

    Rakov, V. I.; Zakharova, O. V.

    2018-05-01

    Solutions to the problems of estimating the working capacity of regulation chains, and of preventing situations in which that capacity is violated, are offered in three directions. The first direction is developing methods of representing the regulation loop (circuit) by combining diffuse components, and forming algorithmic tooling for building serviceability-assessment predicates separately for the components and for the regulation loops (circuits, contours) in general. The second direction is creating methods of Fault Tolerance redundancy in the process of complex assessment of the current values of control actions, closure errors and their regulated parameters. The third direction is creating methods of comparing the processes of change of control actions, errors of closure and regulated parameters with their standard models or their surroundings. This direction allows one to develop methods and algorithmic tools aimed at preventing loss of serviceability and effectiveness, not only of a separate digital regulator, but of the whole complex of Fault Tolerance regulation.

  20. Defect tolerance in resistor-logic demultiplexers for nanoelectronics.

    PubMed

    Kuekes, Philip J; Robinett, Warren; Williams, R Stanley

    2006-05-28

    Since defect rates are expected to be high in nanocircuitry, we analyse the performance of resistor-based demultiplexers in the presence of defects. The defects observed to occur in fabricated nanoscale crossbars are stuck-open, stuck-closed, stuck-short, broken-wire, and adjacent-wire-short defects. We analyse the distribution of voltages on the nanowire output lines of a resistor-logic demultiplexer, based on an arbitrary constant-weight code, when defects occur. These analyses show that resistor-logic demultiplexers can tolerate small numbers of stuck-closed, stuck-open, and broken-wire defects on individual nanowires, at the cost of some degradation in the circuit's worst-case voltage margin. For stuck-short and adjacent-wire-short defects, and for nanowires with too many defects of the other types, the demultiplexer can still achieve error-free performance, but with a smaller set of output lines. This design thus has two layers of defect tolerance: the coding layer improves the yield of usable output lines, and an avoidance layer guarantees that error-free performance is achieved.

  1. Designing and Building to ``Impossible'' Tolerances for Vibration Sensitive Equipment

    NASA Astrophysics Data System (ADS)

    Hertlein, Bernard H.

    2003-03-01

    As the precision and production capabilities of modern machines and factories increase, our expectations of them rise commensurately. Facility designers and engineers find themselves increasingly involved with measurement needs and design tolerances that were almost unthinkable a few years ago. An area of expertise that demonstrates this very clearly is the field of vibration measurement and control. Magnetic Resonance Imaging, Semiconductor manufacturing, micro-machining, surgical microscopes — These are just a few examples of equipment or techniques that need an extremely stable vibration environment. The challenge to architects, engineers and contractors is to provide that level of stability without undue cost or sacrificing the aesthetics and practicality of a structure. In addition, many facilities have run out of expansion room, so the design is often hampered by the need to reuse all or part of an existing structure, or to site vibration-sensitive equipment close to an existing vibration source. High resolution measurements and nondestructive testing techniques have proven to be invaluable additions to the engineer's toolbox in meeting these challenges. The author summarizes developments in this field over the last fifteen years or so, and lists some common errors of design and construction that can cost a lot of money in retrofit if missed, but can easily be avoided with a little foresight, an appropriate testing program and a carefully thought out checklist.

  2. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.
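
    The idea of computing to within a given error tolerance can be shown with a much simpler error-controlled refinement loop than the CAG machinery itself: refine the grid, estimate the error from the change between successive refinements, and stop once the estimate is below the tolerance. The quadrature example below only mirrors that control flow; it is not the hyperbolic-PDE scheme of the paper.

        import numpy as np

        def trapezoid(y, dx):
            # Composite trapezoid rule on uniformly spaced samples y.
            return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

        def solve_within_tolerance(f, a, b, tol=1e-8, n0=16, max_refine=25):
            n = n0
            prev = trapezoid(f(np.linspace(a, b, n + 1)), (b - a) / n)
            for _ in range(max_refine):
                n *= 2
                cur = trapezoid(f(np.linspace(a, b, n + 1)), (b - a) / n)
                if abs(cur - prev) < tol:       # error estimate from successive grids
                    return cur, n
                prev = cur
            raise RuntimeError("error tolerance not reached")

        value, n = solve_within_tolerance(np.sin, 0.0, np.pi)
        print(value, n)   # approaches 2.0, refining the grid only as far as needed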

  3. 76 FR 67315 - Supplemental Nutrition Assistance Program: Quality Control Error Tolerance Threshold

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ...This direct final rule is amending the Quality Control (QC) review error threshold in our regulations from $25.00 to $50.00. The purpose for raising the QC error threshold is to make permanent the temporary threshold change that was required by the American Recovery and Reinvestment Act of 2008. This change does not have an impact on the public. The QC system measures the accuracy of the eligibility system for the Supplemental Nutrition Assistance Program (SNAP).

  4. Disjointness of Stabilizer Codes and Limitations on Fault-Tolerant Logical Gates

    NASA Astrophysics Data System (ADS)

    Jochym-O'Connor, Tomas; Kubica, Aleksander; Yoder, Theodore J.

    2018-04-01

    Stabilizer codes are among the most successful quantum error-correcting codes, yet they have important limitations on their ability to fault tolerantly compute. Here, we introduce a new quantity, the disjointness of the stabilizer code, which, roughly speaking, is the number of mostly nonoverlapping representations of any given nontrivial logical Pauli operator. The notion of disjointness proves useful in limiting transversal gates on any error-detecting stabilizer code to a finite level of the Clifford hierarchy. For code families, we can similarly restrict logical operators implemented by constant-depth circuits. For instance, we show that it is impossible, with a constant-depth but possibly geometrically nonlocal circuit, to implement a logical non-Clifford gate on the standard two-dimensional surface code.

  5. Issues on human acceleration tolerance after long-duration space flights

    NASA Technical Reports Server (NTRS)

    Kumar, K. Vasantha; Norfleet, William T.

    1992-01-01

    This report reviewed the literature on human tolerance to acceleration at 1 G and changes in tolerance after exposure to hypogravic fields. It was found that human tolerance decreased after exposure to hypokinetic and hypogravic fields, but the magnitude of such reduction ranged from 0 to 30 percent for plateau G forces and 30 to 70 percent for time tolerance on sustained G forces. A logistic regression model of the probability of individuals with 25 percent reduction in +Gz tolerance after 1 to 41 days of hypogravic exposures was constructed. The estimated values from the model showed a good correlation with the observed data. A brief review of the need for in-flight centrifuge during long-duration missions was also presented. Review of the available data showed that the use of countermeasures (such as anti-G suits, periodic acceleration, and exercise) reduced the decrement in acceleration tolerance after long-duration space flights. Areas of further research include quantification of the effect of countermeasures on tolerance, and methods to augment tolerance during and after exposures to hypogravic fields. Such data are essential for planning long-duration human missions.

  6. Integrated analysis of error detection and recovery

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1985-01-01

    An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.

  7. Identification of drought-responsive genes in a drought-tolerant cotton (Gossypium hirsutum L.) cultivar under reduced irrigation field conditions and development of candidate gene markers for drought tolerance

    USDA-ARS?s Scientific Manuscript database

    Cotton productivity is affected by water deficit, and little is known about the molecular basis of drought tolerance in cotton. In this study, microarray analysis was conducted to identify drought-responsive genes in the third topmost leaves of the field-grown drought-tolerant cotton (Gossypium hirs...

  8. Detailed Uncertainty Analysis of the ZEM-3 Measurement System

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    The measurement of the Seebeck coefficient and electrical resistivity is critical to the investigation of all thermoelectric systems. Therefore, the measurement uncertainty must be well understood in order to report ZT values which are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates the error in the electrical resistivity measurement as a result of sample geometry tolerance, probe geometry tolerance, statistical error, and multi-meter uncertainty. The uncertainty on the Seebeck coefficient includes probe wire correction factors, statistical error, multi-meter uncertainty, and, most importantly, the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows for quantification of the phenomenon and provides an estimate of the uncertainty of the Seebeck coefficient. The thermoelectric power factor was found to have an uncertainty of +9/-14% at high temperature and 9% near room temperature.
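
    For the resistivity part of such a budget, independent relative uncertainties are commonly combined in quadrature (rho = R·A/L, so geometry and meter tolerances enter as relative terms). The sketch below shows only that bookkeeping step; the percentages are placeholders, not the ZEM-3 values from the paper, and the cold-finger correction for the Seebeck coefficient is not modeled.

        import numpy as np

        # Illustrative relative uncertainty contributions to rho = R * A / L.
        contributions = {
            "sample cross-section A": 0.010,   # geometry tolerance (assumed)
            "probe separation L":     0.015,   # probe geometry tolerance (assumed)
            "multi-meter reading R":  0.002,   # instrument specification (assumed)
            "statistical scatter":    0.005,   # repeatability (assumed)
        }
        rel_unc = np.sqrt(sum(u**2 for u in contributions.values()))
        print(f"relative resistivity uncertainty ~ {100 * rel_unc:.2f}%")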

  9. Making classical ground-state spin computing fault-tolerant.

    PubMed

    Crosson, I J; Bacon, D; Brown, K R

    2010-09-01

    We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at nonzero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques in fault-tolerant classical computing and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.

  10. Influence of the ambient acceleration field upon acute acceleration tolerance in chickens

    NASA Technical Reports Server (NTRS)

    Smith, A. H.; Spangler, W. L.; Rhode, E. A.; Burton, R. R.

    1979-01-01

    The acceleration tolerance of domestic fowl (Rhode Island Red cocks) acutely exposed to a 6 Gz field was measured as the time over which a normal heart rate can be maintained. This period of circulatory adjustment ends abruptly with pronounced bradycardia. For chickens which have previously been physiologically adapted to a 2.5-G field, the acute acceleration tolerance is greatly increased. The influence of the ambient acceleration field on the adjustment of the circulatory system appears to be a general phenomenon.

  11. Measurements of the toroidal torque balance of error field penetration locked modes

    DOE PAGES

    Shiraki, Daisuke; Paz-Soldan, Carlos; Hanson, Jeremy M.; ...

    2015-01-05

    Here, detailed measurements from the DIII-D tokamak of the toroidal dynamics of error field penetration locked modes under the influence of slowly evolving external fields enable study of the toroidal torques on the mode, including the interaction with the intrinsic error field. The error field in these low density Ohmic discharges is well known based on the mode penetration threshold, allowing resonant and non-resonant torque effects to be distinguished. These m/n = 2/1 locked modes are found to be well described by a toroidal torque balance between the resonant interaction with n = 1 error fields and a viscous torque in the electron diamagnetic drift direction which is observed to scale as the square of the perturbed field due to the island. Fitting to this empirical torque balance allows a time-resolved measurement of the intrinsic error field of the device, providing evidence for a time-dependent error field in DIII-D due to ramping of the Ohmic coil current.

  12. Evaluation of real-time data obtained from gravimetric preparation of antineoplastic agents shows medication errors with possible critical therapeutic impact: Results of a large-scale, multicentre, multinational, retrospective study.

    PubMed

    Terkola, R; Czejka, M; Bérubé, J

    2017-08-01

    Medication errors are a significant cause of morbidity and mortality, especially with antineoplastic drugs, owing to their narrow therapeutic index. Gravimetric workflow software systems have the potential to reduce volumetric errors during intravenous antineoplastic drug preparation which may occur when verification relies on visual inspection. Our aim was to detect medication errors with possible critical therapeutic impact, as determined by the rate of prevented medication errors in chemotherapy compounding after implementation of gravimetric measurement. A large-scale, retrospective analysis was carried out of data related to medication errors identified during preparation of antineoplastic drugs in 10 pharmacy services ("centres") in five European countries following the introduction of an intravenous workflow software gravimetric system. Errors were defined as errors in dose volumes outside tolerance levels, identified during the weighing stages of preparation of chemotherapy solutions, which would not otherwise have been detected by conventional visual inspection. The gravimetric system detected that 7.89% of the 759 060 doses of antineoplastic drugs prepared at participating centres between July 2011 and October 2015 had error levels outside the accepted tolerance range set by individual centres, and prevented these doses from reaching patients. The proportion of antineoplastic preparations with deviations >10% ranged from 0.49% to 5.04% across sites, with a mean of 2.25%. The proportion of preparations with deviations >20% ranged from 0.21% to 1.27% across sites, with a mean of 0.71%. There was considerable variation in error levels for different antineoplastic agents. Introduction of a gravimetric preparation system for antineoplastic agents detected and prevented dosing errors which would not have been recognized with traditional methods and could have resulted in toxicity or suboptimal therapeutic outcomes for patients undergoing anticancer treatment. © 2017 The Authors. Journal of Clinical Pharmacy and Therapeutics Published by John Wiley & Sons Ltd.
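
    The gravimetric check itself reduces to a small calculation: convert the weighed mass to a delivered volume via the solution's density, compare with the prescribed volume, and flag deviations outside the site's tolerance band. The sketch below is only that arithmetic with the 10% band mentioned above; it is not the workflow software's actual interface, and all input numbers are made up.

        def check_dose(prescribed_ml, weighed_g, density_g_per_ml, tolerance=0.10):
            # Gravimetric verification: weighed mass -> delivered volume -> deviation.
            delivered_ml = weighed_g / density_g_per_ml
            deviation = (delivered_ml - prescribed_ml) / prescribed_ml
            return deviation, abs(deviation) <= tolerance

        dev, within = check_dose(prescribed_ml=20.0, weighed_g=21.8, density_g_per_ml=1.02)
        print(f"deviation = {dev:+.1%}, within tolerance: {within}")   # +6.9%, True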

  13. Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun

    2016-07-01

    The swing arm profilometer (SAP) has been playing a very important role in testing large aspheric optics. As one of most significant error sources that affects the test accuracy, misalignment error leads to low-order errors such as aspherical aberrations and coma apart from power. In order to analyze the effect of misalignment errors, the relation between alignment parameters and test results of axisymmetric optics is presented. Analytical solutions of SAP system errors from tested mirror misalignment, arm length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, respectively; and misalignment tolerance is given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are demonstrated to verify the model; according to the error budget, we achieve the SAP test for low-order errors except power with accuracy of 0.1 μm root-mean-square.

  14. Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.

    1992-01-01

    The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.

  15. Study on fault-tolerant processors for advanced launch system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Liu, Jyh-Charn

    1990-01-01

    Issues related to the reliability of a redundant system with large main memory are addressed. The Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for the presentation. When the system is free of latent faults, the probability of system crash due to multiple channel faults is shown to be insignificant even when voting on the outputs of computing channels is infrequent. Using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the redundancy or the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by those CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs (with a very low hardware overhead) can be used to dramatically reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, two different schemes were developed to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.

  16. A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.

    2007-01-01

    Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work attempts to address two primary objectives. The first objective is to develop a systematic methodology for approximating realistic 3-D fluid actuators using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.

  17. A Test Methodology for Determining Space-Readiness of Xilinx SRAM-Based FPGA Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Graham, Paul S; Morgan, Keith S

    2008-01-01

    Using reconfigurable, static random-access memory (SRAM) based field-programmable gate arrays (FPGAs) for space-based computation has been an exciting area of research for the past decade. Since both the circuit and the circuit's state are stored in radiation-susceptible memory, both could be altered by the harsh space radiation environment. Both the circuit and the circuit's state can be protected by triple-modular redundancy (TMR), but applying TMR to FPGA user designs is often an error-prone process. Faulty application of TMR could cause the FPGA user circuit to output incorrect data. This paper will describe a three-tiered methodology for testing FPGA user designs for space-readiness. We will describe the standard approach to testing FPGA user designs using a particle accelerator, as well as two methods using fault injection and a modeling tool. While accelerator testing is the current 'gold standard' for pre-launch testing, we believe the use of fault injection and modeling tools allows for easy, cheap and uniform access for discovering errors early in the design process.

  18. Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems

    NASA Astrophysics Data System (ADS)

    Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka

    2018-06-01

    One field in which electronic materials have an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in the direct current to direct current converters, and their modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller that has shown its performance in previous works, and more specifically the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model: the obtained mean squared error is 3.47 × 10^-6, the maximum error is 16.32 × 10^-3, and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages such as higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine and knowledge base) because, ultimately, ANNs are sums and products.
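
    A compact sketch of this kind of controller modeling is given below: a small feed-forward network is fitted to a surrogate duty-cycle function and the same figures of merit quoted above (mean squared error, maximum error, regression coefficient R) are reported. The surrogate function, network size and data are assumptions for illustration; they are not the fuzzy controller or dataset from the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        X = rng.uniform(0.0, 1.0, size=(2000, 2))             # normalised converter inputs (assumed)
        y = 0.5 + 0.3 * np.tanh(2.0 * (X[:, 0] - X[:, 1]))    # surrogate duty-cycle signal

        net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
        net.fit(X[:1500], y[:1500])                            # train on the first 1500 samples
        pred = net.predict(X[1500:])                           # evaluate on the held-out samples

        mse = np.mean((pred - y[1500:]) ** 2)
        max_err = np.max(np.abs(pred - y[1500:]))
        R = np.corrcoef(pred, y[1500:])[0, 1]
        print(f"MSE = {mse:.2e}, max error = {max_err:.2e}, R = {R:.5f}")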

  19. Effects of sulphur dioxide (SO2) on growth and flowering of SO2-tolerant and non-tolerant genotypes of Phleum pratense.

    PubMed

    Clapperton, M J; Reid, D M

    1994-01-01

    The objective of this study was to compare the growth and interaction of clipping and sulphur dioxide (SO(2)) exposure on SO(2)-tolerant and non-tolerant genotypes of Phleum pratense at two field sites along an SO(2)-concentration gradient. Sulphur-dioxide-tolerant and non-tolerant genotypes of Phleum pratense were identified from indigenous populations that had been collected along the same SO(2)-concentration gradient in southern Alberta, Canada. Physiological differences between the two genotypes were confirmed by supplying leaves with (14)CO(2) and examining the assimilate partitioning between the genotypes. For the field experiment, clones of each genotype and seedlings grown from commercial seed were planted at two different field sites along an SO(2)-emission gradient. There were no differences in growth between the genotypes at the two field sites after the first year except that the SO(2)-tolerant clones had a greater percentage of root length colonised by vesicular-arbuscular (VA) mycorrhizal fungi. After the second growing season, there was a significant decrease in the number of inflorescences produced by plants exposed to SO(2), particularly by the non-tolerant genotype. The added stress of defoliation appeared to increase the sensitivity of flowering to SO(2), again particularly in the non-tolerant genotype. The results of the field study showed that flowering as opposed to vegetative plant growth was more sensitive to long-term low-concentration SO(2) exposure and that this sensitivity was compounded by the stress interaction of defoliation.

  20. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    PubMed Central

    Clark, Kevin B.

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987

  1. Citrus Huanglongbing tolerance in Australian Citrus Relatives, Microcitrus and Eremocirus

    USDA-ARS?s Scientific Manuscript database

    Tolerance, or resistance to citrus huanglongbing will be important as a long term solution for this disease. In a field trial conducted with over 1000 plants belonging to different genera in the sub-family Aurantioideae, we observed field tolerance in many Australian citrus relatives. To confirm the...

  2. Exploration of Two Training Paradigms Using Forced Induced Weight Shifting With the Tethered Pelvic Assist Device to Reduce Asymmetry in Individuals After Stroke: Case Reports.

    PubMed

    Bishop, Lauri; Khan, Moiz; Martelli, Dario; Quinn, Lori; Stein, Joel; Agrawal, Sunil

    2017-10-01

    Many robotic devices in rehabilitation incorporate an assist-as-needed haptic guidance paradigm to promote training. This error reduction model, while beneficial for skill acquisition, could be detrimental for long-term retention. Error augmentation (EA) models have been explored as alternatives. A robotic Tethered Pelvic Assist Device has been developed to study force application to the pelvis on gait and was used here to induce weight shift onto the paretic (error reduction) or nonparetic (error augmentation) limb during treadmill training. The purpose of these case reports is to examine effects of training with these two paradigms to reduce load force asymmetry during gait in two individuals after stroke (>6 mos). Participants presented with baseline gait asymmetry, although independent community ambulators. Participants underwent 1-hr trainings for 3 days using either the error reduction or error augmentation model. Outcomes included the Borg rating of perceived exertion scale for treatment tolerance and measures of force and stance symmetry. Both participants tolerated training. Force symmetry (measured on treadmill) improved from pretraining to posttraining (36.58% and 14.64% gains), however, with limited transfer to overground gait measures (stance symmetry gains of 9.74% and 16.21%). Training with the Tethered Pelvic Assist Device device proved feasible to improve force symmetry on the treadmill irrespective of training model. Future work should consider methods to increase transfer to overground gait.

  3. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting, or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit up to a 100 degree polynomial. All computations in the program are carried out in double precision format for real numbers and in long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
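
    The tolerance-driven mode of the program can be re-created in a few lines: raise the polynomial degree until the least-squares fit error meets the user's tolerance. The sketch below uses numpy's polyfit in place of the original orthogonal factorial polynomials and Stirling-number reduction, so it mirrors the control flow rather than the exact numerics.

        import numpy as np

        def fit_to_tolerance(x, y, tol, max_degree=100):
            # Increase the degree until the RMS least-squares fit error meets tol.
            for degree in range(1, max_degree + 1):
                coeffs = np.polyfit(x, y, degree)
                rms_err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
                if rms_err <= tol:
                    return coeffs, degree, rms_err
            raise ValueError("tolerance not met up to max_degree")

        x = np.linspace(0.0, 1.0, 21)          # uniformly spaced data, as AKLSQF expects
        y = np.exp(x)
        coeffs, degree, err = fit_to_tolerance(x, y, tol=1e-6)
        print(degree, err)                     # a modest degree already meets the tolerance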

  4. Commissioning of a PTW 34070 large-area plane-parallel ionization chamber for small field megavoltage photon dosimetry.

    PubMed

    Kupfer, Tom; Lehmann, Joerg; Butler, Duncan J; Ramanathan, Ganesan; Bailey, Tracy E; Franich, Rick D

    2017-11-01

    This study investigates a large-area plane-parallel ionization chamber (LAC) for measurements of dose-area product in water (DAP_w) in megavoltage (MV) photon fields. Uniformity of the electrode separation of the LAC (PTW 34070 Bragg Peak Chamber, sensitive volume diameter: 8.16 cm) was measured using high-resolution microCT. Signal dependence on the angle α of beam incidence for square 6 MV fields of side length s = 20 cm and 1 cm was measured in air. Polarity and recombination effects were characterized in 6, 10, and 18 MV photon fields. To assess the lateral setup tolerance, scanned LAC profiles of a 1 × 1 cm^2 field were acquired. A 6 MV calibration coefficient, N_D,w,LAC, was determined in a field collimated by a 5 cm diameter stereotactic cone with known DAP_w. Additional calibrations in 10 × 10 cm^2 fields at 6, 10, and 18 MV were performed. The electrode separation is uniform and agrees with specifications. Volume averaging leads to a signal increase proportional to ~1/cos(α) in small fields. Correction factors for polarity and recombination range from 0.9986 to 0.9996 and from 1.0007 to 1.0024, respectively. Off-axis displacement by up to 0.5 cm did not change the measured signal in a 1 × 1 cm^2 field. N_D,w,LAC was 163.7 mGy cm^-2 nC^-1 and differs by +3.0% from the coefficient derived in the 10 × 10 cm^2 6 MV field. Response in 10 and 18 MV fields increased by 1.0% and 2.7% compared to 6 MV. The LAC requires only small correction factors for DAP_w measurements and shows little energy dependence. Lateral setup errors of 0.5 cm are tolerated in 1 × 1 cm^2 fields, but beam incidence must be kept as close to normal as possible. Calibration in 10 × 10 cm^2 fields is not recommended because of the LAC's over-response. The accuracy of relative point-dose measurements in the field's periphery is an important limiting factor for the accuracy of DAP_w measurements. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
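
    As a worked-number illustration of how the calibration coefficient is applied, the snippet below converts a corrected chamber reading into a dose-area product. Only N_D,w,LAC and the ranges of the correction factors come from the abstract; the raw reading and the specific correction values are assumed for the example, and the formalism is simplified.

        # DAP_w = N_D,w,LAC * M_raw * k_pol * k_s  (simplified formalism)
        N_DwLAC = 163.7            # mGy cm^-2 nC^-1, 6 MV stereotactic-cone calibration
        M_raw   = 6.10             # nC, raw chamber reading (assumed)
        k_pol   = 0.9990           # polarity correction, within the quoted range
        k_s     = 1.0015           # recombination correction, within the quoted range
        DAP_w = N_DwLAC * M_raw * k_pol * k_s
        print(f"DAP_w = {DAP_w:.0f} mGy cm^2")   # roughly 1000 mGy cm^2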

  5. Overcoming Species Boundaries in Peptide Identification with Bayesian Information Criterion-driven Error-tolerant Peptide Search (BICEPS)*

    PubMed Central

    Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno

    2012-01-01

    Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179

  6. DEPEND: A simulation-based environment for system level dependability analysis

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar; Iyer, Ravishankar K.

    1992-01-01

    The design and evaluation of highly reliable computer systems is a complex issue. Designers mostly develop such systems based on prior knowledge and experience, and occasionally from analytical evaluations of simplified designs. A simulation-based environment called DEPEND, which is especially geared toward the design and evaluation of fault-tolerant architectures, is presented. DEPEND is unique in that it exploits the properties of object-oriented programming to provide a flexible framework with which a user can rapidly model and evaluate various fault-tolerant systems. The key features of the DEPEND environment are described, and its capabilities are illustrated with a detailed analysis of a real design. In particular, DEPEND is used to simulate the fault tolerance of the Unix-based Tandem Integrity system and to evaluate how well it handles near-coincident errors caused by correlated and latent faults. Issues such as memory scrubbing, re-integration policies, and workload-dependent repair times, which affect how the system handles near-coincident errors, are also evaluated. The method used by DEPEND to simulate error latency and the time-acceleration technique that provides enormous simulation speed-up are also discussed. Unlike other simulation-based dependability studies, the use of these approaches and the accuracy of the simulation model are validated by comparing the results of the simulations with measurements obtained from fault injection experiments conducted on a production Tandem Integrity machine.

  7. An Investigation into Soft Error Detection Efficiency at Operating System Level

    PubMed Central

    Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance. PMID:24574894

  8. An investigation into soft error detection efficiency at operating system level.

    PubMed

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.

  9. Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.

    PubMed

    Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing

    2016-01-01

    The accuracy of optical tracking systems is important to scientists. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To enhance the accuracy of these systems further and to reduce the effect of synchronization and visual field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronous errors, and an error distribution map in field of view. Synchronization control maximizes the parallel processing capability of FPGA, and synchronous error measurement can effectively detect the errors caused by synchronization in an optical tracking system. The distribution of positioning errors can be detected in field of view through the aforementioned error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments that involve the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and different fields of view.

  10. 75 FR 6576 - Exemption from the Requirement of a Tolerance; Technical Amendment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-10

    ... the requirement of a tolerance is established for residues of Aspergillus flavus NRRL 21882 on corn, field, forage; corn, field, grain; corn, field, stover; corn, field, aspirated grain fractions; corn, sweet, kernel plus cob with husk removed; corn, sweet, forage; corn, sweet, stover; corn, pop, grain...

  11. A comparison of hydroponic and soil-based screening methods to identify salt tolerance in the field in barley

    PubMed Central

    Tavakkoli, Ehsan; Fatehi, Foad; Rengasamy, Pichu; McDonald, Glenn K.

    2012-01-01

    Success in breeding crops for yield and other quantitative traits depends on the use of methods to evaluate genotypes accurately under field conditions. Although many screening criteria have been suggested to distinguish between genotypes for their salt tolerance under controlled environmental conditions, there is a need to test these criteria in the field. In this study, the salt tolerance, ion concentrations, and accumulation of compatible solutes of genotypes of barley with a range of putative salt tolerance were investigated using three growing conditions (hydroponics, soil in pots, and natural saline field). Initially, 60 genotypes of barley were screened for their salt tolerance and uptake of Na+, Cl–, and K+ at 150 mM NaCl and, based on this, a subset of 15 genotypes was selected for testing in pots and in the field. Expression of salt tolerance in saline solution culture was not a reliable indicator of the differences in salt tolerance between barley plants that were evident in saline soil-based comparisons. Significant correlations were observed in the rankings of genotypes on the basis of their grain yield production at a moderately saline field site and their relative shoot growth in pots at ECe 7.2 [Spearman’s rank correlation (rs)=0.79] and ECe 15.3 (rs=0.82) and the crucial parameter of leaf Na+ (rs=0.72) and Cl– (rs=0.82) concentrations at ECe 7.2 dS m−1. This work has established screening procedures that correlated well with grain yield at sites with moderate levels of soil salinity. This study also showed that both salt exclusion and osmotic tolerance are involved in salt tolerance and that the relative importance of these traits may differ with the severity of the salt stress. In soil, ion exclusion tended to be more important at low to moderate levels of stress but osmotic stress became more important at higher stress levels. Salt exclusion coupled with a synthesis of organic solutes were shown to be important components of salt tolerance in the tolerant genotypes and further field tests of these plants under stress conditions will help to verify their potential utility in crop-improvement programmes. PMID:22442423

  12. Hessian matrix approach for determining error field sensitivity to coil deviations

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi

    2018-05-01

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
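
    The eigen-decomposition step described above can be illustrated with a short numerical sketch. The Hessian below is a random placeholder, not output of the FOCUS code; the point is only that, once a Hessian of the error-field cost with respect to coil-position parameters is available, its eigenvalues rank how sensitive the error field is to each combined pattern of coil displacements.

      import numpy as np

      # Placeholder symmetric Hessian of the error-field cost with respect to
      # six hypothetical coil-position parameters (not a real FOCUS output).
      rng = np.random.default_rng(0)
      A = rng.standard_normal((6, 6))
      H = A @ A.T

      # Eigenvalues (ascending) rank the sensitivity of the error field to
      # coil displacements along the corresponding eigenvectors.
      eigvals, eigvecs = np.linalg.eigh(H)
      most_sensitive = eigvecs[:, -1]    # displacement pattern to control most tightly
      least_sensitive = eigvecs[:, 0]    # direction that barely perturbs the field
      print("sensitivities:", eigvals)
      print("most damaging displacement pattern:", most_sensitive)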

  13. Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1

    DTIC Science & Technology

    2016-03-01

    Master's thesis. … open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double data rate type three and secure digital card memories, an interface to the main satellite bus, and XILINX's soft error mitigation softcore. …

  14. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
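
    As a concrete illustration of one of the multiversion ideas surveyed above, the following sketch shows the control flow of a recovery block: a primary routine runs first, an acceptance test checks its result, and an independently written alternate is tried if the test fails. All routine names and the deliberately faulty primary are hypothetical and introduced only for illustration.

      # Minimal recovery-block sketch: primary version, acceptance test, alternate version.
      def acceptance_test(result):
          # Application-specific check; here we only require a non-negative value.
          return result is not None and result >= 0.0

      def primary_sqrt(x):
          # Primary version: assume a design fault that appears for large inputs.
          if x > 1.0e6:
              return -1.0  # simulated faulty output
          return x ** 0.5

      def alternate_sqrt(x):
          # Independently designed alternate: slower bisection search.
          lo, hi = 0.0, max(1.0, x)
          for _ in range(200):
              mid = 0.5 * (lo + hi)
              if mid * mid < x:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      def recovery_block(x):
          for version in (primary_sqrt, alternate_sqrt):
              result = version(x)  # state would be restored from a checkpoint before each retry
              if acceptance_test(result):
                  return result
          raise RuntimeError("all versions failed the acceptance test")

      print(recovery_block(2.0))      # primary passes the acceptance test
      print(recovery_block(2.0e7))    # primary fails, alternate supplies the result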

  15. Research Supporting Satellite Communications Technology

    NASA Technical Reports Server (NTRS)

    Horan, Stephen; Lyman, Raphael

    2005-01-01

    This report describes the second year of research effort under the grant Research Supporting Satellite Communications Technology. The research program consists of two major projects: Fault Tolerant Link Establishment and the design of an Auto-Configurable Receiver. The Fault Tolerant Link Establishment protocol is being developed to help designers of satellite clusters manage inter-satellite communications. During this second year, the basic protocol design was validated with an extensive testing program. After this testing was completed, a channel error model was added to the protocol so that the effects of channel errors could be measured; this error generation was used to test the effects of channel errors on Heartbeat and Token message passing. The C-language source code for the protocol modules was delivered to Goddard Space Flight Center for integration with the GSFC testbed. The need for a receiver autoconfiguration capability arises because, when a satellite-to-ground transmission is interrupted by an unexpected event, the satellite transponder may reset to an unknown state and begin transmitting in a new mode. During Year 2, we completed testing of these algorithms when noise-induced bit errors were introduced. We also developed and tested an algorithm for estimating the data rate, assuming an NRZ-formatted signal corrupted by additive white Gaussian noise, and we took initial steps in integrating both algorithms into the SDR testbed at GSFC.

  16. Improved silica-PLC Mach-Zehnder interferometer type optical switches with error dependence compensation of directional coupler

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Yi, Jia; Guo, Lijun; Liu, Peng; Hall, Trevor J.; Sun, DeGui

    2017-03-01

    For the Mach-Zehnder interferometer (MZI), the most popular structure for planar lightwave circuit (PLC) 2×2 thermo-optic switches, a full range of splitting-ratio errors of the directional coupler (DC) is investigated. The parameters determining the splitting ratio are the dimensions and refractive indices of the waveguide core and cladding layers. In this work, the relationships between the waveguide dimensions and the refractive indices are analyzed; numerical calculations and BPM simulations then reveal the error compensation between the width and the refractive index of the waveguide core, as well as the controllable effect of the over-cladding refractive index error on the MZI-type optical switch. An MZI-type 2×2 thermo-optic switch with higher error tolerance is then established through efficient optimization of all the 3 dB DC parameters. As a result, for the symmetric MZI switch, an insertion loss of 1.5 dB and an optical extinction ratio of over 20 dB are realized for an average tolerance of ±5.0%. An asymmetric arm optical phase and unequal arm lengths are also employed to improve the uniformity of the insertion loss. The agreement between design and experiment supports wide adoption in practical silica-PLC optical switch products.

  17. Algorithm-Based Fault Tolerance Integrated with Replication

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2008-01-01

    In a proposed approach to programming and utilization of commercial off-the-shelf computing equipment, a combination of algorithm-based fault tolerance (ABFT) and replication would be utilized to obtain high degrees of fault tolerance without incurring excessive costs. The basic idea of the proposed approach is to integrate ABFT with replication such that the algorithmic portions of computations would be protected by ABFT, and the logical portions by replication. ABFT is an extremely efficient, inexpensive, high-coverage technique for detecting and mitigating faults in computer systems used for algorithmic computations, but does not protect against errors in logical operations surrounding algorithms.
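
    A common textbook instance of ABFT is checksum-protected matrix multiplication. The sketch below is a generic illustration of that idea, not the specific scheme proposed in the record above: the factor matrices are extended with checksum rows and columns, and a violated checksum after the multiply signals a fault in the algorithmic portion of the computation.

      import numpy as np

      def abft_matmul(A, B, tol=1e-8):
          """Multiply A @ B with row/column checksums and report their consistency."""
          Ac = np.vstack([A, A.sum(axis=0)])                   # append column-checksum row
          Br = np.hstack([B, B.sum(axis=1, keepdims=True)])    # append row-checksum column
          C = Ac @ Br                                          # full checksum product
          data = C[:-1, :-1]                                   # the ordinary product A @ B
          row_ok = np.allclose(C[-1, :-1], data.sum(axis=0), atol=tol)
          col_ok = np.allclose(C[:-1, -1], data.sum(axis=1), atol=tol)
          return data, (row_ok and col_ok)

      A = np.random.rand(4, 3)
      B = np.random.rand(3, 5)
      C, ok = abft_matmul(A, B)
      print("checksums consistent:", ok)   # a transient fault in the multiply would break them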

  18. Aperture tolerances for neutron-imaging systems in inertial confinement fusion.

    PubMed

    Ghilea, M C; Sangster, T C; Meyerhofer, D D; Lerche, R A; Disdier, L

    2008-02-01

    Neutron-imaging systems are being considered as an ignition diagnostic for the National Ignition Facility (NIF) [Hogan et al., Nucl. Fusion 41, 567 (2001)]. Given the importance of these systems, a neutron-imaging design tool is being used to quantify the effects of aperture fabrication and alignment tolerances on reconstructed neutron images for inertial confinement fusion. The simulations indicate that alignment tolerances of more than 1 mrad would introduce measurable features in a reconstructed image for both pinholes and penumbral aperture systems. These simulations further show that penumbral apertures are several times less sensitive to fabrication errors than pinhole apertures.

  19. Fault-tolerant linear optical quantum computing with small-amplitude coherent states.

    PubMed

    Lund, A P; Ralph, T C; Haselgrove, H L

    2008-01-25

    Quantum computing using two coherent states as a qubit basis is a proposed alternative architecture with lower overheads, but it has been questioned as a practical way of performing quantum computing due to the fragility of diagonal states with large coherent amplitudes. We show that, with error correction, only small amplitudes (alpha > 1.2) are required for fault-tolerant quantum computing. We study fault tolerance under the effects of small amplitudes and loss using a Monte Carlo simulation. The resources required at the first encoding level are orders of magnitude lower than those of the best single-photon scheme.

  20. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".

  1. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.

  2. Analysis and design of algorithm-based fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. Sukumaran

    1990-01-01

    An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.

  3. Neural Network Technique for Global Ocean Color (Chl-a) Estimates Bridging Multiple Satellite Missions

    NASA Astrophysics Data System (ADS)

    Garraffo, Z. D.; Nadiga, S.; Krasnopolsky, V.; Mehra, A.; Bayler, E. J.; Kim, H. C.; Behringer, D.

    2016-02-01

    A Neural Network (NN) technique is used to produce consistent global ocean color estimates, bridging multiple satellite ocean color missions by linking ocean color variability - primarily driven by biological processes - with the physical processes of the upper ocean. Satellite-derived surface variables - sea-surface temperature (SST) and sea-surface height (SSH) fields - are used as signatures of upper-ocean dynamics. The NN technique employs adaptive weights that are tuned by applying statistical learning (training) algorithms to past data sets, providing robustness with respect to random noise, accuracy, fast emulation, and fault tolerance. This study employs Sea-viewing Wide Field-of-View Sensor (SeaWiFS) chlorophyll-a data for 1998-2010 in conjunction with satellite SSH and SST fields. After interpolating all data sets to the same two-degree latitude-longitude grid, the annual mean was removed and monthly anomalies were extracted. The NN was trained on the even years of that period and tested for errors and bias on the odd years. The NN outputs are assessed for: (i) bias, (ii) variability, (iii) root-mean-square error (RMSE), and (iv) cross-correlation. A Jacobian is evaluated to estimate the impact of each input (SSH, SST) on the NN chlorophyll-a estimates. The differences between an ensemble of NNs and a single NN are examined. After the NN is trained for the SeaWiFS period, it is applied and validated for 2005-2015, a period covered by other satellite missions: the Moderate Resolution Imaging Spectroradiometer (MODIS AQUA) and the Visible Infrared Imaging Radiometer Suite (VIIRS).

  4. Development of Mini-pole Superconducting Undulator

    NASA Astrophysics Data System (ADS)

    Jan, J. C.; Hwang, C. S.; Lin, P. H.; Chang, C. H.; Lin, F. Y.

    2007-01-01

    A mini-pole superconducting undulator with a 15 mm period length (SU15) was developed at the National Synchrotron Radiation Research Center (NSRRC). The coil was wound with a superconducting (SC) NbTi wire of small dimensions and low Cu/SC ratio. The design field strength of SU15 with 158 turns/pole was 1.4 T at 215 A, and the magnet gap was 5.6 mm. Extra trim coils and poles are mounted on the main iron pole; the trim coils directly compensate for the strength error of the peak field. The prototype racetrack iron pole was fabricated via electric discharge machining to produce a complete set of 40 poles. The coil was impregnated with epoxy and wrapped in Kapton to maintain insulation between the coil and the iron pole. A substitute beam duct was built, assembled with the magnet array, and tested in the test Dewar. The conceptual design of the liquid helium (LHe) bath cryostat has to tolerate additional image current and radiation heating on the beam duct.

  5. Adiabatic gate teleportation.

    PubMed

    Bacon, Dave; Flammia, Steven T

    2009-09-18

    The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.

  6. Distributed asynchronous microprocessor architectures in fault tolerant integrated flight systems

    NASA Technical Reports Server (NTRS)

    Dunn, W. R.

    1983-01-01

    The paper discusses the implementation of fault tolerant digital flight control and navigation systems for rotorcraft application. It is shown that in implementing fault tolerance at the systems level using advanced LSI/VLSI technology, aircraft physical layout and flight systems requirements tend to define a system architecture of distributed, asynchronous microprocessors in which fault tolerance can be achieved locally through hardware redundancy and/or globally through application of analytical redundancy. The effects of asynchronism on the execution of dynamic flight software are discussed. It is shown that if the asynchronous microprocessors have knowledge of time, these errors can be significantly reduced through appropriate modifications of the flight software. Finally, the paper extends previous work to show that through the combined use of time referencing and stable flight algorithms, individual microprocessors can be configured to autonomously tolerate intermittent faults.

  7. Mapping of a major QTL for salt tolerance of mature field-grown maize plants based on SNP markers.

    PubMed

    Luo, Meijie; Zhao, Yanxin; Zhang, Ruyang; Xing, Jinfeng; Duan, Minxiao; Li, Jingna; Wang, Naishun; Wang, Wenguang; Zhang, Shasha; Chen, Zhihui; Zhang, Huasheng; Shi, Zi; Song, Wei; Zhao, Jiuran

    2017-08-15

    Salt stress significantly restricts plant growth and production. Maize is an important food and economic crop but is also salt sensitive. Identification of the genetic architecture controlling salt tolerance helps breeders select salt-tolerant lines. However, the critical quantitative trait loci (QTLs) responsible for the salt tolerance of field-grown maize plants are still unknown. To map the main genetic factors contributing to salt tolerance in mature maize, a double haploid population (240 individuals) and 1317 single nucleotide polymorphism (SNP) markers were employed to produce a genetic linkage map covering 1462.05 cM. Plant height of mature maize cultivated in the saline field (SPH) and a plant height-based salt tolerance index (ratio of plant height between saline and control fields, PHI) were used to evaluate the salt tolerance of mature maize plants. A major QTL for SPH was detected on Chromosome 1 with a LOD score of 22.4, which explained 31.2% of the phenotypic variation. In addition, the major QTL conditioning PHI was also mapped at the same position on Chromosome 1, and two candidate genes involved in ion homeostasis were identified within the confidence interval of this QTL. The detection of this major QTL in adult maize plants establishes the basis for map-based cloning of genes associated with salt tolerance and provides a potential target for marker-assisted selection in developing salt-tolerant maize varieties.

  8. Verifiable fault tolerance in measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Fujii, Keisuke; Hayashi, Masahito

    2017-09-01

    Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. This also implies, unfortunately, that verification of the output of the quantum systems is not so trivial, since predicting the output is exponentially hard. As another problem, quantum systems are very sensitive to noise and thus need error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses on fault tolerance, we do not assume any noise model on the resource state; instead, an arbitrary resource state is tested by using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is achieved by a constant-time repetition of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify the experimental quantum error correction.

  9. Assessing the Progress of Trapped-Ion Processors Towards Fault-Tolerant Quantum Computation

    NASA Astrophysics Data System (ADS)

    Bermudez, A.; Xu, X.; Nigmatullin, R.; O'Gorman, J.; Negnevitsky, V.; Schindler, P.; Monz, T.; Poschinger, U. G.; Hempel, C.; Home, J.; Schmidt-Kaler, F.; Biercuk, M.; Blatt, R.; Benjamin, S.; Müller, M.

    2017-10-01

    A quantitative assessment of the progress of small prototype quantum processors towards fault-tolerant quantum computation is a problem of current interest in experimental and theoretical quantum information science. We introduce a necessary and fair criterion for quantum error correction (QEC), which must be achieved in the development of these quantum processors before their sizes are sufficiently big to consider the well-known QEC threshold. We apply this criterion to benchmark the ongoing effort in implementing QEC with topological color codes using trapped-ion quantum processors and, more importantly, to guide the future hardware developments that will be required in order to demonstrate beneficial QEC with small topological quantum codes. In doing so, we present a thorough description of a realistic trapped-ion toolbox for QEC and a physically motivated error model that goes beyond standard simplifications in the QEC literature. We focus on laser-based quantum gates realized in two-species trapped-ion crystals in high-optical aperture segmented traps. Our large-scale numerical analysis shows that, with the foreseen technological improvements described here, this platform is a very promising candidate for fault-tolerant quantum computation.

  10. Application of the Exradin W1 scintillator to determine Ediode 60017 and microDiamond 60019 correction factors for relative dosimetry within small MV and FFF fields.

    PubMed

    Underwood, T S A; Rowland, B C; Ferrand, R; Vieillevigne, L

    2015-09-07

    In this work we use EBT3 film measurements at 10 MV to demonstrate the suitability of the Exradin W1 (plastic scintillator) for relative dosimetry within small photon fields. We then use the Exradin W1 to measure the small field correction factors required by two other detectors: the PTW unshielded Ediode 60017 and the PTW microDiamond 60019. We consider on-axis correction-factors for small fields collimated using MLCs for four different TrueBeam energies: 6 FFF, 6 MV, 10 FFF and 10 MV. We also investigate percentage depth dose and lateral profile perturbations. In addition to high-density effects from its silicon sensitive region, the Ediode exhibited a dose-rate dependence and its known over-response to low energy scatter was found to be greater for 6 FFF than 6 MV. For clinical centres without access to a W1 scintillator, we recommend the microDiamond over the Ediode and suggest that 'limits of usability', field sizes below which a detector introduces unacceptable errors, can form a practical alternative to small-field correction factors. For a dosimetric tolerance of 2% on-axis, the microDiamond might be utilised down to 10 mm and 15 mm field sizes for 6 MV and 10 MV, respectively.

  11. Biaxial Anisotropic Material Development and Characterization using Rectangular to Square Waveguide

    DTIC Science & Technology

    2015-03-26

    [Figure 29: Measurement setup with test port cables and network analyzer.] The VNA and the waveguide adapters are torqued to specification with calibrated torque wrenches, and waveguide flanges are aligned using precision alignment pins. A TRL calibration is performed prior to measuring the sample. … set to 0.0001. This enables the frequency domain solver to refine the mesh until the tolerance is achieved. Tightening the error tolerance results in …

  12. Low-Power Fault Tolerance for Spacecraft FPGA-Based Numerical Computing

    DTIC Science & Technology

    2006-09-01

    … undesirable, are not necessarily harmful. Our intent is to prevent errors by properly managing faults. This research focuses on developing fault-tolerant …

  13. Simulation of rare events in quantum error correction

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Vargo, Alexander

    2013-12-01

    We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances, where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
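
    The quoted scaling PL ~ exp[-α(p)d] suggests a simple way to extract the decay rate from simulated data: fit ln PL against the code distance d. The numbers below are made up solely to show the fit; they are not results from the paper.

      import numpy as np

      # Hypothetical logical error probabilities at several code distances.
      d = np.array([4.0, 8.0, 12.0, 16.0, 20.0])
      p_logical = np.array([3.0e-2, 2.0e-3, 1.4e-4, 9.0e-6, 6.0e-7])

      # Linear fit of ln(P_L) versus d gives the decay rate alpha(p) as minus the slope.
      slope, intercept = np.polyfit(d, np.log(p_logical), 1)
      print(f"fitted decay rate alpha = {-slope:.3f}")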

  14. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...

    2015-05-01

    This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  15. Hessian matrix approach for determining error field sensitivity to coil deviations.

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; ...

    2018-03-15

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.

  16. Hessian matrix approach for determining error field sensitivity to coil deviations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.

  17. Field validation of secondary data sources: a novel measure of representativity applied to a Canadian food outlet database.

    PubMed

    Clary, Christelle M; Kestens, Yan

    2013-06-19

    Validation studies of secondary datasets used to characterize neighborhood food businesses generally evaluate how accurately the database represents the true situation on the ground. Depending on the research objectives, the characterization of the business environment may tolerate some inaccuracies (e.g. minor imprecisions in location or errors in business names). Furthermore, if the number of false negatives (FNs) and false positives (FPs) is balanced within a given area, one could argue that the database still provides a "fair" representation of existing resources in this area. Yet, traditional validation measures do not relax matching criteria, and treat FNs and FPs independently. Through the field validation of food businesses found in a Canadian database, this paper proposes alternative criteria for validity. Field validation of the 2010 Enhanced Points of Interest (EPOI) database (DMTI Spatial®) was performed in 2011 in 12 census tracts (CTs) in Montreal, Canada. Some 410 food outlets were extracted from the database and 484 were observed in the field. First, traditional measures of sensitivity and positive predictive value (PPV) accounting for every single mismatch between the field and the database were computed. Second, relaxed measures of sensitivity and PPV that tolerate mismatches in business names or slight imprecisions in location were assessed. A novel measure of representativity that further allows for compensation between FNs and FPs within the same business category and area was proposed. Representativity was computed at CT level as ((TPs +|FPs-FNs|)/(TPs+FNs)), with TPs meaning true positives, and |FPs-FNs| being the absolute value of the difference between the number of FNs and the number of FPs within each outlet category. The EPOI database had a "moderate" capacity to detect an outlet present in the field (sensitivity: 54.5%) or to list only the outlets that actually existed in the field (PPV: 64.4%). Relaxed measures of sensitivity and PPV were respectively 65.5% and 77.3%. The representativity of the EPOI database was 77.7%. The novel measure of representativity might serve as an alternative to traditional validity measures, and could be more appropriate in certain situations, depending on the nature and scale of the research question.
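
    A minimal sketch of the three measures, using the representativity formula exactly as quoted in the abstract; the TP/FP/FN counts below are hypothetical.

      def sensitivity(tp, fn):
          return tp / (tp + fn)

      def positive_predictive_value(tp, fp):
          return tp / (tp + fp)

      def representativity(tp, fp, fn):
          # (TPs + |FPs - FNs|) / (TPs + FNs), per outlet category and census tract.
          return (tp + abs(fp - fn)) / (tp + fn)

      tp, fp, fn = 40, 12, 15   # hypothetical counts for one category in one tract
      print(f"sensitivity      = {sensitivity(tp, fn):.2f}")
      print(f"PPV              = {positive_predictive_value(tp, fp):.2f}")
      print(f"representativity = {representativity(tp, fp, fn):.2f}")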

  18. Final tolerancing approach and the value of short-cutting tolerances by measurement

    NASA Astrophysics Data System (ADS)

    Grupp, Frank; Prieto, Eric; Geis, Norbert; Bode, Andreas; Bodendorf, Christof; Costille, Anne; Katterloher, Reinhard; Penka, Daniela; Bender, Ralf

    2016-07-01

    Within ESA's 2015-2025 Cosmic Vision framework, the 1.2 m aperture EUCLID space telescope addresses cosmological questions related to dark matter and dark energy. Being equipped with two instruments that simultaneously observe patches of > 0.5 square degree on the sky, EUCLID is aiming at major cosmological probes in a large seven-year survey scanning the entire extragalactic sky. These two instruments, the visual light high spatial resolution imager (VIS) and the near infrared spectrometer and photometer (NISP), are separated by a dichroic beam splitter. Its huge field of view (FoV) - larger than the full moon disk - together with high demands on the optical performance and strong requirements on in-flight stability lead to very challenging demands on alignment and post-launch, post-cool-down optical element position. The role of an accurate and trustworthy tolerance analysis, well adapted to the stepwise integration and alignment concept as well as to the mission's stability properties, is therefore crucial for the mission's success. While the previous contributions in this series of papers (e.g. [1]) addressed the technical aspects of tolerancing, the mechanical challenges, and the answers of the NISP instrument to these challenges, this paper focuses on our concept of shortcutting the tolerance chain by measurement wherever useful and possible. The NISP instrument is only possible due to the innovative use of technologies such as computer generated hologram (CGH) based manufacturing and alignment. Expanding this concept, certain steps in the assembly process, such as focal length determination before detector placement, allow the overall tolerance-induced imaging errors to be reduced. With this paper we show three major examples of this shortcutting strategy.

  19. Least-Squares Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1990-01-01

    Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
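
    The behaviour described, fitting a polynomial to uniformly spaced data until a user-specified least-squares error tolerance is met, can be sketched in a few lines. This is a generic reimplementation in Python with NumPy, not the original Quick Basic AKLSQF code.

      import numpy as np

      def fit_within_tolerance(x, y, tol, max_degree=10):
          """Raise the polynomial degree until the RMS fit error drops below tol."""
          for degree in range(max_degree + 1):
              coeffs = np.polyfit(x, y, degree)
              rms_error = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
              if rms_error <= tol:
                  return coeffs, degree, rms_error
          return coeffs, max_degree, rms_error

      x = np.linspace(0.0, 1.0, 21)          # uniformly spaced data
      y = np.sin(2.0 * np.pi * x)
      coeffs, degree, err = fit_within_tolerance(x, y, tol=1.0e-3)
      print(f"degree {degree} polynomial meets the tolerance (RMS error {err:.2e})")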

  20. Reliability issues in active control of large flexible space structures

    NASA Technical Reports Server (NTRS)

    Vandervelde, W. E.

    1986-01-01

    Efforts in this reporting period were centered on four research tasks: design of failure detection filters for robust performance in the presence of modeling errors, design of generalized parity relations for robust performance in the presence of modeling errors, design of failure sensitive observers using the geometric system theory of Wonham, and computational techniques for evaluation of the performance of control systems with fault tolerance and redundancy management.

  1. 40 CFR 174.502 - Bacillus thuringiensis Cry1A.105 protein; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., grain; corn, field, grits; corn, field, meal; corn, field, refined oil; corn, field, stover; corn, sweet... time-limited exemption from the requirement of a tolerance is established for residues of Bacillus... byproducts; cotton, hay; cotton, hulls; cotton, meal; cotton, refined oil; and cotton, undelinted seed when...

  2. 40 CFR 174.502 - Bacillus thuringiensis Cry1A.105 protein; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., grain; corn, field, grits; corn, field, meal; corn, field, refined oil; corn, field, stover; corn, sweet... time-limited exemption from the requirement of a tolerance is established for residues of Bacillus... byproducts; cotton, hay; cotton, hulls; cotton, meal; cotton, refined oil; and cotton, undelinted seed when...

  3. 40 CFR 180.1206 - Aspergillus flavus AF36; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... tolerance is established for residues of Aspergillus flavus AF36 in or on corn, field, forage; corn, field, grain; corn, field, stover; corn, field, aspirated grain fractions; corn, sweet, kernel plus cob with husk removed; corn, sweet, forage; corn, sweet, stover; corn, pop, grain; and corn, pop, stover, when...

  4. 40 CFR 180.1206 - Aspergillus flavus AF36; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... tolerance is established for residues of Aspergillus flavus AF36 in or on corn, field, forage; corn, field, grain; corn, field, stover; corn, field, aspirated grain fractions; corn, sweet, kernel plus cob with husk removed; corn, sweet, forage; corn, sweet, stover; corn, pop, grain; and corn, pop, stover, when...

  5. 40 CFR 180.1206 - Aspergillus flavus AF36; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... tolerance is established for residues of Aspergillus flavus AF36 in or on corn, field, forage; corn, field, grain; corn, field, stover; corn, field, aspirated grain fractions; corn, sweet, kernel plus cob with husk removed; corn, sweet, forage; corn, sweet, stover; corn, pop, grain; and corn, pop, stover, when...

  6. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are measured, and the distance is then calculated from these values. This method achieves much higher theoretical accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly fabricated and installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the optical path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, together with its impact on measurement performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation on ranging accuracy. Based on the system design requirements, element tolerances and an error-correction method for the system are proposed, a ranging system is built, and a ranging experiment is performed. Experimental results show that, with the proposed tolerances, the system can satisfy the accuracy requirement. The present work provides guidance for further research on system design and error distribution.
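
    The distance recovery itself can be illustrated with a simplified sketch. It assumes the common round-trip in-phase condition f_k = k * c / (2L), so that successive in-phase modulation frequencies are spaced by c / (2L); the frequency values are invented for illustration, and this is not the error model analyzed in the paper.

      C_LIGHT = 299_792_458.0  # speed of light in m/s

      def distance_from_in_phase_frequencies(freqs_hz):
          """Estimate target distance from successive in-phase modulation frequencies."""
          spacings = [f2 - f1 for f1, f2 in zip(freqs_hz, freqs_hz[1:])]
          delta_f = sum(spacings) / len(spacings)   # averaging the spacing reduces noise
          return C_LIGHT / (2.0 * delta_f)          # round trip: L = c / (2 * delta_f)

      # Hypothetical in-phase frequencies for a target roughly 15 m away.
      freqs = [9.993e6, 19.986e6, 29.979e6, 39.972e6]
      print(f"estimated distance: {distance_from_in_phase_frequencies(freqs):.3f} m")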

  7. Frame synchronization performance and analysis

    NASA Technical Reports Server (NTRS)

    Aguilera, C. S. R.; Swanson, L.; Pitt, G. H., III

    1988-01-01

    The analysis used to generate the theoretical models showing the performance of the frame synchronizer is described for various frame lengths and marker lengths at various signal to noise ratios and bit error tolerances.

  8. Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D

    DOE PAGES

    Haye, R. J. La; Paz-Soldan, C.; Strait, E. J.

    2015-01-23

    DIII-D experiments show that fully penetrated resonant n=1 error field locked modes in Ohmic plasmas with safety factor q95 ≳ 3 grow to a similar large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n=2/1) static error fields are shielded in Ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption.

  9. On the use of inexact, pruned hardware in atmospheric modelling

    PubMed Central

    Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.

    2014-01-01

    Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
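
    The idea of emulating degraded arithmetic in software can be sketched generically: zero out low-order mantissa bits of selected floating-point results and integrate the Lorenz '96 model to see how far the degraded run drifts from the full-precision one. This bit-truncation stand-in is only an assumption for illustration; it is not the probabilistic pruning model evaluated in the paper.

      import numpy as np

      def truncate_precision(x, keep_bits=12):
          """Keep only `keep_bits` of the 52 mantissa bits of float64 values."""
          ix = np.asarray(x, dtype=np.float64).view(np.int64)
          mask = ~((1 << (52 - keep_bits)) - 1)
          return (ix & mask).view(np.float64)

      def lorenz96_tendency(x, forcing=8.0):
          # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
          return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

      def step(x, dt=0.005, keep_bits=52):
          dxdt = truncate_precision(lorenz96_tendency(x), keep_bits)
          return x + dt * dxdt  # forward Euler, kept simple for the sketch

      x_full = x_degraded = 8.0 + 0.01 * np.random.default_rng(1).standard_normal(40)
      for _ in range(500):
          x_full = step(x_full, keep_bits=52)
          x_degraded = step(x_degraded, keep_bits=12)
      print("RMS divergence after 500 steps:",
            np.sqrt(np.mean((x_full - x_degraded) ** 2)))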

  10. Wave-optical assessment of alignment tolerances in nano-focusing with ellipsoidal mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yumoto, Hirokatsu, E-mail: yumoto@spring8.or.jp; Koyama, Takahisa; Matsuyama, Satoshi

    2016-01-28

    High-precision ellipsoidal mirrors, which can efficiently focus X-rays to nanometer dimensions with a single mirror, have not been realized because of difficulties in the fabrication process. The purpose of our study was to develop nano-focusing ellipsoidal mirrors for the hard X-ray region. We developed a wave-optical focusing simulator for investigating alignment tolerances in nano-focusing with a designed ellipsoidal mirror, which produces a diffraction-limited focus size of 30 × 35 nm² in full width at half maximum at an X-ray energy of 7 keV. The simulator can calculate focusing intensity distributions around the focal point under conditions of misalignment. The wave-optical simulator enabled the calculation of interference intensity distributions, which cannot be predicted by the conventional ray-trace method. The alignment conditions with a focal length error of ≲ ±10 µm, incident angle error of ≲ ±0.5 µrad, and in-plane rotation angle error of ≲ ±0.25 µrad must be satisfied for nano-focusing.

  11. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.

  12. Fixing Stellarator Magnetic Surfaces

    NASA Astrophysics Data System (ADS)

    Hanson, James D.

    1999-11-01

    Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
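
    A small numerical sketch of the SVD step: if a sensitivity matrix A maps the available correction knobs to the resonant island amplitudes, and b holds the island amplitudes driven by the error field, the minimum-norm correction that zeroes the targeted islands is the pseudoinverse solution x = -A⁺b. The matrices below are random placeholders, not a model of any real coil set.

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.standard_normal((3, 8))   # 3 resonant surfaces, 8 correction knobs (placeholder)
      b = rng.standard_normal(3)        # error-driven island amplitudes (placeholder)

      # Truncated-SVD pseudoinverse gives the minimal correction cancelling the islands.
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      s_inv = np.where(s > 1e-10 * s.max(), 1.0 / s, 0.0)
      x = -(Vt.T * s_inv) @ (U.T @ b)

      print("residual island amplitudes:", A @ x + b)   # ~0 at the targeted surfaces
      print("correction norm:", np.linalg.norm(x))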

  13. Experimental evolution reveals high insecticide tolerance in Daphnia inhabiting farmland ponds

    PubMed Central

    Jansen, Mieke; Coors, Anja; Vanoverbeke, Joost; Schepens, Melissa; De Voogt, Pim; De Schamphelaere, Karel A C; De Meester, Luc

    2015-01-01

    Exposure of nontarget populations to agricultural chemicals is an important aspect of global change. We quantified the capacity of natural Daphnia magna populations to locally adapt to insecticide exposure through a selection experiment involving carbaryl exposure and a control. Carbaryl tolerance after selection under carbaryl exposure did not increase significantly compared to the tolerance of the original field populations. However, there was evolution of a decreased tolerance in the control experimental populations compared to the original field populations. The magnitude of this decrease was positively correlated with land use intensity in the neighbourhood of the ponds from which the original populations were sampled. The genetic change in carbaryl tolerance in the control rather than in the carbaryl treatment suggests widespread selection for insecticide tolerance in the field associated with land use intensity and suggests that this evolution comes at a cost. Our data suggest a strong impact of current agricultural land use on nontarget natural Daphnia populations. PMID:26029258

  14. Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    In order to provide a complete description of a material's thermoelectric power factor, in addition to the measured nominal value, an uncertainty interval is required. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, the thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.

  15. Adaptive robust fault tolerant control design for a class of nonlinear uncertain MIMO systems with quantization.

    PubMed

    Ao, Wei; Song, Yongdong; Wen, Changyun

    2017-05-01

    In this paper, we investigate the adaptive control problem for a class of nonlinear uncertain MIMO systems with actuator faults and quantization effects. Under some mild conditions, an adaptive robust fault-tolerant control is developed to compensate for the effects of uncertainties, actuator failures, and errors caused by quantization, and a range of the parameters for these quantizers is established. Furthermore, a Lyapunov-like approach is adopted to demonstrate that the controller guarantees a uniformly ultimately bounded output tracking error and that the signals of the closed-loop system remain bounded, even in the presence of at most m-q actuators being stuck or suffering outage. Finally, numerical simulations are provided to verify and illustrate the effectiveness of the proposed adaptive schemes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures

    NASA Technical Reports Server (NTRS)

    Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.

    1990-01-01

    A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.

  17. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combination can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience in describing the RMS of errors across the field-of-regard (FOR), and second for convenience in combining with additional models. Average and worst-case residual errors are computed when various orders of field-dependent terms are removed from the delay error. The residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally coincident vertices reside with the siderostat; the non-common vertex error (NCVE) is treated as a second example. Finally, the combination of models and various other errors are discussed.

  18. Error Correction for the JLEIC Ion Collider Ring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Guohui; Morozov, Vasiliy; Lin, Fanglei

    2016-05-01

    The sensitivity to misalignment, magnet strength error, and BPM noise is investigated in order to specify design tolerances for the ion collider ring of the Jefferson Lab Electron Ion Collider (JLEIC) project. Those errors, including horizontal, vertical, longitudinal displacement, roll error in transverse plane, strength error of main magnets (dipole, quadrupole, and sextupole), BPM noise, and strength jitter of correctors, cause closed orbit distortion, tune change, beta-beat, coupling, chromaticity problem, etc. These problems generally reduce the dynamic aperture at the Interaction Point (IP). According to real commissioning experiences in other machines, closed orbit correction, tune matching, beta-beat correction, decoupling, and chromaticity correction have been done in the study. Finally, we find that the dynamic aperture at the IP is restored. This paper describes that work.

  19. Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0

    NASA Astrophysics Data System (ADS)

    Millán-Otoya, Jorge E.; Boettcher, Stefan

    2014-03-01

    Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence. These offer a higher optimal error tolerance when compared to typical error-correcting algorithms. A TSC has been translated into a planar Ising spin-glass with constrained bimodal three-spin couplings. This spin-glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible [1]. We approach the study of the error threshold problem by exploring ground states of this spin-glass with the Extremal Optimization algorithm (EO) [2]. EO has proven to be an effective heuristic for exploring ground-state configurations of glassy spin systems [3].
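
    For readers unfamiliar with Extremal Optimization, the following is a minimal τ-EO sketch for an Ising spin glass in Python. The couplings, τ value, and step count are illustrative only; the actual study uses a planar lattice with constrained bimodal three-spin couplings, which is not reproduced here.

    ```python
    import numpy as np

    def tau_eo_ground_state(J, tau=1.4, steps=20000, seed=0):
        """tau-EO heuristic for an Ising spin glass H = -0.5 * s.J.s.

        J is a symmetric coupling matrix with zero diagonal. Returns the lowest
        energy found and the corresponding spin configuration.
        """
        rng = np.random.default_rng(seed)
        n = J.shape[0]
        s = rng.choice([-1, 1], size=n)

        def energy(spins):
            return -0.5 * spins @ J @ spins

        best_s, best_e = s.copy(), energy(s)
        # Rank-selection probabilities P(k) ~ k^(-tau), k = 1..n.
        ranks = np.arange(1, n + 1)
        probs = ranks ** (-tau)
        probs /= probs.sum()

        for _ in range(steps):
            # "Fitness" of each spin: how well it agrees with its local field.
            lam = s * (J @ s)
            order = np.argsort(lam)          # worst (most frustrated) first
            k = rng.choice(n, p=probs)       # power-law rank selection
            s[order[k]] *= -1                # flip the selected spin unconditionally
            e = energy(s)
            if e < best_e:
                best_e, best_s = e, s.copy()
        return best_e, best_s

    # Tiny random +/-J instance just to exercise the routine.
    rng = np.random.default_rng(1)
    n = 32
    J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)
    J = J + J.T
    print(tau_eo_ground_state(J)[0])
    ```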

  20. Tolerant compressed sensing with partially coherent sensing matrices

    NASA Astrophysics Data System (ADS)

    Birnbaum, Tobias; Eldar, Yonina C.; Needell, Deanna

    2017-08-01

    Most compressed sensing (CS) theory to date focuses on incoherent sensing, that is, columns of the sensing matrix are highly uncorrelated. However, sensing systems with naturally occurring correlations arise in many applications, such as signal detection, motion detection, and radar. Moreover, in these applications it is often not necessary to know the support of the signal exactly; instead, small errors in the support and signal are tolerable. Despite the abundance of work utilizing incoherent sensing matrices, for this type of tolerant recovery we suggest that coherence is actually beneficial. We promote the use of coherent sampling when tolerant support recovery is acceptable, and demonstrate its advantages empirically. In addition, we provide a first step towards theoretical analysis by considering a specific reconstruction method for selected signal classes.

  1. Integrated polarization beam splitter with relaxed fabrication tolerances.

    PubMed

    Pérez-Galacho, D; Halir, R; Ortega-Moñux, A; Alonso-Ramos, C; Zhang, R; Runge, P; Janiak, K; Bach, H-G; Steffan, A G; Molina-Fernández, Í

    2013-06-17

    Polarization handling is a key requirement for the next generation of photonic integrated circuits (PICs). Integrated polarization beam splitters (PBS) are central elements for polarization management, but their use in PICs is hindered by poor fabrication tolerances. In this work we present a fully passive, highly fabrication-tolerant polarization beam splitter based on an asymmetrical Mach-Zehnder interferometer (MZI) with a Si/SiO(2) Periodic Layer Structure (PLS) on top of one of its arms. By engineering the birefringence of the PLS we are able to design the MZI arms so that sensitivities to the most critical fabrication errors are greatly reduced. Our PBS design tolerates waveguide width variations of 400 nm while maintaining a polarization extinction ratio better than 13 dB over the complete C-band.

  2. Design Trade-off Between Performance and Fault-Tolerance of Space Onboard Computers

    NASA Astrophysics Data System (ADS)

    Gorbunov, M. S.; Antonov, A. A.

    2017-01-01

    It is well known that there is a trade-off between performance and power consumption in onboard computers. The fault-tolerance is another important factor affecting performance, chip area and power consumption. Involving special SRAM cells and error-correcting codes is often too expensive with relation to the performance needed. We discuss the possibility of finding the optimal solutions for modern onboard computer for scientific apparatus focusing on multi-level cache memory design.

  3. Generating a fault-tolerant global clock using high-speed control signals for the MetaNet architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ofek, Y.

    1994-05-01

    This work describes a new technique, based on exchanging control signals between neighboring nodes, for constructing a stable and fault-tolerant global clock in a distributed system with an arbitrary topology. It is shown that it is possible to construct a global clock reference with a time step that is much smaller than the propagation delay over the network's links. The synchronization algorithm ensures that the global clock 'tick' has a stable periodicity, and therefore it is possible to tolerate failures of links and clocks that operate faster and/or slower than nominally specified, as well as hard failures. The approach taken in this work is to generate a global clock from the ensemble of the local transmission clocks and not to directly synchronize these high-speed clocks. The steady-state algorithm, which generates the global clock, is executed in hardware by the network interface of each node. At the network interface, it is possible to measure accurately the propagation delay between neighboring nodes with a small error or uncertainty and thereby to achieve global synchronization that is proportional to these error measurements. It is shown that the local clock drift (or rate uncertainty) has only a secondary effect on the maximum global clock rate. The synchronization algorithm can tolerate any physical failure. 18 refs.

  4. Using certification trails to achieve software fault tolerance

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Masson, Gerald M.

    1993-01-01

    A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, the result is accepted as correct, otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach was compared to other approaches to fault tolerance. Because of space limitations we have omitted examples of our technique applied to the Huffman tree and convex hull problems. These can be found in the full version of this paper.
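
    To make the two-phase idea concrete, here is a minimal sketch in Python. The paper's application is the minimum spanning tree problem; sorting is used below purely as a simpler stand-in to illustrate how a certification trail lets the second phase recompute and check the answer cheaply while still flagging a corrupted trail.

    ```python
    def phase_one_sort(data):
        """First phase: solve the problem and emit a certification trail.

        Here the trail is the permutation of original indices that sorts the
        input (the paper's actual application is minimum spanning trees).
        """
        trail = sorted(range(len(data)), key=lambda i: data[i])
        result = [data[i] for i in trail]
        return result, trail

    def phase_two_check(data, trail):
        """Second phase: recompute the answer cheaply using the trail.

        It must output either a correct result or an error indication, even
        if the trail it received is corrupted.
        """
        if sorted(trail) != list(range(len(data))):
            return None, "error: trail is not a permutation"
        result = [data[i] for i in trail]
        # Linear-time verification that the trail really yields a sorted order.
        if any(result[i] > result[i + 1] for i in range(len(result) - 1)):
            return None, "error: trail does not certify a sorted order"
        return result, "ok"

    data = [5, 3, 8, 1]
    res1, trail = phase_one_sort(data)
    res2, status = phase_two_check(data, trail)
    assert status == "ok" and res1 == res2   # results agree -> accept
    ```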

  5. The Calibration of Gloss Reference Standards

    NASA Astrophysics Data System (ADS)

    Budde, W.

    1980-04-01

    In present international and national standards for the measurement of specular gloss, the primary and secondary reference standards are defined for monochromatic radiation. However, the specified glossmeter uses polychromatic radiation (CIE Standard Illuminant C) and the CIE Standard Photometric Observer. This produces errors in practical gloss measurements of up to 0.5%. Although this may be considered small compared with the accuracy of most practical gloss measurements, such an error should not be tolerated in the calibration of secondary standards. Corrections for such errors are presented, and various alternatives for amendments of the existing documentary standards are discussed.

  6. Investigating System Dependability Modeling Using AADL

    NASA Technical Reports Server (NTRS)

    Hall, Brendan; Driscoll, Kevin R.; Madl, Gabor

    2013-01-01

    This report describes Architecture Analysis & Design Language (AADL) models for a diverse set of fault-tolerant, embedded data networks and describes the methods and tools used to create these models. It also includes error models per the AADL Error Annex. Some networks were modeled using Error Detection Isolation Containment Types (EDICT). This report gives a brief description of each network, a description of its modeling, the model itself, and evaluations of the tools used for creating the models. The methodology includes a naming convention that supports a systematic way to enumerate all of the potential failure modes.

  7. Recovery of chemical Estimates by Field Inhomogeneity Neighborhood Error Detection (REFINED): Fat/Water Separation at 7T

    PubMed Central

    Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.

    2012-01-01

    Purpose: To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods: Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results: Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion: Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815

  8. Recovery of chemical estimates by field inhomogeneity neighborhood error detection (REFINED): fat/water separation at 7 tesla.

    PubMed

    Narayan, Sreenath; Kalhan, Satish C; Wilson, David L

    2013-05-01

    To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.

  9. Dynamic assertion testing of flight control software

    NASA Technical Reports Server (NTRS)

    Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.

    1985-01-01

    An experiment in using assertions to dynamically test fault tolerant flight software is described. The experiment showed that 87% of typical errors introduced into the program would be detected by assertions. Detailed analysis of the test data showed that the number of assertions needed to detect those errors could be reduced to a minimal set. The analysis also revealed that the most effective assertions tested program parameters that provided greater indirect (collateral) testing of other parameters.

  10. System of tolerances for a solar-tower power station

    NASA Astrophysics Data System (ADS)

    Aparisi, R. R.; Tepliakov, D. I.

    The principles underlying the establishment of a system of tolerances for a solar-tower station are presented. Attention is given to static and dynamic tolerances and deviations for a single heliostat, and geometrical tolerances for a field of heliostats.

  11. Performance and evaluation of real-time multicomputer control systems

    NASA Technical Reports Server (NTRS)

    Shin, K. G.

    1985-01-01

    Three experiments on fault-tolerant multiprocessors (FTMP) were begun. They are: (1) measurement of fault latency in FTMP; (2) validation and analysis of FTMP synchronization protocols; and (3) investigation of error propagation in FTMP.

  12. 7 CFR 283.15 - Procedure for hearing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... evidence, the QC claim against the State agency for a QC error rate in excess of the tolerance level. The... admissible in evidence subject to such objections as to relevancy, materiality or competency of the testimony...

  13. 40 CFR 180.1219 - Foramsulfuron; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... pesticide foramsulfuron is exempted from the requirement of a tolerance in corn, field, grain/corn, field, forage/ corn, field, stover/corn, pop, grain/corn, pop, forage/corn, pop, stover; corn, sweet, forage; corn, sweet, kernel plus cob with husks removed; corn, sweet, stover when applied as a herbicide in...

  14. 40 CFR 180.1219 - Foramsulfuron; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... pesticide foramsulfuron is exempted from the requirement of a tolerance in corn, field, grain/corn, field, forage/ corn, field, stover/corn, pop, grain/corn, pop, forage/corn, pop, stover; corn, sweet, forage; corn, sweet, kernel plus cob with husks removed; corn, sweet, stover when applied as a herbicide in...

  15. 40 CFR 180.1219 - Foramsulfuron; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... pesticide foramsulfuron is exempted from the requirement of a tolerance in corn, field, grain/corn, field, forage/ corn, field, stover/corn, pop, grain/corn, pop, forage/corn, pop, stover; corn, sweet, forage; corn, sweet, kernel plus cob with husks removed; corn, sweet, stover when applied as a herbicide in...

  16. 40 CFR 180.1219 - Foramsulfuron; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pesticide foramsulfuron is exempted from the requirement of a tolerance in corn, field, grain/corn, field, forage/ corn, field, stover/corn, pop, grain/corn, pop, forage/corn, pop, stover; corn, sweet, forage; corn, sweet, kernel plus cob with husks removed; corn, sweet, stover when applied as a herbicide in...

  17. 40 CFR 180.1219 - Foramsulfuron; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... pesticide foramsulfuron is exempted from the requirement of a tolerance in corn, field, grain/corn, field, forage/ corn, field, stover/corn, pop, grain/corn, pop, forage/corn, pop, stover; corn, sweet, forage; corn, sweet, kernel plus cob with husks removed; corn, sweet, stover when applied as a herbicide in...

  18. Closing the Seasonal Ocean Surface Temperature Balance in the Eastern Tropical Oceans from Remote Sensing and Model Reanalyses

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Clayson, C. A.

    2012-01-01

    Residual forcing necessary to close the mixed layer temperature balance (MLTB) on seasonal time scales is largest in regions of strongest surface heat flux forcing. Identifying the dominant source of error - surface heat flux error, mixed layer depth estimation, or ocean dynamical forcing - remains a challenge in the eastern tropical oceans, where ocean processes are very active. Improved sub-surface observations are necessary to better constrain errors. (1) Mixed layer depth evolution is critical to the seasonal evolution of mixed layer temperatures. It determines the inertia of the mixed layer and scales the sensitivity of the MLTB to errors in surface heat flux and ocean dynamical forcing; this role produces timing impacts for errors in SST prediction. (2) Errors in the MLTB are larger than the historical 10 W m-2 target accuracy. In some regions, a lower accuracy can be tolerated if the goal is to resolve the seasonal SST cycle.

  19. A theoretical basis for the analysis of multiversion software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which: (1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and (2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. A condition under which a multiversion system is a better strategy than relying on a single version is given.

  20. A new methodology for vibration error compensation of optical encoders.

    PubMed

    Lopez, Jesus; Artes, Mariano

    2012-01-01

    Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in the position accuracy as the measurement signals depart from the ideal conditions. In case the encoder is working under vibrations, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to the graduation, system, and installation errors. Behavior can be improved with different techniques that try to compensate the error by processing the measurement signals. In this work a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure in which a higher accuracy of the sensor is obtained.

  1. Fast Time-Varying Volume Rendering Using Time-Space Partition (TSP) Tree

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Chiang, Ling-Jen; Ma, Kwan-Liu

    1999-01-01

    We present a new algorithm for rapid rendering of time-varying volumes. A new hierarchical data structure that is capable of capturing both the temporal and the spatial coherence is proposed. Conventional hierarchical data structures such as octrees are effective in characterizing the homogeneity of the field values existing in the spatial domain. However, when treating time merely as another dimension of a time-varying field, difficulties frequently arise due to the discrepancy between the field's spatial and temporal resolutions. In addition, treating the spatial and temporal dimensions equally often prevents the detection of coherence that is unique to the temporal domain. Using the proposed data structure, our algorithm meets the following goals. First, both spatial and temporal coherence are identified and exploited to accelerate the rendering process. Second, our algorithm allows the user to supply the desired error tolerances at run time for the purpose of an image-quality/rendering-speed trade-off. Third, the amount of data that must be loaded into main memory is reduced, and thus the I/O overhead is minimized. This low I/O overhead makes our algorithm suitable for out-of-core applications.
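
    The run-time error-tolerance cutoff can be illustrated with a toy hierarchy. The sketch below is not the TSP tree itself (which combines a time tree with an octree); it only shows how a stored per-node error bound lets traversal stop early once the user-supplied tolerance is met.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        # Hypothetical hierarchical node: a coarse value plus a stored error
        # bound describing how much detail is lost if children are skipped.
        mean: float
        error: float
        children: List["Node"] = field(default_factory=list)

    def traverse(node: Node, tolerance: float, out: List[float]) -> None:
        """Cut off descent wherever the stored error is already within the
        user-supplied tolerance; smaller tolerance = better quality, more work."""
        if node.error <= tolerance or not node.children:
            out.append(node.mean)      # coarse approximation is good enough
            return
        for child in node.children:
            traverse(child, tolerance, out)

    root = Node(0.5, 0.3, [Node(0.4, 0.05),
                           Node(0.7, 0.2, [Node(0.6, 0.0), Node(0.8, 0.0)])])
    samples: List[float] = []
    traverse(root, tolerance=0.1, out=samples)
    print(samples)   # [0.4, 0.6, 0.8]: only the high-error branch was refined
    ```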

  2. Blueprint for a microwave trapped ion quantum computer.

    PubMed

    Lekitsch, Bjoern; Weidt, Sebastian; Fowler, Austin G; Mølmer, Klaus; Devitt, Simon J; Wunderlich, Christof; Hensinger, Winfried K

    2017-02-01

    The availability of a universal quantum computer may have a fundamental impact on a vast number of research fields and on society as a whole. An increasingly large scientific and industrial community is working toward the realization of such a device. An arbitrarily large quantum computer may best be constructed using a modular approach. We present a blueprint for a trapped ion-based scalable quantum computer module, making it possible to create a scalable quantum computer architecture based on long-wavelength radiation quantum gates. The modules control all operations as stand-alone units, are constructed using silicon microfabrication techniques, and are within reach of current technology. To perform the required quantum computations, the modules make use of long-wavelength radiation-based quantum gate technology. To scale this microwave quantum computer architecture to a large size, we present a fully scalable design that makes use of ion transport between different modules, thereby allowing arbitrarily many modules to be connected to construct a large-scale device. A high error-threshold surface error correction code can be implemented in the proposed architecture to execute fault-tolerant operations. With appropriate adjustments, the proposed modules are also suitable for alternative trapped ion quantum computer architectures, such as schemes using photonic interconnects.

  3. Branched-chain amino acid metabolism: from rare Mendelian diseases to more common disorders

    PubMed Central

    Burrage, Lindsay C.; Nagamani, Sandesh C.S.; Campeau, Philippe M.; Lee, Brendan H.

    2014-01-01

    Branched-chain amino acid (BCAA) metabolism plays a central role in the pathophysiology of both rare inborn errors of metabolism and the more common multifactorial diseases. Although deficiency of the branched-chain ketoacid dehydrogenase (BCKDC) and associated elevations in the BCAAs and their ketoacids have been recognized as the cause of maple syrup urine disease (MSUD) for decades, treatment options for this disorder have been limited to dietary interventions. In recent years, the discovery of improved leucine tolerance after liver transplantation has resulted in a new therapeutic strategy for this disorder. Likewise, targeting the regulation of the BCKDC activity may be an alternative potential treatment strategy for MSUD. The regulation of the BCKDC by the branched-chain ketoacid dehydrogenase kinase has also been implicated in a new inborn error of metabolism characterized by autism, intellectual disability and seizures. Finally, there is a growing body of literature implicating BCAA metabolism in more common disorders such as the metabolic syndrome, cancer and hepatic disease. This review surveys the knowledge acquired on the topic over the past 50 years and focuses on recent developments in the field of BCAA metabolism. PMID:24651065

  4. Design optimization of ultra-precise elliptical mirrors for hard x-ray nanofocusing at Nanoscopium

    NASA Astrophysics Data System (ADS)

    Kewish, Cameron M.; Polack, François; Signorato, Riccardo; Somogyi, Andrea

    2013-09-01

    The design and implementation of a pair of 100 mm-long grazing-incidence total-reflection mirrors for the hard X-ray beamline Nanoscopium at Synchrotron Soleil is presented. A vertically and horizontally nanofocusing mirror pair, oriented in Kirkpatrick-Baez geometry, has been designed and fabricated with the aim of creating a diffraction-limited high-intensity 5-20 keV beam with a focal spot size as small as 50 nm. We describe the design considerations, including wave-optical calculations of figures-of-merit that are relevant for spectromicroscopy, such as the focal spot size, depth of field and integrated intensity. The mechanical positioning tolerance in the pitch angle that is required to avoid introducing high-intensity features in the neighborhood of the focal spot is demonstrated with simulations to be of the order of microradians, becoming tighter for shorter focal lengths and therefore directly affecting all nanoprobe mirror systems. Metrology results for the completed mirrors are presented, showing that a figure error better than 1.5 Å RMS has been achieved over the full mirror lengths with respect to the designed elliptical surfaces, with slope errors of less than 60 nrad RMS.

  5. An ultrashort throw ratio projection lens design based on a catadioptric structure

    NASA Astrophysics Data System (ADS)

    Wang, Hsiu-Cheng; Pan, Jui-Wen

    2018-07-01

    In this paper, we present a rotationally symmetric ultrashort throw (UST) lens with an offset field. The UST lens has a throw ratio of 0.23 and a total track of 195 mm. The optical elements of the UST lens comprise two parts. The first is a catadioptric projection lens, where the catadioptric function permits an ultrashort throw ratio and a short total track while requiring fewer lens elements. The second part is a collimating lens which takes advantage of the telecentric condition to generate uniform total internal reflection (TIR) in the TIR prism. With this design, an effective focal length of -1.96 mm and an f-number of 2.4 can be obtained. The root mean square spot size and lateral colour of all fields are smaller than one pixel in size. The maximum optical distortion of -0.97% and TV distortion of 0.2% are acceptable. In terms of image quality, the modulation transfer function (MTF) values for all fields are above 0.65 at 0.245 line pairs/mm. Even when the tolerance error is considered, the MTF values for all fields remain above 0.3. The suitability of the novel UST lens design for projection applications is discussed.

  6. Robust fault tolerant control based on sliding mode method for uncertain linear systems with quantization.

    PubMed

    Hao, Li-Ying; Yang, Guang-Hong

    2013-09-01

    This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition yields stronger fault tolerance and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances, and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a rocket fairing structural-acoustic model. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  7. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    NASA Astrophysics Data System (ADS)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the amount of memory overhead needed to store the virtual files before they are flushed to disk.
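
    A minimal sketch of the fixed-accuracy use of ZFP described above, assuming the zfpy Python bindings expose compress_numpy/decompress_numpy with a tolerance argument; the synthetic field and tolerance values are placeholders, not data from the simulations.

    ```python
    import numpy as np
    import zfpy   # assumes the zfpy Python bindings for the ZFP library

    # Synthetic stand-in for one 3D model field (e.g., vorticity magnitude).
    rng = np.random.default_rng(0)
    field = rng.normal(size=(64, 64, 64)).astype(np.float32)
    field = np.cumsum(field, axis=0)          # add some spatial correlation

    for tol in (1e-1, 1e-2, 1e-3):
        # Fixed-accuracy mode: ZFP keeps the absolute error of every value
        # below the requested tolerance.
        compressed = zfpy.compress_numpy(field, tolerance=tol)
        restored = zfpy.decompress_numpy(compressed)
        ratio = field.nbytes / len(compressed)
        max_err = np.abs(field - restored).max()
        print(f"tolerance={tol:g}  ratio={ratio:5.1f}:1  max abs error={max_err:.2e}")
    ```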

  8. Modified zirconium-eriochrome cyanine R determination of fluoride

    USGS Publications Warehouse

    Thatcher, L.L.

    1957-01-01

    The Eriochrome Cyanine R method for determining fluoride in natural water has been modified to provide a single, stable reagent solution, eliminate interference from oxidizing agents, extend the concentration range to 3 p.p.m., and extend the phosphate tolerance. Temperature effect was minimized; sulfate error was eliminated by precipitation. The procedure is sufficiently tolerant to interferences found in natural and polluted waters to permit the elimination of prior distillation for most samples. The method has been applied to 500 samples.

  9. New-Sum: A Novel Online ABFT Scheme For General Iterative Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram

    Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover from soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
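
    The core checksum idea can be sketched for a single matrix-vector product. The code below is a simplified classical ABFT check, not the paper's specific online encoding or its decoupled checksum update; the tolerance and injected error are illustrative.

    ```python
    import numpy as np

    def abft_matvec(A, x, tol=1e-8, inject_error=False):
        """Checksum-protected matrix-vector product (simplified ABFT sketch).

        A checksum row (the column sums of A) is appended before the product;
        after computing y = A x, the extra entry must equal sum(y) up to
        round-off, otherwise an arithmetic/memory error is flagged.
        """
        A_enc = np.vstack([A, A.sum(axis=0)])     # encode: append checksum row
        y_enc = A_enc @ x
        if inject_error:
            y_enc[0] += 1.0                        # simulate a soft error
        y, checksum = y_enc[:-1], y_enc[-1]
        if abs(y.sum() - checksum) > tol * max(1.0, abs(checksum)):
            raise RuntimeError("ABFT check failed: soft error detected")
        return y

    rng = np.random.default_rng(0)
    A, x = rng.normal(size=(100, 100)), rng.normal(size=100)
    abft_matvec(A, x)                      # passes silently
    try:
        abft_matvec(A, x, inject_error=True)
    except RuntimeError as e:
        print(e)                           # the injected error is detected
    ```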

  10. Software fault-tolerance by design diversity DEDIX: A tool for experiments

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.

  11. Prescribed-performance fault-tolerant control for feedback linearisable systems with an aircraft application

    NASA Astrophysics Data System (ADS)

    Gao, Gang; Wang, Jinzhi; Wang, Xianghua

    2017-05-01

    This paper investigates fault-tolerant control (FTC) for feedback linearisable systems (FLSs) and its application to an aircraft. To ensure desired transient and steady-state behaviours of the tracking error under actuator faults, the dynamic effect caused by the actuator failures on the error dynamics of a transformed model is analysed, and three control strategies are designed. The first FTC strategy is proposed as a robust controller, which relies on the explicit information about several parameters of the actuator faults. To eliminate the need for these parameters and the input chattering phenomenon, the robust control law is later combined with the adaptive technique to generate the adaptive FTC law. Next, the adaptive control law is further improved to achieve the prescribed performance under more severe input disturbance. Finally, the proposed control laws are applied to an air-breathing hypersonic vehicle (AHV) subject to actuator failures, which confirms the effectiveness of the proposed strategies.

  12. Implementing a strand of a scalable fault-tolerant quantum computing fabric.

    PubMed

    Chow, Jerry M; Gambetta, Jay M; Magesan, Easwar; Abraham, David W; Cross, Andrew W; Johnson, B R; Masluk, Nicholas A; Ryan, Colm A; Smolin, John A; Srinivasan, Srikanth J; Steffen, M

    2014-06-24

    With favourable error thresholds and requiring only nearest-neighbour interactions on a lattice, the surface code is an error-correcting code that has garnered considerable attention. At the heart of this code is the ability to perform a low-weight parity measurement of local code qubits. Here we demonstrate high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. With high-fidelity gates, we generate entanglement distributed across three superconducting qubits in a lattice where each code qubit is coupled to two bus resonators. Via high-fidelity measurement of the syndrome qubit, we deterministically entangle the code qubits in either an even or odd parity Bell state, conditioned on the syndrome qubit state. Finally, to fully characterize this parity readout, we develop a measurement tomography protocol. The lattice presented naturally extends to larger networks of qubits, outlining a path towards fault-tolerant quantum computing.

  13. A second generation experiment in fault-tolerant software

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    Information was collected on the efficacy of fault-tolerant software by conducting two large-scale controlled experiments. In the first, an empirical study of multi-version software (MVS) was conducted. The second experiment is an empirical evaluation of self-testing as a method of error detection (STED). The purpose of the MVS experiment was to obtain empirical measurements of the performance of multi-version systems. Twenty versions of a program were prepared at four different sites under reasonably realistic development conditions from the same specifications. The purpose of the STED experiment was to obtain empirical measurements of the performance of assertions in error detection. Eight versions of a program were modified to include assertions at two different sites under controlled conditions. The overall structure of the testing environment for the MVS experiment and its status are described. Work to date in the STED experiment is also presented.

  14. Active stabilization of error field penetration via control field and bifurcation of its stable frequency range

    NASA Astrophysics Data System (ADS)

    Inoue, S.; Shiraishi, J.; Takechi, M.; Matsunaga, G.; Isayama, A.; Hayashi, N.; Ide, S.

    2017-11-01

    An active stabilization effect of a rotating control field against error field penetration is numerically studied. We have developed a resistive magnetohydrodynamic code, 'AEOLUS-IT', which can simulate plasma responses to rotating/static external magnetic fields. Adopting a non-uniform flux coordinate system, the AEOLUS-IT simulation can employ the high magnetic Reynolds number conditions relevant to present tokamaks. With AEOLUS-IT, we successfully clarified the stabilization mechanism of the control field against error field penetration. Physical processes of a plasma rotation drive via the control field are demonstrated by the nonlinear simulation, which reveals that the rotation amplitude at a resonant surface is not a monotonic function of the control field frequency, but has an extremum. Consequently, two 'bifurcated' frequency ranges of the control field are found for the stabilization of the error field penetration.

  15. How the Hilbert integral theorem inspired flow lines

    NASA Astrophysics Data System (ADS)

    Winston, Roland; Jiang, Lun

    2017-09-01

    Nonimaging optics has been shown to achieve the theoretical limits constrained only by thermodynamic principles. The design principles of nonimaging optics allow a non-conventional way of thinking about and generating new optical devices. Compared to conventional imaging optics, which rarely utilizes the framework of thermodynamic arguments, nonimaging optics chooses to map etendue instead of rays. This fundamental shift of design paradigm frees the optical design from ray-based approaches, which rely heavily on error tolerance analysis. Instead, the underlying thermodynamic principles guide the nonimaging design to be naturally suited to extended light sources for illumination, non-tracking concentrators, and sensors that require sharp cut-off angles. We argue in this article that such optical devices, which have enabled a multitude of applications, depend on probabilities, the geometric flux field, and radiative heat transfer, while "optics" in the conventional sense recedes into the background.

  16. Technique for Very High Order Nonlinear Simulation and Validation

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2001-01-01

    Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high-order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness decays exponentially unless higher accuracy is used. It is concluded that at least a 7th-order method is required to efficiently propagate a harmonic wave, using the nonlinear Euler equations, to a distance of 5 wavelengths while maintaining an overall error tolerance that is low enough to capture both the mean flow and the acoustics.

  17. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
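
    The extrapolation step mentioned above can be written as a binomial sum over error weights. The sketch below uses hypothetical per-weight failure rates and block length; it only illustrates how measured conditional failure rates convert into a block error rate at an arbitrarily small physical error probability.

    ```python
    from math import comb

    def block_error_rate(p, n, p_fail_by_weight):
        """Extrapolate the block error rate at physical error probability p
        from simulated failure rates conditioned on error weight.

        p_fail_by_weight[w] is the (measured) probability that an error of
        weight w is uncorrectable; weights not listed are assumed to fail
        with probability 1 (a conservative, hypothetical choice).
        """
        total = 0.0
        for w in range(n + 1):
            p_w = comb(n, w) * p**w * (1 - p)**(n - w)   # binomial weight prob.
            total += p_w * p_fail_by_weight.get(w, 1.0)
        return total

    # Hypothetical measured failure rates for a length-64 code block.
    p_fail = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.01, 4: 0.08, 5: 0.25, 6: 0.55}
    for p in (1e-2, 1e-3, 1e-4):
        print(f"p = {p:g}  extrapolated BER = {block_error_rate(p, 64, p_fail):.3e}")
    ```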

  18. 40 CFR 180.469 - Dichlormid; tolerances for residues.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Parts per million Corn, field, forage 0.05 Corn, field, grain 0.05 Corn, field, stover 0.05 Corn, pop, grain 0.05 Corn, pop, stover 0.05 Corn, sweet, forage 0.05 Corn, sweet, kernel plus cob with husks removed 0.05 Corn, sweet, stover 0.05 (b) Section 18 emergency exemptions. [Reserved] (c) Tolerances with...

  19. 40 CFR 180.1206 - Aspergillus flavus AF36; exemption from the requirement of a tolerance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... flavis AF 36 is temporarily exempt from the requirement of a tolerance on corn, field, forage; corn, field, grain; corn, field, stover; corn, pop, grain; corn, pop, stover; corn, sweet, forage; corn, sweet, kernel plus cob with husks removed; corn, sweet, stover when used in accordance with the Experimental Use...

  20. 40 CFR 180.469 - Dichlormid; tolerances for residues.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Parts per million Corn, field, forage 0.05 Corn, field, grain 0.05 Corn, field, stover 0.05 Corn, pop, grain 0.05 Corn, pop, stover 0.05 Corn, sweet, forage 0.05 Corn, sweet, kernel plus cob with husks removed 0.05 Corn, sweet, stover 0.05 (b) Section 18 emergency exemptions. [Reserved] (c) Tolerances with...

  1. 40 CFR 180.469 - Dichlormid; tolerances for residues.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Parts per million Corn, field, forage 0.05 Corn, field, grain 0.05 Corn, field, stover 0.05 Corn, pop, grain 0.05 Corn, pop, stover 0.05 Corn, sweet, forage 0.05 Corn, sweet, kernel plus cob with husks removed 0.05 Corn, sweet, stover 0.05 (b) Section 18 emergency exemptions. [Reserved] (c) Tolerances with...

  2. 40 CFR 180.469 - Dichlormid; tolerances for residues.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Parts per million Corn, field, forage 0.05 Corn, field, grain 0.05 Corn, field, stover 0.05 Corn, pop, grain 0.05 Corn, pop, stover 0.05 Corn, sweet, forage 0.05 Corn, sweet, kernel plus cob with husks removed 0.05 Corn, sweet, stover 0.05 (b) Section 18 emergency exemptions. [Reserved] (c) Tolerances with...

  3. Efficient Variational Quantum Simulator Incorporating Active Error Minimization

    NASA Astrophysics Data System (ADS)

    Li, Ying; Benjamin, Simon C.

    2017-04-01

    One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
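
    The "boost and extrapolate" idea amounts to measuring an observable at several artificially amplified noise levels and extrapolating a fit back to zero noise. A minimal sketch with made-up numbers:

    ```python
    import numpy as np

    # Hypothetical measured expectation values of some observable, obtained by
    # artificially boosting the error rate by factors c (e.g., 1x, 2x, 3x the
    # native noise). A least-squares fit is then evaluated at c = 0 to estimate
    # the zero-error value (Richardson-style extrapolation).
    boost_factors = np.array([1.0, 2.0, 3.0])
    measured = np.array([0.842, 0.731, 0.645])       # made-up noisy values

    coeffs = np.polyfit(boost_factors, measured, deg=1)   # linear noise model
    zero_noise_estimate = np.polyval(coeffs, 0.0)
    print(f"extrapolated zero-noise expectation value: {zero_noise_estimate:.3f}")
    ```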

  4. Error field detection in DIII-D by magnetic steering of locked modes

    DOE PAGES

    Shiraki, Daisuke; La Haye, Robert J.; Logan, Nikolas C.; ...

    2014-02-20

    Optimal correction coil currents for the n = 1 intrinsic error field of the DIII-D tokamak are inferred by applying a rotating external magnetic perturbation to steer the phase of a saturated locked mode with poloidal/toroidal mode number m/n = 2/1. The error field is detected non-disruptively in a single discharge, based on the toroidal torque balance of the resonant surface, which is assumed to be dominated by the balance of resonant electromagnetic torques. This is equivalent to the island being locked at all times to the resonant 2/1 component of the total of the applied and intrinsic error fields,more » such that the deviation of the locked mode phase from the applied field phase depends on the existing error field. The optimal set of correction coil currents is determined to be those currents which best cancels the torque from the error field, based on fitting of the torque balance model. The toroidal electromagnetic torques are calculated from experimental data using a simplified approach incorporating realistic DIII-D geometry, and including the effect of the plasma response on island torque balance based on the ideal plasma response to external fields. This method of error field detection is demonstrated in DIII-D discharges, and the results are compared with those based on the onset of low-density locked modes in ohmic plasmas. Furthermore, this magnetic steering technique presents an efficient approach to error field detection and is a promising method for ITER, particularly during initial operation when the lack of auxiliary heating systems makes established techniques based on rotation or plasma amplification unsuitable.« less

  5. 32-Bit-Wide Memory Tolerates Failures

    NASA Technical Reports Server (NTRS)

    Buskirk, Glenn A.

    1990-01-01

    An electronic memory system of 32-bit words corrects bit errors caused by some common types of failures - even the failure of an entire 4-bit-wide random-access-memory (RAM) chip. It detects the failure of two such chips, so the user is warned that the output of the memory may contain errors. The system includes eight 4-bit-wide DRAMs configured so that each bit of each DRAM is assigned to a different one of four parallel 8-bit words. Each DRAM therefore contributes only 1 bit to each 8-bit word.
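
    A small sketch of the bit-interleaving property the record describes: because each chip supplies one bit to each word, a whole-chip failure corrupts at most one bit per word, which a per-word single-error-correcting code can then repair. The word values below are arbitrary.

    ```python
    # Eight 4-bit-wide DRAMs, each contributing exactly one bit to each of
    # four parallel 8-bit words (8 chips x 4 bits = four 8-bit words).
    NUM_CHIPS, WORDS = 8, 4

    def scatter(words):
        """Chip c stores bit c of word w in its cell w."""
        return [[(words[w] >> c) & 1 for w in range(WORDS)]
                for c in range(NUM_CHIPS)]

    def gather(chips):
        """Reassemble the four 8-bit words from the chip cells."""
        return [sum(chips[c][w] << c for c in range(NUM_CHIPS))
                for w in range(WORDS)]

    words = [0b10110010, 0b01011101, 0b11110000, 0b00001111]
    chips = scatter(words)
    chips[3] = [1 - b for b in chips[3]]          # whole-chip failure (chip 3)
    corrupted = gather(chips)
    for orig, bad in zip(words, corrupted):
        assert bin(orig ^ bad).count("1") == 1    # at most one bit error per word
    print("each word differs in exactly one bit -> correctable by a SEC code")
    ```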

  6. Measurement-based quantum communication with resource states generated by entanglement purification

    NASA Astrophysics Data System (ADS)

    Wallnöfer, J.; Dür, W.

    2017-01-01

    We investigate measurement-based quantum communication with noisy resource states that are generated by entanglement purification. We consider the transmission of encoded information via noisy quantum channels using a measurement-based implementation of encoding, error correction, and decoding. We show that such an approach offers advantages over direct transmission, gate-based error correction, and measurement-based schemes with direct generation of resource states. We analyze the noise structure of resource states generated by entanglement purification and show that a local error model, i.e., noise acting independently on all qubits of the resource state, is a good approximation in general, and provides an exact description for Greenberger-Horne-Zeilinger states. The latter are resources for a measurement-based implementation of error-correction codes for bit-flip or phase-flip errors. This provides an approach to link the recently found very high thresholds for fault-tolerant measurement-based quantum information processing based on local error models for resource states with error thresholds for gate-based computational models.

  7. Self-checking self-repairing computer nodes using the mirror processor

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval

    1992-01-01

    Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle in which the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities, are described.

  8. Fault tolerance issues in nanoelectronics

    NASA Astrophysics Data System (ADS)

    Spagocci, S. M.

    The astonishing success story of microelectronics cannot go on indefinitely. In fact, once devices reach the few-atom scale (nanoelectronics), transient quantum effects are expected to impair their behaviour. Fault-tolerant techniques will then be required. The aim of this thesis is to investigate the problem of transient errors in nanoelectronic devices. Transient error rates are estimated for a selection of nanoelectronic gates, based upon quantum cellular automata and single-electron devices, in which the electrostatic interaction between electrons is used to create Boolean circuits. On the basis of such results, various fault-tolerant solutions are proposed, for both logic and memory nanochips. As for logic chips, traditional techniques are found to be unsuitable. A new technique, in which the voting approach of triple modular redundancy (TMR) is extended by cascading TMR units composed of nanogate clusters, is proposed and generalised to other voting approaches. For memory chips, an error-correcting code approach is found to be suitable. Various codes are considered and a lookup table approach is proposed for encoding and decoding. We are then able to give estimates of the redundancy level to be provided on nanochips so as to make their mean time between failures acceptable. It is found that, for logic chips, space redundancies up to a few tens are required if mean times between failures have to be of the order of a few years. Space redundancy can also be traded for time redundancy. As for memory chips, mean times between failures of the order of a few years are found to imply both space and time redundancies of the order of ten.
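
    To illustrate the cascaded-TMR idea, here is a toy Python model in which each module may flip one bit with some probability and the voters are themselves replicated level by level. The error probability, word width, and number of levels are arbitrary choices, not figures from the thesis.

    ```python
    import random

    def majority(a, b, c):
        """Bitwise 2-of-3 majority vote over three redundant module outputs."""
        return (a & b) | (a & c) | (b & c)

    def noisy_module(value, p_error, rng):
        """A module that may flip one random bit with probability p_error
        (a crude stand-in for a transient fault in a nanogate cluster)."""
        if rng.random() < p_error:
            return value ^ (1 << rng.randrange(8))
        return value

    def cascaded_tmr(value, levels, p_error, rng):
        """Cascade TMR stages: each level votes over three copies produced by
        the previous level, reducing the residual error probability further."""
        outputs = [value] * 3
        for _ in range(levels):
            outputs = [majority(*(noisy_module(v, p_error, rng) for v in outputs))
                       for _ in outputs]
        return majority(*outputs)

    rng = random.Random(0)
    trials, wrong = 100_000, 0
    for _ in range(trials):
        if cascaded_tmr(0b1010_1010, levels=2, p_error=0.05, rng=rng) != 0b1010_1010:
            wrong += 1
    print(f"residual error rate with cascaded TMR: {wrong / trials:.2e}")
    ```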

  9. Many tests of significance: new methods for controlling type I errors.

    PubMed

    Keselman, H J; Miller, Charles W; Holland, Burt

    2011-12-01

    There have been many discussions of how Type I errors should be controlled when many hypotheses are tested (e.g., all possible comparisons of means, correlations, proportions, the coefficients in hierarchical models, etc.). By and large, researchers have adopted familywise (FWER) control, though this practice certainly is not universal. Familywise control is intended to deal with the multiplicity issue of computing many tests of significance, yet such control is conservative--that is, less powerful--compared to per test/hypothesis control. The purpose of our article is to introduce the readership, particularly those readers familiar with issues related to controlling Type I errors when many tests of significance are computed, to newer methods that provide protection from the effects of multiple testing, yet are more powerful than familywise controlling methods. Specifically, we introduce a number of procedures that control the k-FWER. These methods--say, 2-FWER instead of 1-FWER (i.e., FWER)--are equivalent to specifying that the probability of 2 or more false rejections is controlled at .05, whereas FWER controls the probability of any (i.e., 1 or more) false rejections at .05. 2-FWER implicitly tolerates 1 false rejection and makes no explicit attempt to control the probability of its occurrence, unlike FWER, which tolerates no false rejections at all. More generally, k-FWER tolerates k - 1 false rejections, but controls the probability of k or more false rejections at α =.05. We demonstrate with two published data sets how more hypotheses can be rejected with k-FWER methods compared to FWER control.
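
    One simple way to see the extra power of k-FWER control is the generalized Bonferroni procedure of Lehmann and Romano, which rejects every hypothesis whose p-value is at most kα/m. A small sketch with made-up p-values:

    ```python
    def k_fwer_bonferroni(p_values, alpha=0.05, k=2):
        """Generalized Bonferroni procedure (Lehmann-Romano): rejecting every
        hypothesis with p-value <= k*alpha/m controls the probability of k or
        more false rejections at level alpha. With k = 1 this reduces to the
        ordinary Bonferroni (FWER) procedure."""
        m = len(p_values)
        threshold = k * alpha / m
        return [i for i, p in enumerate(p_values) if p <= threshold]

    # Hypothetical p-values from m = 8 tests.
    p = [0.001, 0.004, 0.012, 0.020, 0.030, 0.250, 0.410, 0.620]
    print("FWER (k=1) rejections:", k_fwer_bonferroni(p, k=1))   # [0, 1]
    print("2-FWER     rejections:", k_fwer_bonferroni(p, k=2))   # [0, 1, 2]
    ```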

  10. Upright Magnetic Resonance Imaging Tasks in the Knee Osteoarthritis Population: Relationships Between Knee Flexion Angle, Self-Reported Pain, and Performance.

    PubMed

    Gade, Venkata; Allen, Jerome; Cole, Jeffrey L; Barrance, Peter J

    2016-07-01

    To characterize the ability of patients with symptomatic knee osteoarthritis (OA) to perform a weight-bearing activity compatible with upright magnetic resonance imaging (MRI) scanning and how this ability is affected by knee pain symptoms and flexion angles. Cross-sectional observational study assessing effects of knee flexion angle, pain level, and study sequence on accuracy and duration of performing a task used in weight-bearing MRI evaluation. Visual feedback of knee position from an MRI compatible sensor was provided. Pain levels were self-reported on a standardized scale. Simulated MRI setup in a research laboratory. Convenience sample of individuals (N=14; 9 women, 5 men; mean, 69±14y) with symptomatic knee OA. Not applicable. Averaged absolute and signed angle error from target knee flexion for each minute of trial and duration tolerance (the duration that subjects maintained position within a prescribed error threshold). Absolute targeting error increased at longer trial durations (P<.001). Duration tolerance decreased with increasing pain (mean ± SE, no pain: 3min 19s±11s; severe pain: 1min 49s±23s; P=.008). Study sequence affected duration tolerance (first knee: 3min 5s±9.1s; second knee: 2min 19s±9.7s; P=.015). The study provided evidence that weight-bearing MRI evaluations based on imaging protocols in the range of 2 to 3 minutes are compatible with patients reporting mild to moderate knee OA-related pain. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  11. Field errors in hybrid insertion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  12. Sample size determination in combinatorial chemistry.

    PubMed Central

    Zhao, P L; Zambias, R; Bolognese, J A; Boulton, D; Chapman, K

    1995-01-01

    Combinatorial chemistry is gaining wide appeal as a technique for generating molecular diversity. Among the many combinatorial protocols, the split/recombine method is quite popular and particularly efficient at generating large libraries of compounds. In this process, polymer beads are equally divided into a series of pools and each pool is treated with a unique fragment; then the beads are recombined, mixed to uniformity, and redivided equally into a new series of pools for the subsequent couplings. The deviation from the ideal equimolar distribution of the final products is assessed by a special overall relative error, which is shown to be related to the Pearson statistic. Although the split/recombine sampling scheme is quite different from those used in the analysis of categorical data, the Pearson statistic is shown to still follow a chi-square distribution. This result allows us to derive the required number of beads such that, with 99% confidence, the overall relative error is controlled to be less than a pregiven tolerable limit L1. In this paper, we also discuss another criterion, which determines the required number of beads so that, with 99% confidence, all individual relative errors are controlled to be less than a pregiven tolerable limit L2 (0 < L2 < 1). PMID:11607586

  13. Using string alignment in a query-by-humming system for real world applications

    NASA Astrophysics Data System (ADS)

    Sailer, Christian

    2005-09-01

    Though query by humming (i.e., retrieving music or information about music by singing a characteristic melody) has been a popular research topic during the past decade, few approaches have reached a level of usefulness beyond mere scientific interest. One of the main problems is the inherent contradiction between error tolerance and discriminative power in conventional melody matching algorithms that rely on a melody contour approach to handle intonation or transcription errors. Adopting the string matching/alignment techniques from bioinformatics for melody sequences makes it possible to directly assess the similarity between two melodies. This method takes an MPEG-7 compliant melody sequence (i.e., a list of note intervals and length ratios) as query and evaluates the steps necessary to transform it into the reference sequence. By introducing a musically founded cost-of-replace function and an adequate post-processing, this method yields a measure for melodic similarity. Thus it is possible to construct a query by humming system that can properly discriminate between thousands of melodies and still be sufficiently error tolerant to be used by untrained singers. The robustness has been verified in extensive tests and real-world applications.
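
    A minimal sketch of the alignment idea follows: a global string alignment over pitch-interval sequences in which the cost of replacing one interval by another grows with their difference, standing in for the musically founded cost-of-replace function. The gap cost, weights, and example melodies are illustrative assumptions, not values from the paper.

    # Illustrative sketch: global alignment of two melodies represented as
    # pitch-interval sequences (semitones). The replace cost grows with the
    # interval difference, a simple stand-in for a musically founded cost.
    def melody_distance(query, reference, gap_cost=2.0):
        n, m = len(query), len(reference)
        # dp[i][j] = cost of aligning query[:i] with reference[:j]
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * gap_cost
        for j in range(1, m + 1):
            dp[0][j] = j * gap_cost
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                replace = dp[i - 1][j - 1] + abs(query[i - 1] - reference[j - 1])
                delete = dp[i - 1][j] + gap_cost
                insert = dp[i][j - 1] + gap_cost
                dp[i][j] = min(replace, delete, insert)
        return dp[n][m]

    # A sung query with one interval off by a semitone still scores close
    # to its reference melody.
    reference = [2, 2, 1, 2, 2, 2, 1]      # major-scale steps
    query = [2, 2, 1, 2, 3, 2, 1]
    print(melody_distance(query, reference))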

  14. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks

    PubMed Central

    Abba, Sani; Lee, Jeong-A

    2015-01-01

    We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit greater resiliency to errors and failures and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, and packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network. PMID:26295236

  15. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks.

    PubMed

    Abba, Sani; Lee, Jeong-A

    2015-08-18

    We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit greater resiliency to errors and failures and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, and packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network.

  16. Ambiguity tolerance in organizations: definitional clarification and perspectives on future research.

    PubMed

    McLain, David L; Kefallonitis, Efstathios; Armani, Kimberly

    2015-01-01

    Ambiguity tolerance is an increasingly popular subject for study in a wide variety of fields. The definition of ambiguity tolerance has changed since its inception, and accompanying that change are changes in measurement and the research questions that interest researchers. There is a wealth of opportunity for research related to ambiguity tolerance and recent advances in neuroscience, measurement, trait research, perception, problem solving, and other fields highlight areas of interest and point to issues that need further attention. The future of ambiguity tolerance research is promising and it is expected that future studies will yield new insights into individual differences in reactions to the complex, unfamiliar, confusing, indeterminate, and incomplete stimuli that fall within the conceptual domain of ambiguity.

  17. Ambiguity tolerance in organizations: definitional clarification and perspectives on future research

    PubMed Central

    McLain, David L.; Kefallonitis, Efstathios; Armani, Kimberly

    2015-01-01

    Ambiguity tolerance is an increasingly popular subject for study in a wide variety of fields. The definition of ambiguity tolerance has changed since its inception, and accompanying that change are changes in measurement and the research questions that interest researchers. There is a wealth of opportunity for research related to ambiguity tolerance and recent advances in neuroscience, measurement, trait research, perception, problem solving, and other fields highlight areas of interest and point to issues that need further attention. The future of ambiguity tolerance research is promising and it is expected that future studies will yield new insights into individual differences in reactions to the complex, unfamiliar, confusing, indeterminate, and incomplete stimuli that fall within the conceptual domain of ambiguity. PMID:25972818

  18. Privacy-Assured Aggregation Protocol for Smart Metering: A Proactive Fault-Tolerant Approach [Proactive Fault-Tolerant Aggregation Protocol for Privacy-Assured Smart Metering]

    DOE PAGES

    Won, Jongho; Ma, Chris Y. T.; Yau, David K. Y.; ...

    2016-06-01

    Smart meters are integral to demand response in emerging smart grids, by reporting the electricity consumption of users to serve application needs. But reporting real-time usage information for individual households raises privacy concerns. Existing techniques to guarantee differential privacy (DP) of smart meter users either are not fault tolerant or achieve (possibly partial) fault tolerance at high communication overheads. In this paper, we propose a fault-tolerant protocol for smart metering that can handle general communication failures while ensuring DP with significantly improved efficiency and lower errors compared with the state of the art. Our protocol handles fail-stop faults proactively by using a novel design of future ciphertexts, and distributes trust among the smart meters by sharing secret keys among them. We prove the DP properties of our protocol and analyze its advantages in fault tolerance, accuracy, and communication efficiency relative to competing techniques. We illustrate our analysis by simulations driven by real-world traces of electricity consumption.

  19. Privacy-Assured Aggregation Protocol for Smart Metering: A Proactive Fault-Tolerant Approach [Proactive Fault-Tolerant Aggregation Protocol for Privacy-Assured Smart Metering]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Won, Jongho; Ma, Chris Y. T.; Yau, David K. Y.

    Smart meters are integral to demand response in emerging smart grids, by reporting the electricity consumption of users to serve application needs. But reporting real-time usage information for individual households raises privacy concerns. Existing techniques to guarantee differential privacy (DP) of smart meter users either are not fault tolerant or achieve (possibly partial) fault tolerance at high communication overheads. In this paper, we propose a fault-tolerant protocol for smart metering that can handle general communication failures while ensuring DP with significantly improved efficiency and lower errors compared with the state of the art. Our protocol handles fail-stop faults proactively by using a novel design of future ciphertexts, and distributes trust among the smart meters by sharing secret keys among them. We prove the DP properties of our protocol and analyze its advantages in fault tolerance, accuracy, and communication efficiency relative to competing techniques. We illustrate our analysis by simulations driven by real-world traces of electricity consumption.

  20. VLSI Implementation of Fault Tolerance Multiplier based on Reversible Logic Gate

    NASA Astrophysics Data System (ADS)

    Ahmad, Nabihah; Hakimi Mokhtar, Ahmad; Othman, Nurmiza binti; Fhong Soon, Chin; Rahman, Ab Al Hadi Ab

    2017-08-01

    The multiplier is one of the essential components in the digital world, used in digital signal processing, microprocessors, quantum computing, and arithmetic units. Due to the complexity of the multiplier, the tendency for errors is very high. This paper aims to design a 2×2-bit fault-tolerant multiplier based on reversible logic gates with low power consumption and high performance. The design has been implemented using 90 nm Complementary Metal Oxide Semiconductor (CMOS) technology in Synopsys Electronic Design Automation (EDA) tools. The multiplier architecture is implemented using reversible logic gates. The fault-tolerant multiplier uses a combination of three reversible logic gates, the Double Feynman gate (F2G), the New Fault Tolerant (NFT) gate, and the Islam Gate (IG), with an area of 160 μm × 420.3 μm (approximately 0.067 mm²). The design achieves a low power consumption of 122.85 μW and a propagation delay of 16.99 ns. The proposed fault-tolerant multiplier achieves low power consumption and high performance, making it suitable for modern computing applications owing to its fault tolerance capabilities.
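
    As a small illustration of the reversible-gate building blocks, the sketch below simulates the Double Feynman gate (F2G), which maps (A, B, C) to (A, A⊕B, A⊕C), and checks that it is reversible and parity-preserving; the NFT and Islam gates are omitted because their truth tables are not given in the abstract.

    # Sketch: simulate the Double Feynman gate (F2G), one of the reversible
    # gates named in the abstract, and verify that it is reversible
    # (a bijection on 3-bit inputs) and parity-preserving.
    from itertools import product

    def f2g(a, b, c):
        """Double Feynman gate: (A, B, C) -> (A, A xor B, A xor C)."""
        return a, a ^ b, a ^ c

    outputs = [f2g(*bits) for bits in product((0, 1), repeat=3)]
    assert len(set(outputs)) == 8, "not reversible: outputs collide"
    parity_ok = all((a ^ b ^ c) == (p ^ q ^ r)
                    for (a, b, c), (p, q, r) in zip(product((0, 1), repeat=3), outputs))
    assert parity_ok, "parity not preserved"
    print("F2G is reversible and parity-preserving")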

  1. Chamber and field evaluations of air pollution tolerances of urban trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karnosky, D.F.

    1981-04-01

    Results are presented for a study of the relative air pollution tolerances of 32 urban-tree cultivars as determined by both chamber fumigations and field exposures. Tolerances to ozone and sulfur dioxide, alone and in combination, were determined using short-term, acute doses administered while the plants were inside a plastic fumigation chamber located inside the Cary Arboretum greenhouses. In a follow-up study still underway, representatives of the same cultivars were outplanted at four locations in the greater New York City area. To date, only oxidant-type injury has been observed on trees in the field plots. Cultivars tolerant to all chamber and field exposures were Acer platanoides Cleveland, Crimson King, Emerald Queen, Jade Glen, and Summershade; Acer rubrum Autumn Flame and Red Sunset; Acer saccharum Green Mountain and Temple's Upright; Fagus sylvatica Rotundifolia; Fraxinus pennsylvanica Summit; and Ginkgo biloba Fastigate and Sentry. Cultivars sensitive to ozone as determined by the chamber and field tests and that may serve as bioindicators of the presence of ozone were Gleditsia triacanthos inermis Imperial and Platanus acerifolia Bloodgood.

  2. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
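
    A generic illustration of the underlying concern (not GRACE-specific) is how working precision limits the accuracy of long accumulations, such as those inside numerically integrated trajectories and least-squares normal-equation assembly. The sketch below compares a large sum computed in single and double precision against a compensated reference.

    # Generic illustration: the error from computing a long sum in float32
    # (conversion plus accumulation) versus float64, measured against a
    # compensated-summation reference.
    import math
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1_000_000)

    exact = math.fsum(x)                      # compensated summation as reference
    err32 = abs(float(np.sum(x.astype(np.float32))) - exact)
    err64 = abs(float(np.sum(x)) - exact)
    print(f"sum computed in float32: |error| = {err32:.3e}")
    print(f"sum computed in float64: |error| = {err64:.3e}")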

  3. Effect of altered 'weight' upon animal tolerance to restraint.

    NASA Technical Reports Server (NTRS)

    Burton, R. R.; Smith, A. H.; Beljan, J. R.

    1971-01-01

    The effect of altered weight upon animal tolerance to restraint was determined by simulating various accelerative forces with directed lead weights using restrained and nonrestrained domestic fowl (chickens). Weighting (increased weight) and counterweighting (reduced weight) produced a stressed condition - reduced relative lymphocyte counts, loss of body mass, and/or the development of a disorientation syndrome - in both restrained and nonrestrained (caged only) birds. The animal's tolerance to altered weight appeared to be a function of its body weight. Unrestrained birds were stressed by counterweighting (mean plus or minus standard error) 58.3 plus or minus 41% of their body weight, whereas restrained birds tolerated only 32.2 plus or minus 2.6% reduction in body weight. A training regimen for restrained birds was not effective in improving their tolerance to a reduced weight environment. It was concluded that domestic fowl living in a weightless (space) environment should be restrained minimally and supported by ventrally directed tension equivalent to approximately 50% of their body mass (their weight in a 1 G environment).

  4. An Evaluation of the Applicability of Damage Tolerance to Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Le, Dy; Turnberg, Jay

    2005-01-01

    The Federal Aviation Administration, the National Aeronautics and Space Administration and the aircraft industry have teamed together to develop methods and guidance for the safe life-cycle management of dynamic systems. Based on the success of the United States Air Force damage tolerance initiative for airframe structure, a crack growth based damage tolerance approach is being examined for implementation into the design and management of dynamic systems. However, dynamic systems accumulate millions of vibratory cycles per flight hour, more than 12,000 times faster than an airframe system. If a detectable crack develops in a dynamic system, the time to failure is extremely short, less than 100 flight hours in most cases, leaving little room for error in the material characterization, life cycle analysis, nondestructive inspection and maintenance processes. In this paper, the authors review the damage tolerant design process focusing on uncertainties that affect dynamic systems and evaluate the applicability of damage tolerance on dynamic systems.

  5. Improving furfural tolerance of Zymomonas mobilis by rewiring a sigma factor RpoD protein.

    PubMed

    Tan, Fu-Rong; Dai, Li-Chun; Wu, Bo; Qin, Han; Shui, Zong-Xia; Wang, Jing-Li; Zhu, Qi-Li; Hu, Qi-Chun; Ruan, Zhi-Yong; He, Ming-Xiong

    2015-06-01

    Furfural from lignocellulosic hydrolysates is the key inhibitor of bio-ethanol fermentation. In this study, we report a strategy for improving furfural tolerance in Zymomonas mobilis at the transcriptional level by engineering its global transcription sigma factor (σ(70), RpoD) protein. Three furfural-tolerant RpoD mutants (ZM4-MF1, ZM4-MF2, and ZM4-MF3) were identified from error-prone PCR libraries. The most furfural-tolerant strain, ZM4-MF2, reached a maximal cell density (OD600) of about 2.0 after approximately 30 h, while the control strain ZM4-rpoD reached its highest cell density of about 1.3 under the same conditions. ZM4-MF2 also consumed glucose faster and yielded more ethanol; expression levels and key Entner-Doudoroff (ED) pathway enzymatic activities were also compared with the control strain under furfural stress. Our results suggest that global transcription machinery engineering could potentially be used to improve stress tolerance and ethanol production in Z. mobilis.

  6. Study of a unified hardware and software fault-tolerant architecture

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart

    1989-01-01

    A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.

  7. An algorithm for the design and tuning of RF accelerating structures with variable cell lengths

    NASA Astrophysics Data System (ADS)

    Lal, Shankar; Pant, K. K.

    2018-05-01

    An algorithm is proposed for the design of a π mode standing wave buncher structure with variable cell lengths. It employs a two-parameter, multi-step approach for the design of the structure with desired resonant frequency and field flatness. The algorithm, along with analytical scaling laws for the design of the RF power coupling slot, makes it possible to accurately design the structure employing a freely available electromagnetic code like SUPERFISH. To compensate for machining errors, a tuning method has been devised to achieve desired RF parameters for the structure, which has been qualified by the successful tuning of a 7-cell buncher to π mode frequency of 2856 MHz with field flatness <3% and RF coupling coefficient close to unity. The proposed design algorithm and tuning method have demonstrated the feasibility of developing an S-band accelerating structure for desired RF parameters with a relatively relaxed machining tolerance of ∼ 25 μm. This paper discusses the algorithm for the design and tuning of an RF accelerating structure with variable cell lengths.

  8. Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part 2; Applications

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2005-01-01

    The Lagrange multiplier theory and "pitch down method" developed in Part I of this study are applied to complete the calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the method performs well in computer simulations. For mill measurement errors of 1 V/m and a 5 V/m error in the mean fair weather field function, the 3-D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair weather field was also tested using computer simulations. For mill measurement errors of 1 V/m, the method retrieves the 3-D storm field to within an error of about 8% if the fair weather field estimate is typically within 1 V/m of the true fair weather field. Using this side constraint and data from fair weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. The resulting calibration matrix was then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably with the results obtained from earlier calibration analyses that were based on iterative techniques.

  9. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
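
    The reasonableness check described above can be sketched as follows; the thresholds, waypoint representation, and function names are invented for illustration and are not from the source.

    # Hypothetical sketch of a reasonableness check on a flight-plan edit:
    # flag the modification if it adds too much route length or turns the
    # aircraft through an implausibly large course change.
    import math

    def leg_length(a, b):
        """Planar distance between two (x, y) waypoints, in nautical miles."""
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def route_length(waypoints):
        return sum(leg_length(a, b) for a, b in zip(waypoints, waypoints[1:]))

    def max_course_change(waypoints):
        headings = [math.atan2(b[1] - a[1], b[0] - a[0])
                    for a, b in zip(waypoints, waypoints[1:])]
        turns = [abs(math.degrees(h2 - h1)) % 360
                 for h1, h2 in zip(headings, headings[1:])]
        return max((min(t, 360 - t) for t in turns), default=0.0)

    def reasonableness_check(old_plan, new_plan,
                             max_added_nm=100.0, max_turn_deg=120.0):
        added = route_length(new_plan) - route_length(old_plan)
        warnings = []
        if added > max_added_nm:
            warnings.append(f"route lengthened by {added:.0f} nm")
        if max_course_change(new_plan) > max_turn_deg:
            warnings.append("course change exceeds threshold")
        return warnings

    old = [(0, 0), (50, 10), (120, 30)]
    new = [(0, 0), (50, 10), (60, 200), (120, 30)]   # suspicious detour
    print(reasonableness_check(old, new))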

  10. Risk prediction and aversion by anterior cingulate cortex.

    PubMed

    Brown, Joshua W; Braver, Todd S

    2007-12-01

    The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.

  11. BPM CALIBRATION INDEPENDENT LHC OPTICS CORRECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CALAGA,R.; TOMAS, R.; GIOVANNOZZI, M.

    2007-06-25

    The tight mechanical aperture for the LHC imposes severe constraints on both the beta and dispersion beating. Robust techniques to compensate these errors are critical for operation of high intensity beams in the LHC. We present simulations using realistic errors from magnet measurements and alignment tolerances in the presence of BPM noise. Correction reveals that the use of BPM calibration and model independent observables are key ingredients to accomplish optics correction. Experiments at RHIC to verify the algorithms for optics correction are also presented.

  12. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
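
    The kind of quantity such a code estimates can be illustrated with a plain Monte Carlo sketch (no variance reduction, and not the MC-HARP code itself): the reliability of a triple-modular-redundant component with Weibull-distributed failure times at a given mission time. All parameter values are invented.

    # Plain Monte Carlo sketch: probability that a triple-modular-redundant
    # component still has a working majority (2-of-3) at the mission time,
    # with Weibull-distributed replica failure times.
    import numpy as np

    def tmr_reliability(shape, scale, mission_time, n_samples=200_000, seed=1):
        rng = np.random.default_rng(seed)
        # failure times of the 3 replicas: standard Weibull(shape), scaled to hours
        t_fail = scale * rng.weibull(shape, size=(n_samples, 3))
        survivors = (t_fail > mission_time).sum(axis=1)
        return (survivors >= 2).mean()

    print(tmr_reliability(shape=1.5, scale=10_000.0, mission_time=1_000.0))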

  13. Error-Tolerant Quasi-Paraboloidal Solar Concentrator

    NASA Technical Reports Server (NTRS)

    Wagner, Howard A.

    1988-01-01

    Scalloping reflector surface reduces sensitivity to manufacturing and aiming errors. Contrary to intuition, most effective shape of concentrating reflector for solar heat engine is not perfect paraboloid. According to design studies for Space Station solar concentrator, scalloped, nonimaging approximation to perfect paraboloid offers better overall performance in view of finite apparent size of Sun, imperfections of real equipment, and cost of accommodating these complexities. Scalloped-reflector concept also applied to improve performance while reducing cost of manufacturing and operation of terrestrial solar concentrator.

  14. Hybrid information privacy system: integration of chaotic neural network and RSA coding

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.

    2005-03-01

    Electronic mail is used worldwide, yet most messages are easily compromised by hackers. In this paper, we propose a free, fast, and convenient hybrid privacy system to protect email communication. The privacy system combines the RSA algorithm with a specific chaotic neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaotic neural network series, the so-called spatial-temporal keys. The chaotic typing and the initial seed value of the chaotic neural network series, encrypted by the RSA algorithm, allow these spatial-temporal keys to be reproduced. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media and wrapped with convolutional error-correcting codes for wireless third-generation cellular phones. The message media can be an arbitrary image. Pattern noise must be considered during transmission, since it could affect or change the spatial-temporal keys. Because any change or modification of the chaotic typing or the initial seed value of the chaotic neural network series is unacceptable, the RSA codec system must be robust and fault tolerant over the wireless channel. The robust and fault-tolerant properties of chaotic neural networks (CNN) were proved by Szu's field theory of associative memory in 1997. The 1-D chaos-generating nodes from the logistic map with arbitrarily negative slope a = p/q, generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulate the robust and fault-tolerant properties of CNN under additive noise and pattern noise. We also implement a private version of the RSA coding and chaos encryption process on messages.

  15. Altitude deviations : breakdowns of an error tolerant system

    DOT National Transportation Integrated Search

    1991-12-01

    This project was a demonstration of the use of live aerial video recorded from a rotary wing aircraft operated by the Fairfax County, Virginia Police Department and transmitted to ground stations for re-transmission and use by Fairfax County and Virg...

  16. Analysis of Soft Error Rates in 65- and 28-nm FD-SOI Processes Depending on BOX Region Thickness and Body Bias by Monte-Carlo Based Simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi

    2016-08-01

    This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation, and the results are then analyzed by Monte Carlo-based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross-section and neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We clarify that SERs decrease as BOX thickness increases for SOTB, while SERs in UTBB are independent of BOX thickness. We also find that SOTB develops a higher tolerance to soft errors when reverse body bias is applied, while UTBB becomes more susceptible.

  17. Robust iterative learning contouring controller with disturbance observer for machine tool feed drives.

    PubMed

    Simba, Kenneth Renny; Bui, Ba Dinh; Msukwa, Mathew Renny; Uchiyama, Naoki

    2018-04-01

    In feed drive systems, particularly machine tools, a contour error is more significant than the individual axial tracking errors from the viewpoint of enhancing precision in manufacturing and production systems. The contour error must be within the permissible tolerance of given products. In machining complex or sharp-corner products, large contour errors occur mainly owing to discontinuous trajectories and the existence of nonlinear uncertainties. Therefore, it is indispensable to design robust controllers that can enhance the tracking ability of feed drive systems. In this study, an iterative learning contouring controller consisting of a classical Proportional-Derivative (PD) controller and a disturbance observer is proposed. The proposed controller was evaluated experimentally by using a typical sharp-corner trajectory, and its performance was compared with that of conventional controllers. The results revealed that the maximum contour error can be reduced by about 37% on average. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
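
    The general idea, sketched here on a single axis rather than as the paper's contouring controller, is a PD feedback loop plus an iterative learning feedforward that is updated from the previous trial's error; the plant model and all gains below are assumptions chosen for illustration.

    # Single-axis sketch of iterative learning control (ILC) with PD feedback:
    # the feedforward is updated from the previous trial's error,
    # u_ff <- u_ff + gamma * e(shifted one step ahead).
    import numpy as np

    steps = 200
    ref = np.sin(np.linspace(0, 2 * np.pi, steps))   # reference trajectory
    kp, kd, gamma = 0.5, 0.1, 1.0                    # PD gains, ILC learning gain
    u_ff = np.zeros(steps)                           # learned feedforward input

    for trial in range(6):
        y, prev_e = 0.0, 0.0
        e = np.zeros(steps)
        for t in range(steps):
            e[t] = ref[t] - y
            u = u_ff[t] + kp * e[t] + kd * (e[t] - prev_e)   # PD feedback
            prev_e = e[t]
            y = 0.5 * y + 0.5 * u                            # simple stable plant
        u_ff[:-1] += gamma * e[1:]        # learn from the error one step ahead
        print(f"trial {trial}: max |tracking error| = {np.abs(e).max():.4f}")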

  18. The NASA F-15 Intelligent Flight Control Systems: Generation II

    NASA Technical Reports Server (NTRS)

    Buschbacher, Mark; Bosworth, John

    2006-01-01

    The Second Generation (Gen II) control system for the F-15 Intelligent Flight Control System (IFCS) program implements direct adaptive neural networks to demonstrate robust tolerance to faults and failures. The direct adaptive tracking controller integrates learning neural networks (NNs) with a dynamic inversion control law. The term direct adaptive is used because the error between the reference model and the aircraft response is being compensated or directly adapted to minimize error without regard to knowing the cause of the error. No parameter estimation is needed for this direct adaptive control system. In the Gen II design, the feedback errors are regulated with a proportional-plus-integral (PI) compensator. This basic compensator is augmented with an online NN that changes the system gains via an error-based adaptation law to improve aircraft performance at all times, including normal flight, system failures, mispredicted behavior, or changes in behavior resulting from damage.

  19. Fade-resistant forward error correction method for free-space optical communications systems

    DOEpatents

    Johnson, Gary W.; Dowla, Farid U.; Ruggiero, Anthony J.

    2007-10-02

    Free-space optical (FSO) laser communication systems offer exceptionally wide-bandwidth, secure connections between platforms that cannot otherwise be connected via physical means such as optical fiber or cable. However, FSO links are subject to strong channel fading due to atmospheric turbulence and beam pointing errors, limiting practical performance and reliability. We have developed a fade-tolerant architecture based on forward error correcting codes (FECs) combined with delayed, redundant sub-channels. This redundancy is made feasible through dense wavelength division multiplexing (WDM) and/or high-order M-ary modulation. Experiments and simulations show that error-free communication is feasible even when faced with fades that are tens of milliseconds long. We describe plans for practical implementation of a complete system operating at 2.5 Gbps.
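
    A much-simplified illustration of the delayed-redundancy idea follows: the same symbol stream is repeated on a second sub-channel after a delay longer than the worst fade, so a burst erasure on one copy can be filled from the other. A repetition code stands in for the real FEC, and all parameters are invented.

    # Simplified illustration of delayed redundancy against channel fades.
    # The delay (300 symbols) exceeds the fade duration (200 symbols), so the
    # fade erases *different* data positions on the two sub-channels.
    import numpy as np

    rng = np.random.default_rng(3)
    n, delay, fade_len = 2000, 300, 200
    data = rng.integers(0, 2, size=n)

    def transmit(symbols, fade_start, fade_len):
        """Return received symbols with a contiguous fade marked as erasures (-1)."""
        rx = symbols.copy()
        rx[fade_start:fade_start + fade_len] = -1
        return rx

    fade_start = 700
    rx1 = transmit(data, fade_start, fade_len)
    delayed = np.concatenate([np.full(delay, -1), data])[:n]   # second sub-channel
    rx2_raw = transmit(delayed, fade_start, fade_len)
    rx2 = np.concatenate([rx2_raw[delay:], np.full(delay, -1)])  # undo the delay

    recovered = np.where(rx1 != -1, rx1, rx2)   # take whichever copy survived
    print("unrecovered symbols:", int((recovered == -1).sum()))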

  20. Dynamic modelling and estimation of the error due to asynchronism in a redundant asynchronous multiprocessor system

    NASA Technical Reports Server (NTRS)

    Huynh, Loc C.; Duval, R. W.

    1986-01-01

    The use of redundant asynchronous multiprocessor systems to achieve ultrareliable fault-tolerant control systems shows great promise. Development has been hampered by the inability to determine whether differences in the outputs of redundant CPUs are due to failures or to accrued error built up by slight differences in CPU clock intervals. This study derives an analytical dynamic model of the difference between redundant CPUs caused by differences in their clock intervals and uses this model with on-line parameter identification to identify those differences. By accurately tracking errors due to asynchronicity, the methodology generates an error signal with the effect of asynchronicity removed; this signal may be used to detect and isolate actual system failures.

  1. Cryptographic robustness of a quantum cryptography system using phase-time coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-01-15

    A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.

  2. Realistic noise-tolerant randomness amplification using finite number of devices.

    PubMed

    Brandão, Fernando G S L; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-04-21

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology.

  3. Realistic noise-tolerant randomness amplification using finite number of devices

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-04-01

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology.

  4. Integral Sliding Mode Fault-Tolerant Control for Uncertain Linear Systems Over Networks With Signals Quantization.

    PubMed

    Hao, Li-Ying; Park, Ju H; Ye, Dan

    2017-09-01

    In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. This approach is based on the integral sliding mode (ISM) method, in which two kinds of integral sliding surfaces are constructed. One is a continuous-state-dependent surface used for sliding-mode stability analysis, and the other is a quantization-state-dependent surface used for ISM controller design. A scheme that combines the adaptive ISM controller with a quantization parameter adjustment strategy is then proposed. By utilizing an H∞ control analysis technique, once the system is in the sliding mode, disturbance attenuation and fault tolerance from the initial time can be shown without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.

  5. Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.

    1984-01-01

    SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor to processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.

  6. Realistic noise-tolerant randomness amplification using finite number of devices

    PubMed Central

    Brandão, Fernando G. S. L.; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-01-01

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology. PMID:27098302

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Sisi; Li, Yun; Levitt, Karl N.

    Consensus is a fundamental approach to implementing fault-tolerant services through replication where there exists a tradeoff between the cost and the resilience. For instance, Crash Fault Tolerant (CFT) protocols have a low cost but can only handle crash failures while Byzantine Fault Tolerant (BFT) protocols handle arbitrary failures but have a higher cost. Hybrid protocols enjoy the benefits of both high performance without failures and high resiliency under failures by switching among different subprotocols. However, it is challenging to determine which subprotocols should be used. We propose a moving target approach to switch among protocols according to the existing system and network vulnerability. At the core of our approach is a formalized cost model that evaluates the vulnerability and performance of consensus protocols based on real-time Intrusion Detection System (IDS) signals. Based on the evaluation results, we demonstrate that a safe, cheap, and unpredictable protocol is always used and a high IDS error rate can be tolerated.

  8. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C(sup -1)-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  9. Development of Novel Glyphosate-Tolerant Japonica Rice Lines: A Step Toward Commercial Release.

    PubMed

    Cui, Ying; Huang, Shuqing; Liu, Ziduo; Yi, Shuyuan; Zhou, Fei; Chen, Hao; Lin, Yongjun

    2016-01-01

    Glyphosate is the most widely used herbicide for its low cost and high efficiency. However, it is rarely applied directly in rice fields due to its toxicity to rice. Therefore, glyphosate-tolerant rice can greatly decrease the cost of rice production and provide a more effective weed management strategy. Although several approaches to develop transgenic rice with glyphosate tolerance have been reported, the agronomic performances of these plants have not been well evaluated, and the feasibility of commercial production has not been confirmed yet. Here, a novel glyphosate-tolerant gene cloned from the bacterium Isoptericola variabilis was identified, codon optimized (designated as I. variabilis-EPSPS (*)), and transferred into Zhonghua11, a widely used japonica rice cultivar. After systematic analysis of the transgene integration via PCR, Southern blot and flanking sequence isolation, three transgenic lines with only one intact I. variabilis-EPSPS (*) expression cassette integrated into intergenic regions were identified. Seed test results showed that the glyphosate tolerance of the transgenic rice was about 240 times that of wild type on plant medium. The glyphosate tolerance of transgenic rice lines was further evaluated based on comprehensive agronomic performances in the field with T3 and T5 generations in a 2-year assay, which showed that they were rarely affected by glyphosate even when the dosage was 8400 g ha(-1). To our knowledge, this is the first demonstration of the development of glyphosate-tolerant rice lines based on a comprehensive analysis of agronomic performances in the field. Taken together, the results suggest that the selected glyphosate-tolerant rice lines are highly tolerant to glyphosate and have the possibility of commercial release. I. variabilis-EPSPS (*) can also be a promising candidate gene in other species for developing glyphosate-tolerant crops.

  10. A Unified Fault-Tolerance Protocol

    NASA Technical Reports Server (NTRS)

    Miner, Paul; Gedser, Alfons; Pike, Lee; Maddalon, Jeffrey

    2004-01-01

    Davies and Wakerly show that Byzantine fault tolerance can be achieved by a cascade of broadcasts and middle value select functions. We present an extension of the Davies and Wakerly protocol, the unified protocol, and its proof of correctness. We prove that it satisfies validity and agreement properties for communication of exact values. We then introduce bounded communication error into the model. Inexact communication is inherent for clock synchronization protocols. We prove that validity and agreement properties hold for inexact communication, and that exact communication is a special case. As a running example, we illustrate the unified protocol using the SPIDER family of fault-tolerant architectures. In particular we demonstrate that the SPIDER interactive consistency, distributed diagnosis, and clock synchronization protocols are instances of the unified protocol.

  11. Field validation of secondary data sources: a novel measure of representativity applied to a Canadian food outlet database

    PubMed Central

    2013-01-01

    Background: Validation studies of secondary datasets used to characterize neighborhood food businesses generally evaluate how accurately the database represents the true situation on the ground. Depending on the research objectives, the characterization of the business environment may tolerate some inaccuracies (e.g. minor imprecisions in location or errors in business names). Furthermore, if the number of false negatives (FNs) and false positives (FPs) is balanced within a given area, one could argue that the database still provides a "fair" representation of existing resources in this area. Yet, traditional validation measures do not relax matching criteria, and treat FNs and FPs independently. Through the field validation of food businesses found in a Canadian database, this paper proposes alternative criteria for validity. Methods: Field validation of the 2010 Enhanced Points of Interest (EPOI) database (DMTI Spatial®) was performed in 2011 in 12 census tracts (CTs) in Montreal, Canada. Some 410 food outlets were extracted from the database and 484 were observed in the field. First, traditional measures of sensitivity and positive predictive value (PPV) accounting for every single mismatch between the field and the database were computed. Second, relaxed measures of sensitivity and PPV that tolerate mismatches in business names or slight imprecisions in location were assessed. A novel measure of representativity that further allows for compensation between FNs and FPs within the same business category and area was proposed. Representativity was computed at CT level as ((TPs +|FPs-FNs|)/(TPs+FNs)), with TPs meaning true positives, and |FPs-FNs| being the absolute value of the difference between the number of FNs and the number of FPs within each outlet category. Results: The EPOI database had a "moderate" capacity to detect an outlet present in the field (sensitivity: 54.5%) or to list only the outlets that actually existed in the field (PPV: 64.4%). Relaxed measures of sensitivity and PPV were respectively 65.5% and 77.3%. The representativity of the EPOI database was 77.7%. Conclusions: The novel measure of representativity might serve as an alternative to traditional validity measures, and could be more appropriate in certain situations, depending on the nature and scale of the research question. PMID:23782570
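
    The representativity expression is printed above as ((TPs + |FPs-FNs|)/(TPs+FNs)); read literally that would reward unmatched outlets, so the sketch below implements one plausible reading of the compensation idea, in which false negatives offset by false positives of the same category count as represented. Both this reading and the example counts are assumptions, not the paper's exact definition.

    # Sketch of per-category validity measures for a food-outlet database.
    # 'representativity' implements one plausible reading of the compensation
    # idea (FNs offset by FPs in the same category count as represented).
    def sensitivity(tp, fn):
        return tp / (tp + fn) if (tp + fn) else float("nan")

    def ppv(tp, fp):
        return tp / (tp + fp) if (tp + fp) else float("nan")

    def representativity(tp, fp, fn):
        compensated = min(fp, fn)          # mismatches that cancel out in count
        return (tp + compensated) / (tp + fn) if (tp + fn) else float("nan")

    # Hypothetical counts (TP, FP, FN) for one census tract, by outlet category
    counts = {"grocery": (12, 3, 5), "restaurant": (30, 10, 8), "bakery": (4, 1, 1)}
    for category, (tp, fp, fn) in counts.items():
        print(f"{category:<11} sens={sensitivity(tp, fn):.2f} "
              f"ppv={ppv(tp, fp):.2f} repr={representativity(tp, fp, fn):.2f}")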

  12. Exploiting data representation for fault tolerance

    DOE PAGES

    Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...

    2015-01-06

    Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
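
    An illustration of the kind of experiment described (not the authors' code): flip one randomly chosen bit of an IEEE 754 double inside a dot product and compare the resulting error when the input vectors are left at a large scale versus normalized to unit norm first. The vector sizes and value ranges are arbitrary choices.

    # Flip one random bit of a float64 product term inside a dot product and
    # report the absolute error, with and without normalizing the inputs.
    import random
    import struct

    import numpy as np

    def flip_bit(x: float, bit: int) -> float:
        """Flip one of the 64 bits of a double-precision float."""
        (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
        return struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))[0]

    def faulty_dot(a, b, rng):
        """Dot product with a single bit flip injected into one product term."""
        terms = a * b
        i = rng.randrange(len(terms))
        terms[i] = flip_bit(float(terms[i]), rng.randrange(64))
        return float(terms.sum())

    rng = random.Random(42)
    a = np.random.default_rng(0).uniform(1e3, 1e6, size=1000)
    b = np.random.default_rng(1).uniform(1e3, 1e6, size=1000)

    raw_err = abs(faulty_dot(a, b, rng) - a @ b)
    an, bn = a / np.linalg.norm(a), b / np.linalg.norm(b)
    norm_err = abs(faulty_dot(an, bn, rng) - an @ bn)
    print(f"unnormalized inputs: |error| = {raw_err:.3e}")
    print(f"normalized inputs:   |error| = {norm_err:.3e}")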

  13. A New Methodology for Vibration Error Compensation of Optical Encoders

    PubMed Central

    Lopez, Jesus; Artes, Mariano

    2012-01-01

    Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce position errors as the measurement signals depart from ideal conditions. When the encoder operates under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system, and installation errors. Behavior can be improved with techniques that compensate the error through processing of the measurement signals. In this work a new "ad hoc" methodology is presented to compensate the encoder error when it is working under vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a lookup table, resulting in a compensation procedure that yields higher sensor accuracy. PMID:22666067

  14. Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael

    2009-01-01

    Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
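
    The checksum-based ABFT scheme for matrix multiplication referred to here is a classical technique and can be sketched generically (this is not the onboard implementation): append a column-sum row to A and a row-sum column to B, multiply, and verify that the same sums hold in the product; a mismatch detects a corrupted entry, and the failing row/column checks locate it.

    # Classic checksum-based ABFT for matrix multiplication: a corrupted
    # entry of the product is detected (and located) by checksum mismatches.
    import numpy as np

    def abft_matmul(A, B, tol=1e-8):
        Ac = np.vstack([A, A.sum(axis=0)])                    # column-checksum row
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])     # row-checksum column
        Cf = Ac @ Br                                          # checksummed product

        # fault injection for demonstration: corrupt one data entry
        Cf[1, 2] += 1e3

        C = Cf[:-1, :-1]
        row_ok = np.isclose(Cf[:-1, -1], C.sum(axis=1), atol=tol)
        col_ok = np.isclose(Cf[-1, :-1], C.sum(axis=0), atol=tol)
        if row_ok.all() and col_ok.all():
            return C, None
        # the corrupted entry sits at the intersection of the failing checks
        return C, (int(np.argmin(row_ok)), int(np.argmin(col_ok)))

    rng = np.random.default_rng(0)
    C, fault = abft_matmul(rng.standard_normal((4, 5)), rng.standard_normal((5, 3)))
    print("detected corrupted entry at", fault)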

  15. Reprocessing the GRACE-derived gravity field time series based on data-driven method for ocean tide alias error mitigation

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Sneeuw, Nico; Jiang, Weiping

    2017-04-01

    The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling from the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which alias directly into the recovered gravity field. The GRACE satellites fly a non-repeat orbit, which precludes alias error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, resulting in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.

  16. Software fault tolerance using data diversity

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1991-01-01

    Research on data diversity is discussed. Data diversity relies on a different form of redundancy from existing approaches to software fault tolerance and is substantially less expensive to implement. Data diversity can also be applied to software testing and greatly facilitates the automation of testing. Up to now it has been explored both theoretically and in a pilot study, and has been shown to be a promising technique. The effectiveness of data diversity as an error detection mechanism and the application of data diversity to differential equation solvers are discussed.

  17. Using field data to assess the tolerance of freshwater fish to elevated ionic concentrations

    EPA Science Inventory

    We used field data of fish occurrences and specific conductivity to assess the tolerance of freshwater fish to elevated ions. The concentration at which a species was expected to no longer be observed [the extirpation concentration (XC95)] was identified from the 95th percentile ...

  18. Retrieving Storm Electric Fields from Aircraft Field Mill Data: Part II: Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2006-01-01

    The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m^-1 and a 5 V m^-1 error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m^-1, the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m^-1 of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch-down method developed in Part I and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) techniques of calibration.

  19. Genome-wide identification of gene expression in contrasting maize inbred lines under field drought conditions reveals the significance of transcription factors in drought tolerance

    PubMed Central

    Zhang, Xiaojing; Liu, Xuyang; Zhang, Dengfeng; Tang, Huaijun; Sun, Baocheng; Li, Chunhui; Hao, Luyang; Liu, Cheng; Li, Yongxiang; Shi, Yunsu; Xie, Xiaoqing; Song, Yanchun; Wang, Tianyu; Li, Yu

    2017-01-01

    Drought is a major threat to maize growth and production. Understanding the molecular regulation network of drought tolerance in maize is of great importance. In this study, two maize inbred lines with contrasting drought tolerance were tested in the field under natural soil drought and well-watered conditions. In addition, the transcriptomes of their leaves were analyzed by RNA-Seq. In total, 555 and 2,558 genes were detected to specifically respond to drought in the tolerant and the sensitive line, respectively, with a more positive regulation tendency in the tolerant genotype. Furthermore, 4,700, 4,748, 4,403 and 4,288 genes showed differential expression between the two lines under moderate drought, severe drought, and their respective well-watered controls. Transcription factors were enriched in both genotypic differentially expressed genes and specifically responsive genes of the tolerant line. It was speculated that the genotype-specific response of 20 transcription factors in the tolerant line and the sustained genotypically differential expression of 22 transcription factors might enhance tolerance to drought in maize. Our results provide new insight into maize drought tolerance-related regulation systems and provide gene resources for subsequent studies and drought tolerance improvement. PMID:28700592

  20. Fault-tolerance in Two-dimensional Topological Systems

    NASA Astrophysics Data System (ADS)

    Anderson, Jonas T.

    This thesis is a collection of ideas with the general goal of building, at least in the abstract, a local fault-tolerant quantum computer. The connection between quantum information and topology has proven to be an active area of research in several fields. The introduction of the toric code by Alexei Kitaev demonstrated the usefulness of topology for quantum memory and quantum computation. Many quantum codes used for quantum memory are modeled by spin systems on a lattice, with operators that extract syndrome information placed on vertices or faces of the lattice. It is natural to wonder whether the useful codes in such systems can be classified. This thesis presents work that leverages ideas from topology and graph theory to explore the space of such codes. Homological stabilizer codes are introduced and it is shown that, under a set of reasonable assumptions, any qubit homological stabilizer code is equivalent to either a toric code or a color code. Additionally, the toric code and the color code correspond to distinct classes of graphs. Many systems have been proposed as candidate quantum computers. It is very desirable to design quantum computing architectures with two-dimensional layouts and low complexity in parity-checking circuitry. Kitaev's surface codes provided the first example of codes satisfying this property. They provided a new route to fault tolerance with more modest overheads and thresholds approaching 1%. The recently discovered color codes share many properties with the surface codes, such as the ability to perform syndrome extraction locally in two dimensions. Some families of color codes admit a transversal implementation of the entire Clifford group. This work investigates color codes on the 4.8.8 lattice known as triangular codes. I develop a fault-tolerant error-correction strategy for these codes in which repeated syndrome measurements on this lattice generate a three-dimensional space-time combinatorial structure. I then develop an integer program that analyzes this structure and determines the most likely set of errors consistent with the observed syndrome values. I implement this integer program to find the threshold for depolarizing noise on small versions of these triangular codes. Because the threshold for magic-state distillation is likely to be higher than this value and because logical CNOT gates can be performed by code deformation in a single block instead of between pairs of blocks, the threshold for fault-tolerant quantum memory for these codes is also the threshold for fault-tolerant quantum computation with them. Since the advent of a threshold theorem for quantum computers much has been improved upon. Thresholds have increased, architectures have become more local, and gate sets have been simplified. The overhead for magic-state distillation has been studied, but not nearly to the extent of the aforementioned topics. A method for greatly reducing this overhead, known as reusable magic states, is studied here. While examples of reusable magic states exist for Clifford gates, I give strong reasons to believe they do not exist for non-Clifford gates.

  1. Interferometric Techniques for Gravitational Wave Detection in Space

    NASA Technical Reports Server (NTRS)

    Stebbins, Robin T; Bender, Peter L.

    2000-01-01

    The Laser Interferometer Space Antenna (LISA) mission will detect gravitational waves from galactic and extragalactic sources, most importantly those involving supermassive black holes. The primary goal of this project is to investigate stability and robustness issues associated with LISA interferometry. We specifically propose to study systematic errors arising from: optical misalignments, optical surface errors, thermal effects and pointing tolerances. This report covers the first fiscal year of the grant, from January 1st to December 31st 1999. We have employed an optical modeling tool to evaluate the effect of misplaced and misaligned optical components. Preliminary results seem to indicate that positional tolerances of one micron and angular tolerances of 0.6 millirad produce no significant effect on the achievable contrast of the interference pattern. This report also outlines research plans for the second fiscal year of the grant, from January 1st to December 31st 2000. Since the work under NAG5-6880 has gone more rapidly than projected, our test bed interferometer is operational, and can be used for measurements of effects that cause beam motion. Hence, we will design, build and characterize a sensor for measuring beam motion, and then install it. We are also planning a differential wavefront sensor based on a quadrant photodiode as a first generation sensor.

  2. An experimental evaluation of the REE SIFT environment for spaceborne applications

    NASA Technical Reports Server (NTRS)

    Whistnant, K.; Iyer, R. K.; Jones, P.; Some, R.; Rennels, D.

    2002-01-01

    This paper presents an experimental evaluation of a software-implemented fault tolerance environment built around a set of self-checking ARMOR processes running on different machines that provide error detection and recovery services to themselves and to spaceborne scientific applications.

  3. Columbus safety and reliability

    NASA Astrophysics Data System (ADS)

    Longhurst, F.; Wessels, H.

    1988-10-01

    Analyses carried out to ensure Columbus reliability, availability, and maintainability, and operational and design safety are summarized. Failure modes, effects, and criticality analysis is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.

  4. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
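
    The multilevel Monte Carlo estimator referred to above can be sketched as follows; the level-dependent quantity of interest, the fake discretisation error, and the sample counts are illustrative assumptions standing in for the travel-time functional and the Karhunen-Loève input field of the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_quantity(level, omega):
        """Placeholder: quantity of interest evaluated on a level-`level` grid
        for the random input `omega` (a scalar standing in for a KL expansion)."""
        h = 2.0 ** -level                      # grid spacing shrinks with level
        return np.sin(omega) + h * omega       # fake discretisation error ~ O(h)

    def mlmc_estimate(n_samples_per_level):
        est = 0.0
        for level, n in enumerate(n_samples_per_level):
            omegas = rng.standard_normal(n)
            fine = np.array([sample_quantity(level, w) for w in omegas])
            if level == 0:
                est += fine.mean()
            else:
                coarse = np.array([sample_quantity(level - 1, w) for w in omegas])
                est += (fine - coarse).mean()  # correction term E[P_l - P_{l-1}]
        return est

    print(mlmc_estimate([4000, 1000, 250]))    # fewer samples on the expensive levels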

  5. Validation diagnostics for defective thermocouple circuits

    NASA Astrophysics Data System (ADS)

    Reed, R. P.

    Thermocouples, properly used under favorable conditions, can measure temperature with an accepted tolerance. However, when improperly applied or exposed to hostile mechanical, chemical, thermal, or radiation environments, they often fail without the error being evident in the temperature record. Conversely, features that appear to be unreasonable in temperature records can be authentic. When hidden failure occurs during measurement, deliberate recording of supplementary information is necessary to distinguish valid from faulty data. Loop resistance change, circuit isolation, isolated noise potential, and other measures can reveal symptoms of developing defects. Monitored continually along with temperature, they can reveal the occurrence, location, and natures of damage incurred during measurement. Special multiterminal branched thermocouple circuits and combinatorial multiplex switching allow detection of dc measurement noise and decalibration. Symptoms of insidious failure, often consequential, are illustrated by examples from field experience in measuring temperature of a propagating retorting front in underground coal gasification.

  6. TeO2 slow surface acoustic wave Bragg cell

    NASA Astrophysics Data System (ADS)

    Yao, Shi-Kay

    1991-08-01

    A newly discovered slow surface acoustic wave (SAW) on a (-110)-cut TeO2 surface is reported, with its properties studied using a PC-based numerical method. It is concluded that the slow SAW is rather tolerant to crystal surface orientation errors and has an unusually deep penetration of its shear component into the thickness of the substrate, about 47 wavelengths to the half-amplitude point. The deep shear field is considered to be beneficial for surface acoustooptic interaction with freely propagating focused laser beams. Rotation of the substrate about the z-axis makes it possible to adjust the slow SAW velocity, with the potential advantage of trading acoustic velocity for less acoustic attenuation. Wider-bandwidth, longer-processing-time Bragg cells may be feasible by utilizing this trade-off. The slow SAW device is characterized by extremely low power consumption, which might be useful for compact portable or avionics signal processing equipment applications.

  7. Software Architecture of Sensor Data Distribution In Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Lee, Charles; Alena, Richard; Stone, Thom; Ossenfort, John; Walker, Ed; Notario, Hugo

    2006-01-01

    Data from mobile and stationary sensors will be vital in planetary surface exploration. The distribution and collection of sensor data in an ad-hoc wireless network presents a challenge. Irregular terrain, mobile nodes, new associations with access points and repeaters with stronger signals as the network reconfigures to adapt to new conditions, signal fade and hardware failures can cause: a) Data errors; b) Out of sequence packets; c) Duplicate packets; and d) Drop out periods (when node is not connected). To mitigate the effects of these impairments, a robust and reliable software architecture must be implemented. This architecture must also be tolerant of communications outages. This paper describes such a robust and reliable software infrastructure that meets the challenges of a distributed ad hoc network in a difficult environment and presents the results of actual field experiments testing the principles and actual code developed.

  8. Design of Energy Storage Management System Based on FPGA in Micro-Grid

    NASA Astrophysics Data System (ADS)

    Liang, Yafeng; Wang, Yanping; Han, Dexiao

    2018-01-01

    The energy storage system is the core of maintaining stable operation of a smart micro-grid. To address the existing problems of energy storage management systems in the micro-grid, such as low fault tolerance and a tendency to cause fluctuations in the micro-grid, a new intelligent battery management system based on a field programmable gate array (FPGA) is proposed, taking advantage of the FPGA to combine the battery management system with the intelligent micro-grid control strategy. Finally, to address the problem that inaccurate initialization of weights and thresholds in neural-network estimation of the battery state of charge leads to large prediction errors, a genetic algorithm is proposed to optimize the neural network, and an experimental simulation is carried out. The experimental results show that the algorithm has high precision and provides a guarantee for the stable operation of the micro-grid.

  9. Pharmacogenetics and outcome with antipsychotic drugs.

    PubMed

    Pouget, Jennie G; Shams, Tahireh A; Tiwari, Arun K; Müller, Daniel J

    2014-12-01

    Antipsychotic medications are the gold-standard treatment for schizophrenia, and are often prescribed for other mental conditions. However, the efficacy and side-effect profiles of these drugs are heterogeneous, with large interindividual variability. As a result, treatment selection remains a largely trial-and-error process, with many failed treatment regimens endured before finding a tolerable balance between symptom management and side effects. Much of the interindividual variability in response and side effects is due to genetic factors (heritability, h^2 ~ 0.60-0.80). Pharmacogenetics is an emerging field that holds the potential to facilitate the selection of the best medication for a particular patient, based on his or her genetic information. In this review we discuss the most promising genetic markers of antipsychotic treatment outcomes, and present current translational research efforts that aim to bring these pharmacogenetic findings to the clinic in the near future.

  10. Pharmacogenetics and outcome with antipsychotic drugs

    PubMed Central

    Pouget, Jennie G.; Shams, Tahireh A.; Tiwari, Arun K.; Müller, Daniel J.

    2014-01-01

    Antipsychotic medications are the gold-standard treatment for schizophrenia, and are often prescribed for other mental conditions. However, the efficacy and side-effect profiles of these drugs are heterogeneous, with large interindividual variability. As a result, treatment selection remains a largely trial-and-error process, with many failed treatment regimens endured before finding a tolerable balance between symptom management and side effects. Much of the interindividual variability in response and side effects is due to genetic factors (heritability, h^2 ~ 0.60-0.80). Pharmacogenetics is an emerging field that holds the potential to facilitate the selection of the best medication for a particular patient, based on his or her genetic information. In this review we discuss the most promising genetic markers of antipsychotic treatment outcomes, and present current translational research efforts that aim to bring these pharmacogenetic findings to the clinic in the near future. PMID:25733959

  11. Spin Tracking of Polarized Protons in the Main Injector at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, M.; Lorenzon, W.; Aldred, C.

    2016-07-01

    The Main Injector (MI) at Fermilab currently produces high-intensity beams of protons at energies of 120 GeV for a variety of physics experiments. Acceleration of polarized protons in the MI would provide opportunities for a rich spin physics program at Fermilab. To achieve polarized proton beams in the Fermilab accelerator complex, shown in Fig. 1.1, detailed spin tracking simulations with realistic parameters based on the existing facility are required. This report presents studies at the MI using a single 4-twist Siberian snake to determine the depolarizing spin resonances for the relevant synchrotrons. Results will be presented first for a perfect MI lattice, followed by a lattice that includes the real MI imperfections, such as the measured magnet field errors and quadrupole misalignments. The tolerances of each of these factors in maintaining polarization in the Main Injector will be discussed.

  12. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    PubMed

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  13. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    PubMed Central

    Power, H.

    2017-01-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen–Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error. PMID:28878974

  14. 7 CFR 802.0 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... requirements set forth in this part 802 describe certain specifications, tolerances, and other technical... Standards and Technology (NIST) Handbook 44, “Specifications, Tolerances, and Other Technical Requirements...), “Specifications and Tolerances for Reference Standards and Field Standard Weights and Measures,” (Handbook 105-1...

  15. Competitive ability, stress tolerance and plant interactions along stress gradients.

    PubMed

    Qi, Man; Sun, Tao; Xue, SuFeng; Yang, Wei; Shao, DongDong; Martínez-López, Javier

    2018-04-01

    Exceptions to the generality of the stress-gradient hypothesis (SGH) may be reconciled by considering species-specific traits and stress tolerance strategies. Studies have tested stress tolerance and competitive ability in mediating interaction outcomes, but few have incorporated this to predict how species interactions shift between competition and facilitation along stress gradients. We used field surveys, salt tolerance and competition experiments to develop a predictive model of interspecific interaction shifts across salinity stress gradients. Field surveys and greenhouse tolerance tests revealed tradeoffs between stress tolerance and competitive ability. Modeling showed that along salinity gradients, (1) plant interactions shifted from competition to facilitation at high salinities within the physiological limits of salt-intolerant plants, (2) facilitation collapsed when salinity stress exceeded the physiological tolerance of salt-intolerant plants, and (3) neighbor removal experiments overestimate interspecific facilitation by including intraspecific effects. A community-level field experiment suggested that (1) species interactions are competitive in benign conditions and facilitative in harsh conditions, but fuzzy under medium environmental stress due to niche differences of species and weak stress amelioration, and (2) the SGH works on strong but not weak stress gradients, so SGH confusion arises when it is applied across questionable stress gradients. Our study clarifies how species interactions vary along stress gradients. Moving forward, focusing on SGH applications rather than exceptions on weak or nonexistent gradients would be most productive. © 2018 by the Ecological Society of America.

  16. High-Threshold Low-Overhead Fault-Tolerant Classical Computation and the Replacement of Measurements with Unitary Quantum Gates.

    PubMed

    Cruikshank, Benjamin; Jacobs, Kurt

    2017-07-21

    von Neumann's classic "multiplexing" method is unique in achieving high-threshold fault-tolerant classical computation (FTCC), but has several significant barriers to implementation: (i) the extremely complex circuits required by randomized connections, (ii) the difficulty of calculating its performance in practical regimes of both code size and logical error rate, and (iii) the (perceived) need for large code sizes. Here we present numerical results indicating that the third assertion is false, and introduce a novel scheme that eliminates the two remaining problems while retaining a threshold very close to von Neumann's ideal of 1/6. We present a simple, highly ordered wiring structure that vastly reduces the circuit complexity, demonstrates that randomization is unnecessary, and provides a feasible method to calculate the performance. This in turn allows us to show that the scheme requires only moderate code sizes, vastly outperforms concatenation schemes, and under a standard error model a unitary implementation realizes universal FTCC with an accuracy threshold of p<5.5%, in which p is the error probability for 3-qubit gates. FTCC is a key component in realizing measurement-free protocols for quantum information processing. In view of this, we use our scheme to show that all-unitary quantum circuits can reproduce any measurement-based feedback process in which the asymptotic error probabilities for the measurement and feedback are (32/63)p≈0.51p and 1.51p, respectively.

  17. Demonstrating Starshade Performance as Part of NASA's Technology Development for Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Kasdin, N. Jeremy; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M. W.; Walkemeyer, P. E.; Bach, V. M.; Oakes, E.; Cady, E. J.; Martin, S. R.; Marchen, L. F.; Macintosh, B.; Rudd, R.; Mikula, J. A.; Lynch, D. H.

    2012-01-01

    In this poster we describe the results of our project to design, manufacture, and measure a prototype starshade petal as part of the Technology Development for Exoplanet Missions program. An external occulter is a satellite employing a large screen, or starshade, that flies in formation with a spaceborne telescope to provide the starlight suppression needed for detecting and characterizing exoplanets. Among the advantages of using an occulter are the broad bandwidth allowed for characterization and the removal of the starlight external to the observatory, greatly relaxing the requirements on the telescope and instrument. In this first two-year phase we focused on the key requirement of manufacturing a precision petal with the tolerances needed to meet the overall error budget. These tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We show the results of this analysis and a representative error budget. We also present the final manufactured occulter petal and the metrology on its shape that demonstrates it meets requirements. We show that a space occulter built of petals with the same measured shape would achieve better than 1e-9 contrast. We also show our progress in building and testing sample edges with the sharp radius of curvature needed for limiting solar glint. Finally, we describe our plans for the second TDEM phase.

  18. Quality Assurance Results for a Commercial Radiosurgery System: A Communication.

    PubMed

    Ruschin, Mark; Lightstone, Alexander; Beachey, David; Wronski, Matt; Babic, Steven; Yeboah, Collins; Lee, Young; Soliman, Hany; Sahgal, Arjun

    2015-10-01

    The purpose of this communication is to inform the radiosurgery community of quality assurance (QA) results requiring attention in a commercial FDA-approved linac-based cone stereotactic radiosurgery (SRS) system. Standard published QA guidelines as per the American Association of Physicists in Medicine (AAPM) were followed during the SRS system's commissioning process, including end-to-end testing, cone concentricity testing, image transfer verification, and documentation. Several software and hardware deficiencies that were deemed risky were uncovered during the process, and QA processes were put in place to mitigate these risks during clinical practice. In particular, the present work focuses on daily cone concentricity testing and commissioning-related findings associated with the software. Cone concentricity/alignment is measured daily using both optical light field inspection and quantitative radiation field tests with the electronic portal imager. In 10 out of 36 clinical treatments, adjustments to the cone position had to be made to align the cone with the collimator axis to less than 0.5 mm, and on two occasions the pre-adjustment measured offset was 1.0 mm. Software-related errors discovered during commissioning included incorrect transfer of the isocentre in DICOM coordinates, improper handling of non-axial image sets, and complex handling of beam data, especially for multi-target treatments. QA processes were established to mitigate the occurrence of the software errors. With proper QA processes, the reported SRS system complies with tolerances set out in established guidelines. Discussions with the vendor are ongoing to address some of the hardware issues related to cone alignment. © The Author(s) 2014.

  19. Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity

    NASA Technical Reports Server (NTRS)

    Lin, J. Y.; Mingori, D. L.

    1992-01-01

    We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.

  20. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    NASA Astrophysics Data System (ADS)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-01

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  1. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE PAGES

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-21

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  2. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  3. Pushing particles in extreme fields

    NASA Astrophysics Data System (ADS)

    Gordon, Daniel F.; Hafizi, Bahman; Palastro, John

    2017-03-01

    The update of the particle momentum in an electromagnetic simulation typically employs the Boris scheme, which has the advantage that the magnetic field strictly performs no work on the particle. In an extreme field, however, it is found that onerously small time steps are required to maintain accuracy. One reason for this is that the operator splitting scheme fails. In particular, even if the electric field impulse and magnetic field rotation are computed exactly, a large error remains. The problem can be analyzed for the case of constant, but arbitrarily polarized and independent electric and magnetic fields. The error can be expressed in terms of exponentials of nested commutators of the generators of boosts and rotations. To second order in the field, the Boris scheme causes the error to vanish, but to third order in the field, there is an error that has to be controlled by decreasing the time step. This paper introduces a scheme that avoids this problem entirely, while respecting the property that magnetic fields cannot change the particle energy.
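
    For reference, the standard non-relativistic Boris push analysed above proceeds by a half electric impulse, an exact-angle magnetic rotation, and a second half impulse; the sketch below uses illustrative field values and charge-to-mass ratio, and is the conventional scheme rather than the corrected one proposed in the paper.

    import numpy as np

    def boris_push(v, E, B, q_over_m, dt):
        """Advance velocity v by one time step dt in constant fields E and B."""
        v_minus = v + 0.5 * q_over_m * dt * E          # first half electric kick
        t = 0.5 * q_over_m * dt * B                    # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)        # magnetic rotation
        return v_plus + 0.5 * q_over_m * dt * E        # second half electric kick

    v = np.array([1.0e5, 0.0, 0.0])
    E = np.array([0.0, 1.0e3, 0.0])
    B = np.array([0.0, 0.0, 0.1])
    for _ in range(100):
        v = boris_push(v, E, B, q_over_m=-1.76e11, dt=1e-12)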

  4. Magnetic Field Measurements of the Spotted Yellow Dwarf DE Boo During 2001-2004

    NASA Astrophysics Data System (ADS)

    Plachinda, S.; Baklanova, D.; Butkovskaya, V.; Pankov, N.

    2017-06-01

    Spectropolarimetric observations of DE Boo were performed at the Crimean Astrophysical Observatory during 18 nights in 2001-2004. We present the results of the longitudinal magnetic field measurements of this star. The magnetic field varies from +44 G to -36 G with a mean standard error (SE) of 8.2 G. For the full array of magnetic field measurements, the difference between experimental errors and Monte Carlo errors is not statistically significant.

  5. Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields

    NASA Astrophysics Data System (ADS)

    Bettadpur, S.

    2012-04-01

    The next-generation Release-05 GRACE gravity field data products are the result of extensive effort applied to the improvements to the GRACE Level-1 (tracking) data products, and to improvements in the background gravity models and processing methodology. As a result, the squared-error upper-bound in RL05 fields is half or less than the squared-error upper-bound in RL04 fields. The CSR-RL05 field release consists of unconstrained gravity fields as well as a regularized gravity field time-series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products, and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse Hydrologic, Oceanographic and Cryospheric processes.

  6. Radiation tolerant compact image sensor using CdTe photodiode and field emitter array (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Masuzawa, Tomoaki; Neo, Yoichiro; Mimura, Hidenori; Okamoto, Tamotsu; Nagao, Masayoshi; Akiyoshi, Masafumi; Sato, Nobuhiro; Takagi, Ikuji; Tsuji, Hiroshi; Gotoh, Yasuhito

    2016-10-01

    A growing demand for incident detection has been recognized since the Great East Japan Earthquake and the subsequent accidents at the Fukushima nuclear power plant in 2011. Radiation-tolerant image sensors are powerful tools for collecting crucial information in the initial stages of such incidents. However, semiconductor-based image sensors such as CMOS and CCD have limited tolerance to radiation exposure. Image sensors used in nuclear facilities are conventional vacuum tubes using thermal cathodes, which have large size and high power consumption. In this study, we propose a compact image sensor composed of a CdTe-based photodiode and a matrix-driven Spindt-type electron beam source called a field emitter array (FEA). The basic principle of FEA-based image sensors is similar to that of conventional Vidicon-type camera tubes, but the thermal cathode electron source is replaced with an FEA. The use of a field emitter as an electron source should enable significant size reduction while maintaining high radiation tolerance. Current research on radiation-tolerant FEAs and the development of CdTe-based photoconductive films will be presented.

  7. Bee genera, diversity and abundance in genetically modified canola fields.

    PubMed

    O'Brien, Colton; Arathi, H S

    2018-01-02

    Intensive agricultural practices resulting in large-scale habitat loss rank among the top contributing factors in the global bee decline. Growing genetically modified herbicide-tolerant (GMHT) crops as large monocultures has resulted in extensive application of herbicides, leading to the degradation of natural habitats surrounding farmlands. The herbicide tolerance trait is beneficial for crops such as canola (Brassica napus) that are extremely vulnerable to weed competition. While the trait in itself does not harm pollinators, growing genetically modified herbicide-tolerant cultivars indirectly contributes to pollinator declines through habitat loss. Canola, a mass-flowering crop, is highly attractive to bee pollinators, and the extensive adoption of the herbicide tolerance trait has led to depletion of non-crop floral resources. Extensive use of herbicide in and near fields with herbicide-tolerant cultivars systematically eliminates semi-natural habitats around agricultural fields, which consist of non-crop flowering plants. Planting pollinator strips provides floral resources for bees after crop flowering. We document the bee genera in canola and the adjoining pollinator strip. The overlap in bee genera reinforces the importance of pollinator habitats in the agricultural landscape.

  8. Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers

    NASA Technical Reports Server (NTRS)

    Ha, Eunho; North, Gerald R.

    1995-01-01

    Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
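
    The origin of the beam-filling bias can be illustrated numerically: for a saturating (concave) brightness-temperature curve, inverting the footprint-averaged brightness temperature underestimates the footprint-averaged rain rate. The toy Tb(R) relation and lognormal rain statistics below are illustrative assumptions, not the paper's models.

    import numpy as np

    rng = np.random.default_rng(1)

    def tb(rain):                       # toy saturating brightness-temperature curve
        return 280.0 - 120.0 * np.exp(-0.2 * rain)

    def rain_from_tb(t):                # its exact inverse
        return -5.0 * np.log((280.0 - t) / 120.0)

    rain_field = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)   # sub-footprint rain
    true_mean = rain_field.mean()
    retrieved = rain_from_tb(tb(rain_field).mean())                 # invert the average Tb
    print(f"true {true_mean:.2f} mm/h, retrieved {retrieved:.2f} mm/h "
          f"-> beam-filling bias {retrieved - true_mean:.2f} mm/h")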

  9. Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D

    NASA Astrophysics Data System (ADS)

    La Haye, R. J.; Paz-Soldan, C.; Strait, E. J.

    2015-02-01

    DIII-D experiments show that fully penetrated resonant n = 1 error field locked modes in ohmic plasmas with safety factor q95 ≳ 3 grow to similar large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n = 2/1) static error fields are shielded in ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption. Error field correction (EFC) is performed on DIII-D (in ITER relevant shape and safety factor q95 ≳ 3) with either the n = 1 C-coil (no handedness) or the n = 1 I-coil (with ‘dominantly’ resonant field pitch). Despite EFC, which allows significantly lower plasma density (a ‘figure of merit’) before penetration occurs, the resulting saturated islands have similar large size; they differ only in the phase of the locked mode after typically being pulled (by up to 30° toroidally) in the electron diamagnetic drift direction as they grow to saturation. Island amplification and phase shift are explained by a second change-of-state in which the classical tearing index changes from stable to marginal by the presence of the island, which changes the current density profile. The eventual island size is thus governed by the inherent stability and saturation mechanism rather than the driving error field.

  10. Site‐specific tolerance tables and indexing device to improve patient setup reproducibility

    PubMed Central

    James, Joshua A.; Cetnar, Ashley J.; McCullough, Mark A.; Wang, Brian

    2015-01-01

    While the implementation of tools such as image‐guidance and immobilization devices has helped to prevent geometric misses in radiation therapy, many treatments remain prone to error if these items are not available, not utilized for every fraction, or are misused. The purpose of this project is to design a set of site‐specific treatment tolerance tables to be applied to the treatment couch for use in a record and verify (R&V) system that will ensure accurate patient setup with minimal workflow interruption. This project also called for the construction of a simple indexing device to help ensure reproducible patient setup for patients that could not be indexed with existing equipment. The tolerance tables were created by retrospective analysis on a total of 66 patients and 1,308 treatments, separating them into five categories based on disease site: lung, head and neck (H&N), breast, pelvis, and abdomen. Couch parameter tolerance tables were designed to encompass 95% of treatments, and were generated by calculating the standard deviation of couch vertical, longitudinal, and lateral values using the first day of treatment as a baseline. We also investigated an alternative method for generating the couch tolerances by updating the baseline values when patient position was verified with image guidance. This was done in order to adapt the tolerances to any gradual changes in patient setup that would not correspond with a mistreatment. The tolerance tables and customizable indexing device were then implemented for a trial period in order to determine the feasibility of the system. During this trial period we collected data from 1,054 fractions from 65 patients. We then analyzed the number of treatments that would have been out of tolerance, as well as whether or not the tolerances or setup techniques should be adjusted. When the couch baseline values were updated with every imaging fraction, the average rate of tolerance violations was 10% for the lung, H&N, abdomen, and pelvis treatments. Using the indexing device, tolerances for patients with pelvic disease decreased (e.g., from 5.3 cm to 4.3 cm longitudinally). Unfortunately, the results from breast patients were highly variable due to the complexity of the setup technique, making the couch an inadequate surrogate for measuring setup accuracy. In summary, we have developed a method to turn the treatment couch parameters within the R&V system into a useful alert tool, which can be implemented at other institutions, in order to identify potential errors in patient setup. PACS numbers: 87.53Kn, 87.55.kh, 87.55.ne, 87.55.km, 87.55K‐, 87.55.Qr PMID:26103475
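
    A simplified sketch of how such site-specific couch tolerances could be derived is shown below; the two-standard-deviation rule (targeting roughly 95% coverage) and the example couch readings are illustrative assumptions rather than the published procedure.

    import numpy as np

    def site_tolerance(couch_positions, baseline):
        """couch_positions: (n_fractions, 3) vertical/longitudinal/lateral values for one site."""
        deltas = np.asarray(couch_positions) - np.asarray(baseline)
        return 2.0 * deltas.std(axis=0)      # ~95% coverage for roughly normal deltas

    # Example: fake pelvis fractions (cm), baseline taken from the first treatment
    fractions = np.array([[10.2, 50.1, 0.3],
                          [10.5, 51.0, 0.1],
                          [ 9.8, 49.4, 0.4],
                          [10.1, 50.6, 0.2]])
    tol_vert, tol_long, tol_lat = site_tolerance(fractions, baseline=fractions[0])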

  11. First measurements of error fields on W7-X using flux surface mapping

    DOE PAGES

    Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; ...

    2016-08-03

    Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field iota-bar = 1/2 magnetic configuration (iota-bar = ι/2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small ~0.04 m intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.

  12. Predictability of dune activity in real dune fields under unidirectional wind regimes

    NASA Astrophysics Data System (ADS)

    Barchyn, Thomas E.; Hugenholtz, Chris H.

    2015-02-01

    We present an analysis of 10 dune fields to test a model-derived hypothesis of dune field activity. The hypothesis suggests that a quantifiable threshold exists for stabilization in unidirectional wind regimes: active dunes have slipface deposition rates that exceed the vegetation deposition tolerance, and stabilizing dunes have the opposite. We quantified aeolian sand flux, slipface geometry, and vegetation deposition tolerance to directly test the hypothesis at four dune fields (Bigstick, White Sands Stable, White Sands Active, and Cape Cod). We indirectly tested the hypothesis at six additional dune fields with limited vegetation data (Hanford, Año Nuevo, Skagen Odde, Salton Sea, Oceano Stable, and Oceano Active, "inverse calculation sites"). We used digital topographic data and estimates of aeolian sand flux to approximate the slipface deposition rates prior to stabilization. Results revealed a distinct, quantifiable, and consistent pattern despite diverse environmental conditions: the modal peak of prestabilization slipface deposition rates was 80% of the vegetation deposition tolerance at stabilized or stabilizing dune fields. Results from inverse calculation sites indicate deposition rates at stabilized sites were near a hypothesized maximum vegetation deposition tolerance (1 m a-1), and active sites had slipface deposition rates much higher. Overall, these results confirm the hypothesis and provide evidence of a globally applicable, simple, and previously unidentified predictor for the dynamics of vegetation cover in dune fields under unidirectional wind regimes.
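
    The activity criterion described above reduces to a simple threshold comparison; the sketch below assumes the hypothesised ~1 m/yr maximum vegetation deposition tolerance cited in the abstract and uses illustrative deposition rates.

    def dune_field_state(slipface_deposition_m_per_yr, vegetation_tolerance_m_per_yr=1.0):
        """Predict whether a dune field stays active or stabilizes under a unidirectional wind regime."""
        return ("active" if slipface_deposition_m_per_yr > vegetation_tolerance_m_per_yr
                else "stabilizing")

    print(dune_field_state(1.8))   # -> "active"
    print(dune_field_state(0.8))   # -> "stabilizing"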

  13. Rhizobacteria and plant symbiosis in heavy metal uptake and its implications for soil bioremediation.

    PubMed

    Sobariu, Dana Luminița; Fertu, Daniela Ionela Tudorache; Diaconu, Mariana; Pavel, Lucian Vasile; Hlihor, Raluca-Maria; Drăgoi, Elena Niculina; Curteanu, Silvia; Lenz, Markus; Corvini, Philippe François-Xavier; Gavrilescu, Maria

    2017-10-25

    Certain species of plants can benefit from synergistic effects with plant growth-promoting rhizobacteria (PGPR) that improve plant growth and metal accumulation, mitigating toxic effects on plants and increasing their tolerance to heavy metals. The application of PGPR as biofertilizers and atmospheric nitrogen fixators contributes considerably to the intensification of the phytoremediation process. In this paper, we have built a system consisting of rhizospheric Azotobacter microbial populations and Lepidium sativum plants, growing in solutions containing heavy metals in various concentrations. We examined the ability of the organisms to grow in symbiosis so as to stimulate the plant growth and enhance its tolerance to Cr(VI) and Cd(II), to ultimately provide a reliable phytoremediation system. The study was developed at the laboratory level and, at this stage, does not assess the inherent interactions under real conditions occurring in contaminated fields with autochthonous microflora and under different pedoclimatic conditions and environmental stresses. Azotobacter sp. bacteria could indeed stimulate the average germination efficiency of Lepidium sativum by almost 7%, average root length by 22%, average stem length by 34% and dry biomass by 53%. The growth of L. sativum was affected to a greater extent in Cd(II) solutions due to its higher toxicity compared to that of Cr(VI). The reduced tolerance index (TI, %) indicated that plant growth in symbiosis with PGPR was nevertheless affected by heavy metal toxicity, while the tolerance of the plant to heavy metals was enhanced in the bacteria-plant system. A methodology based on artificial neural networks (ANNs) and differential evolution (DE), specifically a neuro-evolutionary approach, was applied to model germination rates, dry biomass and root/stem length, proving the robustness of the experimental data. The errors associated with all four variables are small and the correlation coefficients higher than 0.98, which indicate that the selected models can efficiently predict the experimental data. Copyright © 2016. Published by Elsevier B.V.

  14. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1996-01-01

    We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.
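
    A highly simplified version of the distortion idea can be illustrated by searching for the single uniform shift that best aligns forecast and analysis on a periodic grid; the real method fits smooth displacement and bias-correction fields in spherical harmonics, so the sketch below is only a toy analogue with illustrative fields.

    import numpy as np

    def best_uniform_shift(forecast, analysis, max_shift=10):
        """Return the (dy, dx) grid shift minimising the RMS difference."""
        best = (0, 0)
        best_rms = np.sqrt(np.mean((forecast - analysis) ** 2))
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(forecast, (dy, dx), axis=(0, 1))
                rms = np.sqrt(np.mean((shifted - analysis) ** 2))
                if rms < best_rms:
                    best, best_rms = (dy, dx), rms
        return best, best_rms

    # Example: a field whose error is purely a 3-gridpoint eastward phase shift
    y, x = np.mgrid[0:64, 0:64]
    analysis = np.sin(2 * np.pi * x / 64.0)
    forecast = np.roll(analysis, -3, axis=1)
    shift, residual = best_uniform_shift(forecast, analysis)   # shift is (0, 3), residual ~ 0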

  15. Effects of Optical Combiner and IPD Change for Convergence on Near-Field Depth Perception in an Optical See-Through HMD.

    PubMed

    Lee, Sangyoon; Hu, Xinda; Hua, Hong

    2016-05-01

    Many error sources have been explored in regards to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first source of error arises from the difference in refraction for virtual and see-through optical paths caused by an optical combiner, which is required of OST-HMDs. The second occurs from the change in the viewer's IPD due to eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for these two types of errors. Furthermore, we investigate their effectiveness through an experiment comparing the conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven different near-field distances (40∼200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on the target distance, the results demonstrated that the near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.
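
    The ray-shift error source can be illustrated by modelling the optical combiner as a tilted plane-parallel plate, which is an assumption made here for illustration rather than the paper's combiner geometry; the lateral displacement of the see-through ray then follows the standard plate formula.

    import numpy as np

    def lateral_ray_shift(thickness_mm, n, incidence_deg):
        """Lateral displacement of a ray passing through a plane-parallel plate."""
        theta = np.radians(incidence_deg)
        return thickness_mm * np.sin(theta) * (
            1.0 - np.cos(theta) / np.sqrt(n ** 2 - np.sin(theta) ** 2))

    # Example: a 3 mm combiner plate at 45 degrees incidence, refractive index 1.5
    print(f"{lateral_ray_shift(3.0, 1.5, 45.0):.3f} mm")   # roughly 1 mm of shift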

  16. A median filter approach for correcting errors in a vector field

    NASA Technical Reports Server (NTRS)

    Schultz, H.

    1985-01-01

    Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
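
    A minimal sketch of the detect-and-replace idea (not the NSCAT algorithm itself): each wind vector is compared with the median of its 3x3 neighbourhood and replaced when it deviates by more than an illustrative tolerance.

```python
# Median-filter based error detection and replacement for a 2-D vector (wind) field.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
ny, nx = 20, 20
u = np.full((ny, nx), 5.0) + 0.2 * rng.standard_normal((ny, nx))   # synthetic wind field
v = np.full((ny, nx), 1.0) + 0.2 * rng.standard_normal((ny, nx))
u[7, 11], v[7, 11] = -6.0, 9.0                                     # one corrupted cell

u_med = median_filter(u, size=3)              # local median of each component
v_med = median_filter(v, size=3)

deviation = np.hypot(u - u_med, v - v_med)    # distance from the local median vector
bad = deviation > 3.0                         # tolerance in m/s (illustrative)

u_corr, v_corr = u.copy(), v.copy()
u_corr[bad], v_corr[bad] = u_med[bad], v_med[bad]   # replace flagged vectors

print("flagged cells:", np.argwhere(bad).tolist())
```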

  17. 75 FR 68214 - Flubendiamide; Pesticide Tolerances; Technical Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-05

    ... established tolerances for corn, field, grain; corn, field, stover; corn, sweet, stover; and cotton gin... ppm); corn, sweet, stover (0.25 ppm); and cotton gin byproducts (0.60 ppm). As supported by submitted..., stover; corn, sweet, stover; and cotton gin byproducts in the table in Sec. 180.369(a)(1). III. Why is...

  18. PRN 2007-2: Guidance on Small-Scale Field Testing and Low-level Presence in Food of Plant-Incorporated Protectants (PIPs)

    EPA Pesticide Factsheets

    This notice clarifies how EPA ensures the safety of residues of PIPs possibly present in food or feed and when a tolerance or tolerance exemption would be required for field tests for biotechnology-derived food and feed crop plants containing PIPs.

  19. Flooding tolerance of soybean (Glycine max) germplasm from southeast Asia under field and screen-house environment

    USDA-ARS?s Scientific Manuscript database

    Soybean (Glycine max L. Merr.) cultivars from the U.S. are generally intolerant to flooding stress. Soybean germplasm and cultivars originating from other countries and grown in rotations with paddy rice potentially could have better flooding tolerance. Screen-house and field tests were conducted to...

  20. Errors and optics study of a permanent magnet quadrupole system

    NASA Astrophysics Data System (ADS)

    Schillaci, F.; Maggiore, M.; Rifuggiato, D.; Cirrone, G. A. P.; Cuttone, G.; Giove, D.

    2015-05-01

    Laser-based accelerators have been gaining interest in recent years as an alternative to conventional machines [1]. Nowadays, energy and angular spread of the laser-driven beams are the main issues in applications, and different solutions for dedicated beam-transport lines have been proposed [2,3]. In this context a system of permanent magnet quadrupoles (PMQs) is going to be realized by INFN [2] researchers, in collaboration with the SIGMAPHI [3] company in France, to be used as a collection and pre-selection system for laser-driven proton beams. The definition of well specified characteristics of the magnetic lenses, both in terms of performance and field quality, is crucial for the system realization, for an accurate study of the beam dynamics and for the proper matching with a magnetic selection system already realized [6,7]. Hence, different series of simulations have been used for studying the PMQ harmonic contents and stating the mechanical and magnetic tolerances needed to obtain reasonably good beam quality downstream of the system. This paper reports the method used for the analysis of the PMQ errors and its validation. A preliminary optics characterization is also presented, comparing the effects of an ideal PMQ system and a perturbed system on a monochromatic proton beam.

  1. A chevron beam-splitter interferometer

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.

    1979-01-01

    A fully tilt-compensated double-pass chevron beam splitter, which removes channelling effects and permits optical phase tuning, is wavelength independent and allows small alignment errors that are not tolerated in Michelson, Mach-Zehnder, or Sagnac interferometers. The device is very useful in experiments where background vibration affects conventional interferometers.

  2. Technique for positioning hologram for balancing large data capacity with fast readout

    NASA Astrophysics Data System (ADS)

    Shimada, Ken-ichi; Hosaka, Makoto; Yamazaki, Kazuyoshi; Onoe, Shinsuke; Ide, Tatsuro

    2017-09-01

    The technical difficulty of balancing large data capacity with a high data transfer rate in holographic data storage systems (HDSSs) is significantly high because of tight tolerances for physical perturbation. From a system margin perspective in terabyte-class HDSSs, the positioning error of a holographic disc should be within about 10 µm to ensure high readout quality. Furthermore, fine control of the positioning should be accomplished within a time frame of about 10 ms for a high data transfer rate of the Gbps class, while a conventional method based on servo control of spindle or sled motors can rarely satisfy the requirement. In this study, a new compensation method for the effect of positioning error, which precisely controls the positioning of a Nyquist aperture instead of a holographic disc, has been developed. The method relaxes the markedly low positional tolerance of a holographic disc. Moreover, owing to the markedly light weight of the aperture, positioning control within the required time frame becomes feasible.

  3. Laboratory test methodology for evaluating the effects of electromagnetic disturbances on fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1989-01-01

    Control systems for advanced aircraft, especially those with relaxed static stability, will be critical to flight and will, therefore, have very high reliability specifications which must be met for adverse as well as nominal operating conditions. Adverse conditions can result from electromagnetic disturbances caused by lightning, high energy radio frequency transmitters, and nuclear electromagnetic pulses. Tools and techniques must be developed to verify the integrity of the control system in adverse operating conditions. The most difficult and elusive perturbations to computer based control systems caused by an electromagnetic environment (EME) are functional error modes that involve no component damage. These error modes are collectively known as upset, can occur simultaneously in all of the channels of a redundant control system, and are software dependent. A methodology is presented for performing upset tests on a multichannel control system and considerations are discussed for the design of upset tests to be conducted in the lab on fault tolerant control systems operating in a closed loop with a simulated plant.

  4. Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.

  5. Investigation of homodyne demodulation of RZ-BPSK signal based on an optical Costas loop

    NASA Astrophysics Data System (ADS)

    Zhou, Haijun; Zhu, Zunzhen; Xie, Weilin; Dong, Yi

    2018-01-01

    We demonstrate the coherent detection of a 10 Gb/s return-to-zero (RZ) binary phase-shift keying (BPSK) signal based on a homodyne Costas optical phase-locked loop (OPLL). The receiver demonstrates a time misalignment tolerance between the pulse carver and the phase modulator of +/- 10% of the bit period of the transmitted RZ-BPSK signal, i.e. -20 to +20 ps for a 5 Gb/s RZ-BPSK signal and -10 to +10 ps for a 10 Gb/s RZ-BPSK signal. In addition, the Costas coherent receiver shows a 2.5 dB sensitivity improvement over conventional 5 Gb/s NRZ-BPSK and 1.4 dB over 10 Gb/s NRZ-BPSK, at the cost of only a slightly higher residual phase error. These merits of RZ-BPSK modulation, namely sufficient tolerance to misalignment, higher receiver sensitivity, and low residual phase error, make it attractive for free space optical (FSO) communication, supporting a higher link budget and a longer transmission distance.

  6. Quantum Steering Inequality with Tolerance for Measurement-Setting Errors: Experimentally Feasible Signature of Unbounded Violation

    NASA Astrophysics Data System (ADS)

    Rutkowski, Adam; Buraczewski, Adam; Horodecki, Paweł; Stobińska, Magdalena

    2017-01-01

    Quantum steering is a relatively simple test for proving that the values of quantum-mechanical measurement outcomes come into being only in the act of measurement. By exploiting quantum correlations, Alice can influence—steer—Bob's physical system in a way that is impossible in classical mechanics, as shown by the violation of steering inequalities. Demonstrating this and similar quantum effects for systems of increasing size, approaching even the classical limit, is a long-standing challenging problem. Here, we prove an experimentally feasible unbounded violation of a steering inequality. We derive its universal form where tolerance for measurement-setting errors is explicitly built in by means of the Deutsch-Maassen-Uffink entropic uncertainty relation. Then, generalizing the mutual unbiasedness, we apply the inequality to the multisinglet and multiparticle bipartite Bell state. However, the method is general and opens the possibility of employing multiparticle bipartite steering for randomness certification and development of quantum technologies, e.g., random access codes.

  7. Punched belt hole position deviation analysis of float type water level gauge

    NASA Astrophysics Data System (ADS)

    Mao, Chunlei; Wang, Tao; Fu, Weijie; Li, Lianhui

    2018-03-01

    The key part of the float-type water level gauge is the perforated belt. The size and tolerance requirements of its hole positions are: (1) alternation of 100+0.2 and 100-0.2, (2) 200±0.1, (3) 1000±0.15, (4) 10000±0.2. The single hole position requirement is the alternation of 100+0.2 and 100-0.2; the double hole position requirement is 200±0.1. In addition, a one-way tendency of the hole position error should be avoided: when the perforated belt moves with the rotating water wheel, a hole position error that keeps increasing or decreasing in a single direction causes the water level pin to move gradually toward the edge of the hole, until the belt is finally lifted at the edge. This paper uses a laser drilling process on steel strip for data collection and analysis. It is found that this method cannot meet the tolerance requirements, while a double stamping processing method with an adjustable cylindrical pin is feasible.

  8. Estimating Aboveground Biomass in Tropical Forests: Field Methods and Error Analysis for the Calibration of Remote Sensing Observations

    DOE PAGES

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...

    2017-01-07

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
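
    For illustration only, independent plot-level error sources expressed as percent standard errors combine in quadrature; the numbers below are invented, chosen merely to mimic the relative importance reported above.

```python
# Hedged illustration: combine independent error sources in quadrature and report
# each source's share of the total variance. Percent values are invented.
import numpy as np

sources = {"measurement": 5.0, "allometric": 12.0, "co-location": 15.0, "temporal": 10.0}
total = np.sqrt(sum(v ** 2 for v in sources.values()))

for name, v in sources.items():
    print(f"{name:12s}: {v:5.1f}%  ({(v / total) ** 2:.0%} of variance)")
print(f"{'total':12s}: {total:5.1f}%")
```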

  9. Error suppression via complementary gauge choices in Reed-Muller codes

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Jochym-O'Connor, Tomas

    2017-09-01

    Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.

  10. Effect of ancilla's structure on quantum error correction using the seven-qubit Calderbank-Shor-Steane code

    NASA Astrophysics Data System (ADS)

    Salas, P. J.; Sanz, A. L.

    2004-05-01

    In this work we discuss the ability of different types of ancillas to control the decoherence of a qubit interacting with an environment. The error is introduced into the numerical simulation via a depolarizing isotropic channel. The ranges of values considered are 10^-4 ≤ ε ≤ 10^-2 for memory errors and 3×10^-5 ≤ γ/7 ≤ 10^-2 for gate errors. After the correction we calculate the fidelity as a quality criterion for the qubit recovered. We observe that a recovery method with a three-qubit ancilla provides reasonably good results bearing in mind its economy. If we want to go further, we have to use fault tolerant ancillas with a high degree of parallelism, even if this condition implies introducing additional ancilla verification qubits.

  11. Improved Quality in Aerospace Testing Through the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
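
    A small numerical illustration (not taken from the paper) of why run order matters: with a slow instrument drift, sorting the set points by level aliases the drift into the fitted slope, while a randomized order largely decouples the two.

```python
# Simulated experiment with a slow instrument drift: compare a monotone set-point
# order against a randomized order. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
levels = np.repeat(np.linspace(0.0, 10.0, 11), 3)      # 11 set points, 3 replicates each
true_slope, drift_rate = 2.0, 0.05                     # drift in response units per run

def fitted_slope(order):
    t = np.arange(order.size)                          # run index doubles as time
    y = true_slope * order + drift_rate * t + 0.1 * rng.standard_normal(order.size)
    return np.polyfit(order, y, 1)[0]

sequential = np.sort(levels)                           # fastest to acquire, biased result
randomized = rng.permutation(levels)                   # randomized run order

print(f"slope, sequential order: {fitted_slope(sequential):.3f}")
print(f"slope, randomized order: {fitted_slope(randomized):.3f}  (true = {true_slope})")
```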

  12. Error field measurement, correction and heat flux balancing on Wendelstein 7-X

    DOE PAGES

    Lazerson, Samuel A.; Otte, Matthias; Jakubowski, Marcin; ...

    2017-03-10

    The measurement and correction of error fields in Wendelstein 7-X (W7-X) is critical to long pulse high beta operation, as small error fields may cause overloading of divertor plates in some configurations. Accordingly, as part of a broad collaborative effort, the detection and correction of error fields on the W7-X experiment has been performed using the trim coil system in conjunction with the flux surface mapping diagnostic and high resolution infrared camera. In the early commissioning phase of the experiment, the trim coils were used to open an n/m = 1/2 island chain in a specially designed magnetic configuration. The flux surface mapping diagnostic was then able to directly image the magnetic topology of the experiment, allowing the inference of a small ~4 cm intrinsic island chain. The suspected main sources of the error field, slight misalignment and deformations of the superconducting coils, were then confirmed through experimental modeling using the detailed measurements of the coil positions. Observations of the limiter temperatures in module 5 show a clear dependence of the limiter heat flux pattern on the rotation of the perturbing fields. Plasma experiments without applied correcting fields show a significant asymmetry in neutral pressure (centered in module 4) and light emission (visible, H-alpha, CII, and CIII). Such pressure asymmetry is associated with plasma-wall (limiter) interaction asymmetries between the modules. Application of trim coil fields with an n = 1 waveform corrects the imbalance. Confirmation of the error fields allows the assessment of magnetic fields which resonate with the n/m = 5/5 island chain.

  13. Preservation of potassium balance is strongly associated with insect cold tolerance in the field: a seasonal study of Drosophila subobscura.

    PubMed

    MacMillan, Heath A; Schou, Mads F; Kristensen, Torsten N; Overgaard, Johannes

    2016-05-01

    There is interest in pinpointing genes and physiological mechanisms explaining intra- and interspecific variations in cold tolerance, because thermal tolerance phenotypes strongly impact the distribution and abundance of wild animals. Laboratory studies have highlighted that the capacity to preserve water and ion homeostasis is linked to low temperature survival in insects. It remains unknown, however, whether adaptive seasonal acclimatization in free-ranging insects is governed by the same physiological mechanisms. Here, we test whether cold tolerance in field-caught Drosophila subobscura is high in early spring and lower during summer and whether this transition is associated with seasonal changes in the capacity of flies to preserve water and ion balance during cold stress. Indeed, flies caught during summer were less cold tolerant, and exposure of these flies to sub-zero temperatures caused a loss of haemolymph water and increased the concentration of K(+) in the haemolymph (as in laboratory-reared insects). This pattern of ion and water balance disruption was not observed in more cold-tolerant flies caught in early spring. Thus, we here provide a field verification of hypotheses based on laboratory studies and conclude that the ability to maintain ion homeostasis is important for the ability of free-ranging insects to cope with chilling. © 2016 The Author(s).

  14. Impact of shorter wavelengths on optical quality for LAWS

    NASA Technical Reports Server (NTRS)

    Wissinger, Alan B.; Noll, Robert J.; Tsacoyeanes, James G.; Tausanovitch, Jeanette R.

    1993-01-01

    This study explores parametrically, as a function of wavelength, the degrading effects of several common optical aberrations (defocus, astigmatism, wavefront tilts, etc.), using the heterodyne mixing efficiency factor as the merit function. A 60 cm diameter aperture beam expander with an expansion ratio of 15:1 and a primary mirror focal ratio of f/2 was designed for the study. An HDOS copyrighted analysis program determined the value of the merit function for various optical misalignments. With sensitivities provided by the analysis, preliminary error budget and tolerance allocations were made for potential optical wavefront errors and boresight errors during laser shot transit time. These were compared with the baseline 1.5 m CO2 LAWS and the optical fabrication state of the art (SOA) as characterized by the Hubble Space Telescope. Reducing wavelength and changing the optical design resulted in optical quality tolerances within the SOA both at 2 and 1 micrometer. However, sensing and control tolerances would need to be tightened by a factor of 1.8 for a 2 micrometer system and by 3.6 for a 1 micrometer system relative to the baseline CO2 LAWS, requiring advanced sensing and control devices. Available SOA components could be used for operation at 2 micrometers, but operation at 1 micrometer does not appear feasible.

  15. Space charge enhanced plasma gradient effects on satellite electric field measurements

    NASA Technical Reports Server (NTRS)

    Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.

    1991-01-01

    It has been recognized that plasma gradients can cause error in magnetospheric electric field measurements made by double probes. Space charge enhanced Plasma Gradient Induced Error (PGIE) is discussed in general terms, presenting the results of a laboratory experiment designed to demonstrate this error, and deriving a simple expression that quantifies this error. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to insure the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors in space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady state space charge enhanced PGIE taken by two identical current biased probes.

  16. Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.

    PubMed

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment.
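
    As a simplified, hypothetical illustration of the fusion concept (not the paper's EKF), the sketch below runs a two-state Kalman filter for heading and gyroscope bias and accepts magnetometer headings as updates only when the recent field looks quasi-static; all values, noise levels and thresholds are invented.

```python
# Simplified 1-D heading/gyro-bias Kalman filter with quasi-static-field (QSF) gating
# of magnetometer updates. Illustrative only; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 4000
true_bias = 0.02                                      # rad/s gyro bias
true_rate = 0.1 * np.sin(0.5 * np.arange(n) * dt)     # true turn rate
true_heading = np.cumsum(true_rate) * dt

gyro = true_rate + true_bias + 0.005 * rng.standard_normal(n)
mag_heading = true_heading + 0.02 * rng.standard_normal(n)
mag_heading[1500:2500] += 0.5                         # magnetically perturbed interval

x = np.zeros(2)                                       # state: [heading, bias]
P = np.eye(2)
F = np.array([[1.0, -dt], [0.0, 1.0]])
Q = np.diag([1e-6, 1e-8])
H = np.array([[1.0, 0.0]])
R = 0.02 ** 2

for k in range(n):
    # propagate with the gyro, compensating the current bias estimate
    x = np.array([x[0] + (gyro[k] - x[1]) * dt, x[1]])
    P = F @ P @ F.T + Q
    # QSF gate: only trust the magnetometer when the recent field is quiet and consistent
    window = mag_heading[max(0, k - 50):k + 1]
    if np.var(window) < 1e-3 and abs(mag_heading[k] - x[0]) < 0.2:
        y = mag_heading[k] - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T / S
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P

print(f"estimated gyro bias = {x[1]:.4f} rad/s (true = {true_bias})")
```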

  17. Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation

    PubMed Central

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment. PMID:22247672

  18. Selection and characterization of glyphosate tolerance in birdsfoot trefoil (Lotus corniculatus)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boerboom, C.M.

    1989-01-01

    If birdsfoot trefoil (Lotus corniculatus L.) was tolerant to glyphosate (N-(phosphonomethyl)glycine), Canada thistle (Cirsium arvense (L.) Scop.) and other dicot weeds could be selectively controlled in certified seed production fields. Glyphosate tolerance in birdsfoot trefoil was identified in plants from the cultivar Leo, plants regenerated from tolerant callus, and selfed progeny of plants regenerated from callus. Plants from the three sources were evaluated in field studies for tolerance to glyphosate at rates up to 1.6 kg ae/ha. Plants of Leo selected for tolerance exhibited a twofold range in the rate required to reduce shoot weight 50% (I50s from 0.6 to 1.2 kg/ha glyphosate). Plants regenerated from tolerant callus had tolerance up to 66% greater than plants regenerated from unselected callus. Transgressive segregation for glyphosate tolerance was observed in the selfed progeny of two regenerated plants that both had I50s of 0.7 kg/ha glyphosate. The selfed progeny ranged from highly tolerant (I50 of 1.5 kg/ha) to susceptible (I50 of 0.5 kg/ha). Spray retention, 14C-glyphosate absorption and translocation did not account for the differential tolerance of nine plants that were evaluated from the three sources. The specific activity of 5-enolpyruvylshikimate 3-phosphate (EPSP) synthase ranged from 1.3 to 3.5 nmol/min·mg among the nine plants and was positively correlated with glyphosate tolerance. Leo birdsfoot trefoil was found to have significant variation in glyphosate tolerance which made it possible to initiate a recurrent selection program to select for glyphosate tolerance in birdsfoot trefoil. Two cycles of selection for glyphosate tolerance were practiced in three birdsfoot trefoil populations, Leo, Norcen, and MU-81.

  19. Measurement of Fracture Aperture Fields Using Transmitted Light: An Evaluation of Measurement Errors and their Influence on Simulations of Flow and Transport through a Single Fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.

    Understanding of single and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.

  20. The Michelson Stellar Interferometer Error Budget for Triple Triple-Satellite Configuration

    NASA Technical Reports Server (NTRS)

    Marathay, Arvind S.; Shiefman, Joe

    1996-01-01

    This report presents the results of a study of the instrumentation tolerances for a conventional style Michelson stellar interferometer (MSI). The method used to determine the tolerances was to determine the change, due to the instrument errors, in the measured fringe visibility and phase relative to the ideal values. The ideal values are those values of fringe visibility and phase that would be measured by a perfect MSI and are attributable solely to the object being detected. Once the functional relationship for changes in visibility and phase as a function of various instrument errors is understood it is then possible to set limits on the instrument errors in order to ensure that the measured visibility and phase are different from the ideal values by no more than some specified amount. This was done as part of this study. The limits we obtained are based on a visibility error of no more than 1% and a phase error of no more than 0.063 radians (this comes from 1% of 2π radians). The choice of these 1% limits is supported in the literature. The approach employed in the study involved the use of ASAP (Advanced System Analysis Program) software provided by Breault Research Organization, Inc., in conjunction with parallel analytical calculations. The interferometer accepts object radiation into two separate arms each consisting of an outer mirror, an inner mirror, a delay line (made up of two moveable mirrors and two static mirrors), and a 10:1 afocal reduction telescope. The radiation coming out of both arms is incident on a slit plane which is opaque with two openings (slits). One of the two slits is centered directly under one of the two arms of the interferometer and the other slit is centered directly under the other arm. The slit plane is followed immediately by an ideal combining lens which images the radiation in the fringe plane (also referred to subsequently as the detector plane).

  1. The study of CD side to side error in line/space pattern caused by post-exposure bake effect

    NASA Astrophysics Data System (ADS)

    Huang, Jin; Guo, Eric; Ge, Haiming; Lu, Max; Wu, Yijun; Tian, Mingjing; Yan, Shichuan; Wang, Ran

    2016-10-01

    In semiconductor manufacturing, as the design rule has decreased, the ITRS roadmap requires crucially tighter critical dimension (CD) control. CD uniformity is one of the necessary parameters to assure good performance and reliable functionality of any integrated circuit (IC) [1] [2], and towards the advanced technology nodes, it is a challenge to control CD uniformity well. The study of CD uniformity improvement by tuning the post-exposure bake (PEB) and develop processes has made significant progress [3], but CD side-to-side errors in some line/space patterns are still found in practical applications, and the error approaches or exceeds the uniformity tolerance. Detailed analysis shows that, even when several developer types are used, the CD side-to-side error has no significant relationship to development. In addition, it is impossible to correct the CD side-to-side error by electron beam correction, as such an error does not appear in all line/space pattern masks. In this paper the root cause of the CD side-to-side error is analyzed and the PEB module process is optimized as the main factor for improving CD side-to-side error.

  2. Tolerance allocation for an electronic system using neural network/Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque

    2001-12-01

    The intense global competition to produce quality products at low cost has led many industrial nations to consider tolerances as a key factor in controlling cost as well as remaining competitive. In practice, tolerance allocation is still mostly applied to mechanical systems. It is known that the Monte-Carlo method can be used to study tolerances in the electronic domain, but that method is time consuming. This paper reviews several methods (worst-case, statistical, and least-cost allocation by optimization) that can be used to treat the tolerancing problem for an electronic system and explains their advantages and limitations. It then proposes an efficient method based on neural networks, with Monte-Carlo results used as basis data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example and can be easily extended to a complex system of n components.
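
    A hedged sketch of the general idea, using a voltage divider as a stand-in for the amplifier circuit and scikit-learn's MLPRegressor in place of a hand-built back-propagation network: Monte-Carlo runs provide the basis data, and the trained network then predicts yield for candidate tolerance allocations cheaply. All circuit values and specification limits are invented.

```python
# Monte-Carlo basis data + neural-network surrogate for tolerance/yield evaluation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
VIN, R1_NOM, R2_NOM = 10.0, 1000.0, 2000.0
SPEC = (6.5, 6.8)                                    # acceptable output voltage window (V)

def mc_yield(tol_r1, tol_r2, n=2000):
    # fraction of Monte-Carlo samples whose output voltage stays inside SPEC
    r1 = R1_NOM * (1 + tol_r1 * rng.uniform(-1, 1, n))
    r2 = R2_NOM * (1 + tol_r2 * rng.uniform(-1, 1, n))
    vout = VIN * r2 / (r1 + r2)
    return np.mean((vout > SPEC[0]) & (vout < SPEC[1]))

# Monte-Carlo "basis data": random tolerance pairs and their simulated yields
tols = rng.uniform(0.001, 0.05, size=(300, 2))
yields = np.array([mc_yield(t1, t2) for t1, t2 in tols])

surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
surrogate.fit(tols, yields)

# The surrogate can now screen candidate allocations without re-running Monte-Carlo
candidate = np.array([[0.01, 0.02]])
print(f"predicted yield for 1%/2% tolerances: {surrogate.predict(candidate)[0]:.2f}")
```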

  3. Sliding Mode Fault Tolerant Control with Adaptive Diagnosis for Aircraft Engines

    NASA Astrophysics Data System (ADS)

    Xiao, Lingfei; Du, Yanbin; Hu, Jixiang; Jiang, Bin

    2018-03-01

    In this paper, a novel sliding mode fault tolerant control method is presented for aircraft engine systems with uncertainties and disturbances on the basis of an adaptive diagnostic observer. Taking both sensor faults and actuator faults into account, the general model of aircraft engine control systems subjected to uncertainties and disturbances is considered. Then, the corresponding augmented dynamic model is established in order to facilitate the fault diagnosis and fault tolerant controller design. Next, a suitable detection observer is designed to detect the faults effectively. By creating an adaptive diagnostic observer and based on a sliding mode strategy, the sliding mode fault tolerant controller is constructed. Robust stabilization is discussed and the closed-loop system can be stabilized robustly. It is also proven that the adaptive diagnostic observer output errors and the fault estimates converge exponentially to a set, with a convergence rate greater than a value that can be adjusted by properly choosing design parameters. The simulation on a twin-shaft aircraft engine verifies the applicability of the proposed fault tolerant control method.

  4. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    NASA Astrophysics Data System (ADS)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10^-8 m/s^2 ≈ 10^-9 g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atomic interferometer gravimeters.

  5. Aspheres for high speed cine lenses

    NASA Astrophysics Data System (ADS)

    Beder, Christian

    2005-09-01

    To fulfil the requirements of today's high performance cine lenses, aspheres are an indispensable part of lens design. Besides making them manageable in shape and size, tolerancing aspheres is an essential part of the development process. The traditional method of tolerancing individual aspherical coefficients yields only impractical theoretical figures. In order to obtain viable parameters that can easily be dealt with on a production line, more advanced techniques are required. In this presentation, a method of simulating characteristic manufacturing errors and deducing surface deviation and slope error tolerances is shown.

  6. Neural-Network-Based Adaptive Decentralized Fault-Tolerant Control for a Class of Interconnected Nonlinear Systems.

    PubMed

    Li, Xiao-Jian; Yang, Guang-Hong

    2018-01-01

    This paper is concerned with the adaptive decentralized fault-tolerant tracking control problem for a class of uncertain interconnected nonlinear systems with unknown strong interconnections. An algebraic graph theory result is introduced to address the considered interconnections. In addition, to achieve the desirable tracking performance, a neural-network-based robust adaptive decentralized fault-tolerant control (FTC) scheme is given to compensate the actuator faults and system uncertainties. Furthermore, via the Lyapunov analysis method, it is proven that all the signals of the resulting closed-loop system are semiglobally bounded, and the tracking errors of each subsystem exponentially converge to a compact set, whose radius is adjustable by choosing different controller design parameters. Finally, the effectiveness and advantages of the proposed FTC approach are illustrated with two simulated examples.

  7. One-shot and aberration-tolerable homodyne detection for holographic storage readout through double-frequency grating-based lateral shearing interferometry.

    PubMed

    Yu, Yeh-Wei; Xiao, Shuai; Cheng, Chih-Yuan; Sun, Ching-Cherng

    2016-05-16

    A simple method to decode the stored phase signal of volume holographic data storage with adequate wave aberration tolerance is highly demanded. We proposed and demonstrated a one-shot scheme to decode a binary-phase encoding signal through double-frequency-grating based shearing interferometry (DFGSI). The lateral shearing amount is dependent on the focal length of the collimated lens and the frequency difference between the gratings. Diffracted waves with phase encoding were successfully decoded through experimentation. An optical model for the DFGSI was built to analyze phase-error induction and phase-difference control by shifting the double-frequency grating longitudinally and laterally, respectively. The optical model was demonstrated experimentally. Finally, a high aberration tolerance of the DFGSI was demonstrated using the optical model.

  8. Implementation and characterization of a high-abstraction-level fault injection method

    NASA Astrophysics Data System (ADS)

    Robache, Remi

    Nowadays, the effects of cosmic rays on electronics are well known. Different studies have demonstrated that neutrons are the main cause of non-destructive errors in embedded circuits on airplanes. Moreover, the reduction of transistor sizes is making all circuits more sensitive to those effects. Radiation tolerant circuits are sometimes used in order to improve the robustness of circuits. However, those circuits are expensive and their technologies often lag a few generations behind non-tolerant circuits. Designers prefer to use conventional circuits with mitigation techniques to improve the tolerance to soft errors. It is necessary to analyse and verify the dependability of a circuit throughout its design process. Conventional design methodologies need to be adapted in order to evaluate the tolerance to non-destructive errors caused by radiation. Nowadays, designers need new tools and new methodologies to validate their mitigation strategies if they are to meet system requirements. In this thesis, we propose a new methodology that allows the faulty behavior of a circuit to be captured at a low level of abstraction and applied at a higher level. To do so, we introduce the new concept of faulty behavior signatures, which allows creating, at a high level of abstraction (system level), models that reflect with high fidelity the faulty behavior of a circuit learned at a low level of abstraction (gate level). We successfully replicated the faulty behavior of an 8-bit adder and multiplier in Simulink, with correlation coefficients of 98.53% and 99.86% respectively. We also propose a methodology for generating a library of faulty components in Simulink, allowing designers to verify the dependability of their models early in the design flow. Results obtained for three different circuits are presented and analyzed throughout this thesis. Within the framework of this project, a paper was published at the NEWCAS 2013 conference (Robache et al., 2013). That work presents the new concept of faulty behavior signatures, the methodology we developed for generating signatures, and our experiments with an 8-bit multiplier.

  9. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper will discuss an additional use of the empirical model which is to estimate the error field in EXTRAP T2R. Two potential methods are discussed that can estimate the error field. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.

  10. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  11. TH-AB-202-02: Real-Time Verification and Error Detection for MLC Tracking Deliveries Using An Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J Zwan, B; Central Coast Cancer Centre, Gosford, NSW; Colvill, E

    2016-06-15

    Purpose: The added complexity of real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real-time: (1) field size, (2) field location, and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) where real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay were quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries the MLC-defined field size was measured with an accuracy of 1.25 cm² (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate respectively. Field location errors (i.e. tracking in the wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors were detected as small as 3 mm. The method was not found to be sensitive to random MLC errors and individual MLC calibration errors up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection and the system has been shown to be sensitive to a range of delivery errors.
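
    The three comparison metrics can be sketched on a binary aperture mask extracted from an EPID frame. The implementation below is a hypothetical illustration (field size as area, location as centroid, shape as a Dice overlap with the planned aperture), not the in-house software described in the abstract; the pixel size is assumed.

```python
# Field size, location and shape metrics from a binary aperture mask (illustrative).
import numpy as np

PIXEL_CM = 0.04                                     # assumed EPID pixel size at isocentre

def aperture_metrics(measured, planned):
    area = measured.sum() * PIXEL_CM ** 2           # field size in cm^2
    ys, xs = np.nonzero(measured)
    centroid = (xs.mean() * PIXEL_CM, ys.mean() * PIXEL_CM)   # field location in cm
    dice = 2 * np.logical_and(measured, planned).sum() / (measured.sum() + planned.sum())
    return area, centroid, dice                     # dice = 1.0 means identical shapes

planned = np.zeros((256, 256), dtype=bool)
planned[100:160, 90:170] = True                     # planned rectangular aperture
measured = np.roll(planned, 8, axis=1)              # simulated 8-pixel tracking lag

area, centroid, dice = aperture_metrics(measured, planned)
print(f"size = {area:.1f} cm^2, centroid = ({centroid[0]:.2f}, {centroid[1]:.2f}) cm, dice = {dice:.2f}")
```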

  12. Top-K Interesting Subgraph Discovery in Information Networks

    DTIC Science & Technology

    2014-03-03

    The record excerpt references the presentation “Integrative Biomarker Discovery for Breast Cancer Metastasis from Gene Expression and Protein Interaction Data Using Error-tolerant Pattern Mining”, with authors affiliated with Microsoft India, the State University of New York at Buffalo, and the University of California.

  13. Effect of Tillage System, Row Spacing and Herbicide Technologies on Plant Growth and Lint Yield in Cotton

    USDA-ARS?s Scientific Manuscript database

    A field study was conducted from 2004 through 2006 growing seasons at the E.V. Smith Research Center, Field Crops Unit near Shorter, AL, to compare a conventional variety, a glyphosate tolerant variety, and a glufosinate tolerant variety under both the conventional tillage and the conservation tilla...

  14. 75 FR 17564 - Chlorantraniliprole; Extension of Time-Limited Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-07

    ... at 0.20 ppm; grass, forage, fodder and hay, crop group 17 at 0.20 ppm; vegetable, leaves of root and... hay (includes cowpea, forage and hay; field pea, vines and hay); grass, forage, fodder and hay, crop...-limited tolerances for cowpea, forage and hay; field pea, vines and hay; grass, forage, fodder and hay...

  15. Albertian errors in head-mounted displays: I. Choice of eye-point location for a near- or far-field task visualization.

    PubMed

    Rolland, Jannick; Ha, Yonggang; Fidopiastis, Cali

    2004-06-01

    A theoretical investigation of rendered depth and angular errors, or Albertian errors, linked to natural eye movements in binocular head-mounted displays (HMDs) is presented for three possible eye-point locations: the center of the entrance pupil, the nodal point, and the center of rotation of the eye. A numerical quantification was conducted for both the pupil and the center of rotation of the eye under the assumption that the user will operate solely in either the near field under an associated instrumentation setting or the far field under a different setting. Under these conditions, the eyes are taken to gaze in the plane of the stereoscopic images. Across conditions, results show that the center of the entrance pupil minimizes rendered angular errors, while the center of rotation minimizes rendered position errors. Significantly, this investigation quantifies that under proper setting of the HMD and correct choice of the eye points, rendered depth and angular errors can be brought to be either negligible or within specification of even the most stringent applications in performance of tasks in either the near field or the far field.

  16. Executive Council lists and general practitioner files

    PubMed Central

    Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.

    1974-01-01

    An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79·2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0·8% (assignation of sex) to 68·6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588

  17. Orbital-angular-momentum-multiplexed free-space optical communication link using transmitter lenses.

    PubMed

    Li, Long; Xie, Guodong; Ren, Yongxiong; Ahmed, Nisar; Huang, Hao; Zhao, Zhe; Liao, Peicheng; Lavery, Martin P J; Yan, Yan; Bao, ChangJing; Wang, Zhe; Willner, Asher J; Ashrafi, Nima; Ashrafi, Solyman; Tur, Moshe; Willner, Alan E

    2016-03-10

    In this paper, we explore the potential benefits and limitations of using transmitter lenses in an orbital-angular-momentum (OAM)-multiplexed free-space optical (FSO) communication link. Both simulation and experimental results indicate that within certain transmission distances, using lenses at the transmitter to focus OAM beams could reduce power loss in OAM-based FSO links and that this improvement might be more significant for higher-order OAM beams. Moreover, the use of transmitter lenses could enhance system tolerance to angular error between transmitter and receiver, but they might degrade tolerance to lateral displacement.

  18. RAMP: A fault tolerant distributed microcomputer structure for aircraft navigation and control

    NASA Technical Reports Server (NTRS)

    Dunn, W. R.

    1980-01-01

    RAMP consists of distributed sets of parallel computers partitioned on the basis of software and packaging constraints. To minimize hardware and software complexity, the processors operate asynchronously. It was shown that through the design of asymptotically stable control laws, data errors due to the asynchronism were minimized. It was further shown that by designing control laws with this property and making minor hardware modifications to the RAMP modules, the system became inherently tolerant to intermittent faults. A laboratory version of RAMP was constructed and is described in the paper along with the experimental results.

  19. High speed, long distance, data transmission multiplexing circuit

    DOEpatents

    Mariotti, Razvan

    1991-01-01

    A high speed serial data transmission multiplexing circuit is operable to accurately transmit data over long distances (up to 3 km) and to multiplex, select, and continuously display real time analog signals in a bandwidth from DC to 100 kHz. The circuit is made fault tolerant by use of a programmable flywheel algorithm, which enables the circuit to tolerate one transmission error before losing synchronization of the transmitted frames of data. A method of encoding and framing captured and transmitted data is used which has a low overhead and prevents some particular transmitted data patterns from locking an included detector/decoder circuit.

  20. Fault tolerant computing: A preamble for assuring viability of large computer systems

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1977-01-01

    The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.

  1. Spine stereotactic body radiotherapy utilizing cone-beam CT image-guidance with a robotic couch: intrafraction motion analysis accounting for all six degrees of freedom.

    PubMed

    Hyde, Derek; Lochray, Fiona; Korol, Renee; Davidson, Melanie; Wong, C Shun; Ma, Lijun; Sahgal, Arjun

    2012-03-01

    To evaluate the residual setup error and intrafraction motion following kilovoltage cone-beam CT (CBCT) image guidance, for immobilized spine stereotactic body radiotherapy (SBRT) patients, with positioning corrected for in all six degrees of freedom. Analysis is based on 42 consecutive patients (48 thoracic and/or lumbar metastases) treated with a total of 106 fractions and 307 image registrations. Following initial setup, a CBCT was acquired for patient alignment and a pretreatment CBCT taken to verify shifts and determine the residual setup error, followed by a midtreatment and posttreatment CBCT image. For 13 single-fraction SBRT patients, two midtreatment CBCT images were obtained. Initially, a 1.5-mm and 1° tolerance was used to reposition the patient following couch shifts which was subsequently reduced to 1 mm and 1° degree after the first 10 patients. Small positioning errors after the initial CBCT setup were observed, with 90% occurring within 1 mm and 97% within 1°. In analyzing the impact of the time interval for verification imaging (10 ± 3 min) and subsequent image acquisitions (17 ± 4 min), the residual setup error was not significantly different (p > 0.05). A significant difference (p = 0.04) in the average three-dimensional intrafraction positional deviations favoring a more strict tolerance in translation (1 mm vs. 1.5 mm) was observed. The absolute intrafraction motion averaged over all patients and all directions along x, y, and z axis (± SD) were 0.7 ± 0.5 mm and 0.5 ± 0.4 mm for the 1.5 mm and 1 mm tolerance, respectively. Based on a 1-mm and 1° correction threshold, the target was localized to within 1.2 mm and 0.9° with 95% confidence. Near-rigid body immobilization, intrafraction CBCT imaging approximately every 15-20 min, and strict repositioning thresholds in six degrees of freedom yields minimal intrafraction motion allowing for safe spine SBRT delivery. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Neural Network and Letter Recognition.

    NASA Astrophysics Data System (ADS)

    Lee, Hue Yeon

    Neural net architectures and learning algorithms that recognize 36 hand-written alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system comprises two major components: a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlations between the detected local features are considered. By correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations give the system some deformation tolerance and some rotational tolerance. The C-layer then compresses the data through the Gabor transform. A pattern-dependent choice of the centers and wavelengths of the Gabor filters gives the system its shift and scale tolerance. Three different learning schemes were investigated in the recognition unit: error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing step and the perceptron as the recognition unit of the ambiguity resolver. Handwriting sets from one hundred different people were collected; some are used as training sets and the remainder as test sets. The correct recognition rate of the system increases with the number of training sets and eventually saturates at a certain value. Similar recognition rates are obtained for the three learning algorithms. The minimum error rate, 4.9%, is achieved for alphanumeric sets when 50 sets are trained; with the ambiguity resolver, it is reduced to 2.5%. When only numeral sets are trained and tested, a 2.0% error rate is achieved, and when only alphabet sets are considered, the error rate is reduced to 1.1%.
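
    As one illustration of the simple-perceptron recognition stage described above, the sketch below trains a multi-class perceptron on flattened 32 x 32 binary patterns; the data are synthetic placeholders, not the collected handwriting sets (Python, NumPy).

        import numpy as np

        rng = np.random.default_rng(0)
        n_classes, n_pixels = 36, 32 * 32
        X = rng.integers(0, 2, size=(360, n_pixels)).astype(float)   # stand-in binary patterns
        y = rng.integers(0, n_classes, size=360)                      # stand-in class labels

        W = np.zeros((n_classes, n_pixels))
        for epoch in range(10):
            for pattern, label in zip(X, y):
                predicted = int(np.argmax(W @ pattern))
                if predicted != label:            # perceptron rule: reinforce the correct class,
                    W[label] += pattern           # penalize the wrongly chosen one
                    W[predicted] -= pattern
        print("training error:", np.mean(np.argmax(X @ W.T, axis=1) != y))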

  3. Proxy-equation paradigm: A strategy for massively parallel asynchronous computations

    NASA Astrophysics Data System (ADS)

    Mittal, Ankita; Girimaji, Sharath

    2017-09-01

    Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order, and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.
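
    The proxy-equation modification itself is not reproduced here; the sketch below only illustrates the asynchrony error that the proxy equation is designed to offset, by advancing the two halves of a 1D advection-diffusion domain with FTCS while one run uses a one-step-stale interface value (Python, NumPy; all parameters are arbitrary).

        import numpy as np

        c, nu, dx, dt, n, steps = 1.0, 0.01, 0.01, 1e-4, 200, 50
        x = np.arange(n) * dx
        u_sync = np.exp(-((x - 1.0) ** 2) / 0.01)   # initial pulse centred on the interface
        u_async = u_sync.copy()
        prev = u_async.copy()                        # neighbour data one step out of date
        mid = n // 2

        def step(u, ghosts):
            """One FTCS step of u_t + c u_x = nu u_xx with given ghost-cell values."""
            up = np.concatenate(([ghosts[0]], u, [ghosts[1]]))
            return (u - c * dt / (2 * dx) * (up[2:] - up[:-2])
                      + nu * dt / dx ** 2 * (up[2:] - 2 * up[1:-1] + up[:-2]))

        for _ in range(steps):
            # synchronous run: each half sees the neighbour's current interface value
            u_sync = np.concatenate((step(u_sync[:mid], (0.0, u_sync[mid])),
                                     step(u_sync[mid:], (u_sync[mid - 1], 0.0))))
            # asynchronous run: each half sees a one-step-old interface value
            new_async = np.concatenate((step(u_async[:mid], (0.0, prev[mid])),
                                        step(u_async[mid:], (prev[mid - 1], 0.0))))
            prev, u_async = u_async, new_async

        print("max asynchrony error after", steps, "steps:", np.abs(u_sync - u_async).max())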

  4. Runtime Verification in Context: Can Optimizing Error Detection Improve Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Dwyer, Matthew B.; Purandare, Rahul; Person, Suzette

    2010-01-01

    Runtime verification has primarily been developed and evaluated as a means of enriching the software testing process. While many researchers have pointed to its potential applicability in online approaches to software fault tolerance, there has been a dearth of work exploring the details of how that might be accomplished. In this paper, we describe how a component-oriented approach to software health management exposes the connections between program execution, error detection, fault diagnosis, and recovery. We identify both research challenges and opportunities in exploiting those connections. Specifically, we describe how recent approaches to reducing the overhead of runtime monitoring aimed at error detection might be adapted to reduce the overhead and improve the effectiveness of fault diagnosis.

  5. Impact of gradient timing error on the tissue sodium concentration bioscale measured using flexible twisted projection imaging

    NASA Astrophysics Data System (ADS)

    Lu, Aiming; Atkinson, Ian C.; Vaughn, J. Thomas; Thulborn, Keith R.

    2011-12-01

    The rapid biexponential transverse relaxation of the sodium MR signal from brain tissue requires efficient k-space sampling for quantitative imaging in a time that is acceptable for human subjects. The flexible twisted projection imaging (flexTPI) sequence has been shown to be suitable for quantitative sodium imaging with an ultra-short echo time to minimize signal loss. The fidelity of the k-space center location is affected by the readout gradient timing errors on the three physical axes, which is known to cause image distortion for projection-based acquisitions. This study investigated the impact of these timing errors on the voxel-wise accuracy of the tissue sodium concentration (TSC) bioscale measured with the flexTPI sequence. Our simulations show greater than 20% spatially varying quantification errors when the gradient timing errors are larger than 10 μs on all three axes. The quantification is more tolerant of gradient timing errors on the Z-axis. An existing method was used to measure the gradient timing errors with <1 μs error. The gradient timing error measurement is shown to be RF coil dependent, and timing error differences of up to ~16 μs have been observed between different RF coils used on the same scanner. The measured timing errors can be corrected prospectively or retrospectively to obtain accurate TSC values.

  6. Sources of uncertainty in the quantification of genetically modified oilseed rape contamination in seed lots.

    PubMed

    Begg, Graham S; Cullen, Danny W; Iannetta, Pietro P M; Squire, Geoff R

    2007-02-01

    Testing of seed and grain lots is essential in the enforcement of GM labelling legislation and needs reliable procedures for which the associated errors have been identified and minimised. In this paper we consider the testing of oilseed rape seed lots obtained from the harvest of a non-GM crop known to be contaminated by volunteer plants from a GM herbicide-tolerant variety. The objective was to identify and quantify the error associated with the testing of these lots, from the initial sampling to completion of the real-time PCR assay with which the level of GM contamination was quantified. The results showed that, under the controlled conditions of a single laboratory, the error associated with the real-time PCR assay was negligible in comparison with the sampling error, which was exacerbated by heterogeneity in the distribution of GM seeds, most notably at a small scale, i.e. 25 cm³. Sampling error was reduced by one to two thirds by the application of appropriate homogenisation procedures.
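
    The effect of small-scale heterogeneity on sampling error can be illustrated with a short Monte Carlo sketch, comparing clustered and homogenised lots; the lot size, GM fraction, and sample size below are hypothetical, not values from the study (Python, NumPy).

        import numpy as np

        rng = np.random.default_rng(1)
        lot_size, true_gm_fraction, sample_size, trials = 100_000, 0.005, 1_000, 2_000

        def build_lot(clustered):
            lot = np.zeros(lot_size, dtype=int)
            n_gm = int(lot_size * true_gm_fraction)
            if clustered:                     # GM seeds concentrated in one small region
                start = rng.integers(0, lot_size - 10 * n_gm)
                positions = start + rng.choice(10 * n_gm, size=n_gm, replace=False)
            else:                             # GM seeds spread uniformly (homogenised lot)
                positions = rng.choice(lot_size, size=n_gm, replace=False)
            lot[positions] = 1
            return lot

        for clustered in (True, False):
            lot = build_lot(clustered)
            # Draw contiguous "scoops" of seeds, mimicking small-volume physical samples.
            starts = rng.integers(0, lot_size - sample_size, size=trials)
            estimates = np.array([lot[s:s + sample_size].mean() for s in starts])
            print("clustered" if clustered else "homogenised",
                  "-> std of estimated GM fraction:", round(estimates.std(), 5))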

  7. Effect of phase errors in stepped-frequency radar systems

    NASA Astrophysics Data System (ADS)

    Vanbrundt, H. E.

    1988-04-01

    Stepped-frequency waveforms are being considered for inverse synthetic aperture radar (ISAR) imaging from ship and airborne platforms and for detailed radar cross section (RCS) measurements of ships and aircraft. These waveforms make it possible to achieve resolutions of 1.0 foot by using existing radar designs and processing technology. One problem not yet fully resolved in using stepped-frequency waveforms for ISAR imaging is the deterioration in signal level caused by random frequency error. Random frequency error of the stepped-frequency source results in reduced peak responses and increased null responses. The resulting reduction in signal-to-noise ratio is range dependent. Two of the major concerns addressed in this report are radar range limitations for ISAR and the error in calibration for RCS measurements caused by differences in range between a passive reflector used as an RCS reference and the target to be measured. In addressing these concerns, NOSC developed an analysis to assess the tolerable frequency error in terms of the resulting loss in signal power and signal-to-phase-noise ratio.
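
    A small numerical sketch of the effect being analyzed: a point-target return is synthesized over a stepped-frequency burst, a random phase error is applied at each frequency step, and the loss in the range-profile peak is reported. All parameters are hypothetical and not taken from the NOSC analysis (Python, NumPy).

        import numpy as np

        rng = np.random.default_rng(2)
        n_steps, f0, df, c = 256, 9e9, 2e6, 3e8          # steps per burst, start freq, step, m/s
        target_range = 50.0                               # metres (within the unambiguous window)
        freqs = f0 + df * np.arange(n_steps)
        clean = np.exp(-1j * 4 * np.pi * freqs * target_range / c)   # ideal point-target returns

        phase_err = rng.normal(0.0, np.deg2rad(20.0), n_steps)       # random phase error per step
        noisy = clean * np.exp(1j * phase_err)

        def range_profile(samples):
            """Form the synthetic range profile by inverse FFT across the frequency steps."""
            return np.abs(np.fft.ifft(samples))

        loss_db = 20 * np.log10(range_profile(noisy).max() / range_profile(clean).max())
        print("peak response loss: %.2f dB" % loss_db)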

  8. THE EVALUATION OF METHODS FOR CREATING DEFENSIBLE, REPEATABLE, OBJECTIVE AND ACCURATE TOLERANCE VALUES

    EPA Science Inventory

    In the field of bioassessment, tolerance has traditionally referred to the degree to which organisms can withstand environmental degradation. This concept has been around for many years and its use is widespread. In numerous cases, tolerance values (TVs) have been assigned to i...

  9. Greenhouse and Field Nursery Evaluation for Potato Common Scab Tolerance in a Tetraploid Population

    USDA-ARS?s Scientific Manuscript database

    Potato common scab (Streptomyces scabies (Thaxt.) Waksman & Henrici) is a major disease of potato (Solanum tuberosum L.), due to the unmarketability of affected tubers. For identification of the most common scab-tolerant material, and for developing molecular markers for common scab tolerance, more ...

  10. Brushed permanent magnet DC MLC motor operation in an external magnetic field.

    PubMed

    Yun, J; St Aubin, J; Rathee, S; Fallone, B G

    2010-05-01

    Linac-MR systems for real-time image-guided radiotherapy will utilize multileaf collimators (MLCs) to perform conformal radiotherapy and tumor tracking. The MLCs would be exposed to the external fringe magnetic fields of the linac-MR hybrid systems. Therefore, an experimental investigation of the effect of an external magnetic field on the brushed permanent magnet DC motors used in some MLC systems was performed. The changes in motor speed and current were measured for varying external magnetic field strengths up to 2000 G generated by an EEV electromagnet. These changes in motor characteristics were measured for three orientations of the motor in the external magnetic field, mimicking changes in motor orientation due to installation and/or collimator rotations. In addition, the functionality of the associated magnetic motor encoder was tested. The tested motors are used with the Varian 120 leaf Millennium MLC (Maxon Motor half-leaf and full-leaf motors) and the Varian 52 leaf MKII MLC (MicroMo Electronics leaf motor), including a carriage motor (MicroMo Electronics). In most cases, the magnetic encoder of the motors failed prior to any damage to the gearbox or the permanent magnet motor itself. This sets an upper limit on the external magnetic field strength at which the motors remain functional. The measured limits of the external magnetic fields were found to vary by motor type. The leaf motor used with the Varian 52 leaf MKII MLC system tolerated up to 450 ± 10 G. The carriage motor tolerated up to a 2000 ± 10 G field. The motors used with the Varian 120 leaf Millennium MLC system were found to tolerate a maximum of 600 ± 10 G. The current Varian MLC system motors can be used for real-time image-guided radiotherapy coupled to a linac-MR system, provided the fringe magnetic fields at their locations are below the determined tolerance levels. With the fringe magnetic fields of linac-MR systems expected to be larger than the tolerance levels determined, some form of magnetic shielding would be required.

  11. An advanced SEU tolerant latch based on error detection

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao

    2018-05-01

    This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or on the error detection circuit may cause a faulty logic state in the circuit. The error detection circuit can detect the upset node in the latch, and the faulty output will be corrected. An upset node in the error detection circuit itself is corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed with HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).
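
    The filtering behaviour of the C-element, which the abstract relies on to correct upsets inside the detection circuit, can be modelled at the behavioural level as below; this is a sketch of the standard Muller C-element, not the transistor-level latch from the paper (Python).

        def c_element(a, b, previous):
            """Output follows the inputs only when they agree; otherwise it holds state."""
            return a if a == b else previous

        out = 0
        for a, b in [(1, 1), (1, 0), (0, 0)]:   # (1, 0) models a transient upset on one input
            out = c_element(a, b, out)
            print(a, b, "->", out)              # the upset is filtered: output stays at 1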

  12. Apollo 15 mission report: Apollo 15 guidance, navigation, and control system performance analysis report (supplement 1)

    NASA Technical Reports Server (NTRS)

    1972-01-01

    This report contains the results of additional studies which were conducted to confirm the conclusions of the MSC Mission Report, and contains analyses which were not completed in time to meet the mission report deadline. The LM IMU data were examined during the lunar descent and ascent phases. Most of the PGNCS descent absolute velocity error was caused by platform misalignments. The PGNCS radial velocity divergence from the AGS during the early part of descent was partially caused by PGNCS gravity computation differences from the AGS. The remainder of the differences between PGNCS and AGS velocity were easily attributable to attitude reference alignment differences and tolerable instrument errors. For ascent, the PGNCS radial velocity error at insertion was examined. The total error of 10.8 ft/sec was well within mission constraints but larger than expected. Of the total error, 2.30 ft/sec was PIPA bias error, which was suspected to exist before lunar liftoff. The remaining 8.5 ft/sec is most probably attributable to a large pre-liftoff platform misalignment.

  13. Nonuniform code concatenation for universal fault-tolerant quantum computing

    NASA Astrophysics Data System (ADS)

    Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza

    2017-09-01

    Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, based mainly on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as of the 5-qubit code with the 15-qubit Reed-Muller code, which lead to 49-qubit and 47-qubit codes, respectively. These codes can correct any arbitrary single physical error while retaining the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.

  14. Fault Mitigation Schemes for Future Spaceflight Multicore Processors

    NASA Technical Reports Server (NTRS)

    Alexander, James W.; Clement, Bradley J.; Gostelow, Kim P.; Lai, John Y.

    2012-01-01

    Future planetary exploration missions demand significant advances in on-board computing capabilities over current avionics architectures based on a single-core processing element. State-of-the-art multi-core processors hold much promise for meeting such challenges while introducing new fault tolerance problems when applied to space missions. Software-based schemes are presented in this paper that can achieve system-level fault mitigation beyond that provided by radiation-hard-by-design (RHBD) components. For mission- and time-critical applications such as Terrain Relative Navigation (TRN) for planetary or small-body navigation and landing, a range of fault tolerance methods can be adapted by the application. The software methods being investigated include Error Correction Code (ECC) protection for data packets routed between cores, virtual network routing, Triple Modular Redundancy (TMR), and Algorithm-Based Fault Tolerance (ABFT). A robust fault tolerance framework that provides fail-operational behavior under hard real-time constraints and graceful degradation will be demonstrated using TRN executing on a commercial Tilera(R) processor with simulated fault injections.
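
    Of the software methods listed, Triple Modular Redundancy is the simplest to sketch; the toy voter below runs a computation three times in one process and majority-votes the results, whereas a flight implementation would run the copies on separate cores (Python; names are illustrative).

        from collections import Counter

        def tmr(compute, *args):
            """Run the same computation three times and return the majority result."""
            results = [compute(*args) for _ in range(3)]
            value, votes = Counter(results).most_common(1)[0]
            if votes < 2:
                raise RuntimeError("no majority: uncorrectable fault")
            return value

        # Usage: any deterministic computation with a hashable result can be wrapped this way.
        print(tmr(lambda x: x * x, 12))   # 144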

  15. Detection of Error Related Neuronal Responses Recorded by Electrocorticography in Humans during Continuous Movements

    PubMed Central

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2013-01-01

    Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user’s movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) adaptive BMI decoding algorithms can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315

  16. Searching Databases without Query-Building Aids: Implications for Dyslexic Users

    ERIC Educational Resources Information Center

    Berget, Gerd; Sandnes, Frode Eika

    2015-01-01

    Introduction: Few studies document the information searching behaviour of users with cognitive impairments. This paper therefore addresses the effect of dyslexia on information searching in a database with no tolerance for spelling errors and no query-building aids. The purpose was to identify effective search interface design guidelines that…

  17. Space Station Human Factors Research Review. Volume 4: Inhouse Advanced Development and Research

    NASA Technical Reports Server (NTRS)

    Tanner, Trieve (Editor); Clearwater, Yvonne A. (Editor); Cohen, Marc M. (Editor)

    1988-01-01

    A variety of human factors studies related to space station design are presented. Subjects include proximity operations and window design, spatial perceptual issues regarding displays, image management, workload research, spatial cognition, virtual interface, fault diagnosis in orbital refueling, and error tolerance and procedure aids.

  18. Requirements for fault-tolerant factoring on an atom-optics quantum computer.

    PubMed

    Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae

    2013-01-01

    Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large they will be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.

  19. Overexpression of an Arabidopsis thaliana galactinol synthase gene improves drought tolerance in transgenic rice and increased grain yield in the field.

    PubMed

    Selvaraj, Michael Gomez; Ishizaki, Takuma; Valencia, Milton; Ogawa, Satoshi; Dedicova, Beata; Ogata, Takuya; Yoshiwara, Kyouko; Maruyama, Kyonoshin; Kusano, Miyako; Saito, Kazuki; Takahashi, Fuminori; Shinozaki, Kazuo; Nakashima, Kazuo; Ishitani, Manabu

    2017-11-01

    Drought stress, which may be associated with global warming, has often caused significant decreases in crop production. Enhancing drought tolerance without a grain yield penalty has been a great challenge in crop improvement. Here, we report that the Arabidopsis thaliana galactinol synthase 2 gene (AtGolS2) was able to confer drought tolerance and increase grain yield in two different rice (Oryza sativa) genotypes under dry field conditions. The transgenic lines expressing AtGolS2 under the control of the constitutive maize ubiquitin promoter (Ubi:AtGolS2) also had higher levels of galactinol than the non-transgenic control. The increased grain yield of the transgenic rice under drought conditions was related to a higher number of panicles, greater grain fertility, and greater biomass. Extensive confined field trials of Ubi:AtGolS2 transgenic lines in Curinga (a tropical japonica variety) and NERICA4 (an interspecific hybrid), across two different seasons and environments, confirmed the field drought tolerance of the Ubi:AtGolS2 transgenic rice. The improved drought tolerance was associated with higher relative leaf water content, higher photosynthetic activity, a smaller reduction in plant growth, and faster recovery. Collectively, our results provide strong evidence that AtGolS2 is a useful biotechnological tool for reducing grain yield losses in rice, beyond genetic differences, under field drought stress. © 2017 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  20. Certification of computational results

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.

    1993-01-01

    A conceptually novel and powerful technique to achieve fault detection and fault tolerance in hardware and software systems is described. When used for software fault detection, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared and if they agree the results are accepted as correct; otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output, even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance is formalized, and realizations of it are illustrated by considering algorithms for the following problems: convex hull, sorting, and shortest path. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach is also compared to other approaches to fault tolerance.
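
    Sorting is one of the realizations named above; a minimal sketch of the idea is given below, in which the first phase records the sorting permutation as the certification trail and the second phase re-derives and checks the result in linear time, flagging an error if the trail is corrupt (Python; function names are illustrative).

        def phase_one(data):
            """Solve the problem and emit a certification trail (the sorting permutation)."""
            trail = sorted(range(len(data)), key=lambda i: data[i])
            return [data[i] for i in trail], trail

        def phase_two(data, trail):
            """Re-derive the answer using the trail; must reject any corrupted trail."""
            if len(trail) != len(data):
                raise ValueError("error: certification trail has the wrong length")
            seen = [False] * len(data)
            for i in trail:                                      # trail must be a permutation
                if not (0 <= i < len(data)) or seen[i]:
                    raise ValueError("error: certification trail is not a permutation")
                seen[i] = True
            result = [data[i] for i in trail]
            if any(a > b for a, b in zip(result, result[1:])):   # order check, linear time
                raise ValueError("error: trail does not produce a sorted sequence")
            return result

        data = [5, 3, 8, 1]
        result, trail = phase_one(data)
        assert phase_two(data, trail) == result == [1, 3, 5, 8]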
