Sample records for experimental error limits

  1. How to Cope with Gauss's Errors? Motivation for Teaching Data and Uncertainty Analysis from a History of Science Perspective

    ERIC Educational Resources Information Center

    Heinicke, Susanne

    2014-01-01

    Every measurement in science, and every experimental decision, result, and piece of information drawn from it, has to cope with something that has long gone by the name "error". In fact, errors describe our limitations in experimental science, and science looks back on a long tradition of coping with them. The widely known way to cope…

  2. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes, it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
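
    As a quick illustration of what "sloppiness" means in practice, the minimal sketch below (ours, not the paper's) computes the eigenvalues of the Fisher information matrix J^T J for a sum-of-exponentials model; their spread over many decades is the usual signature of practically unidentifiable parameter combinations.

      import numpy as np

      def model(t, rates):
          # y(t) = sum_k exp(-r_k * t); the rates r_k are the parameters
          return np.exp(-np.outer(t, rates)).sum(axis=1)

      t = np.linspace(0.1, 5.0, 50)
      rates = np.array([0.5, 1.0, 2.0, 4.0])

      # finite-difference Jacobian with respect to the log-parameters
      eps = 1e-6
      J = np.empty((t.size, rates.size))
      for k in range(rates.size):
          dr = np.zeros_like(rates)
          dr[k] = eps * rates[k]
          J[:, k] = (model(t, rates + dr) - model(t, rates - dr)) / (2 * eps)

      eigvals = np.linalg.eigvalsh(J.T @ J)
      print("FIM eigenvalues:", eigvals)   # typically spread over many decades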

  3. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes, it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model’s discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model. PMID:27923060

  4. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    NASA Astrophysics Data System (ADS)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  5. The use of a covariate reduces experimental error in nutrient digestion studies in growing pigs

    USDA-ARS?s Scientific Manuscript database

    Covariance analysis limits error, the degree of nuisance variation, and overparameterizing factors to accurately measure treatment effects. Data dealing with growth, carcass composition, and genetics often utilize covariates in data analysis. In contrast, nutritional studies typically do not. The ob...
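
    A minimal sketch of the general idea with invented data and variable names (nothing here comes from the study itself): adding a pre-treatment covariate such as initial body weight to the treatment model shrinks the residual error against which treatment effects are judged.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 40
      wt = rng.normal(25, 3, n)                      # covariate: initial weight, kg
      diet = np.repeat([0, 1], n // 2)               # two hypothetical treatments
      digest = 70 + 2.0 * diet + 0.8 * wt + rng.normal(0, 1.5, n)

      df = pd.DataFrame(dict(digest=digest, diet=diet, wt=wt))
      m_plain = smf.ols("digest ~ C(diet)", data=df).fit()
      m_ancova = smf.ols("digest ~ C(diet) + wt", data=df).fit()
      print(m_plain.mse_resid, m_ancova.mse_resid)   # residual MSE drops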

  6. Effect of error field correction coils on W7-X limiter loads

    NASA Astrophysics Data System (ADS)

    Bozhenkov, S. A.; Jakubowski, M. W.; Niemann, H.; Lazerson, S. A.; Wurden, G. A.; Biedermann, C.; Kocsis, G.; König, R.; Pisano, F.; Stephey, L.; Szepesi, T.; Wenzel, U.; Pedersen, T. S.; Wolf, R. C.; W7-X Team

    2017-12-01

    In the first campaign Wendelstein 7-X was operated with five poloidal graphite limiters installed stellarator-symmetrically. In an ideal situation the power losses would be equally distributed between the limiters. The limiter shape was designed to smoothly distribute the heat flux over two strike lines. Vertically the strike lines are not uniform because of different connection lengths. In this paper it is demonstrated both numerically and experimentally that the heat flux distribution can be significantly changed by a non-resonant n = 1 perturbation field of the order of 10⁻⁴. Numerical studies are performed with field line tracing. In experiments the perturbation fields are excited with five error field trim coils. The limiters are diagnosed with infrared cameras, neutral gas pressure gauges, thermocouples and spectroscopic diagnostics. Experimental results are qualitatively consistent with the simulations. With a suitable choice of the phase and amplitude of the perturbation, a more symmetric plasma-limiter interaction can potentially be achieved. These results are also of interest for the later W7-X divertor operation.

  7. Implementation of an experimental program to investigate the performance characteristics of OMEGA navigation

    NASA Technical Reports Server (NTRS)

    Baxa, E. G., Jr.

    1974-01-01

    A theoretical formulation of differential and composite OMEGA error is presented to establish hypotheses about the functional relationships between various parameters and OMEGA navigational errors. Computer software developed to provide for extensive statistical analysis of the phase data is described. Results from the regression analysis used to conduct parameter sensitivity studies on differential OMEGA error tend to validate the theoretically based hypothesis concerning the relationship between uncorrected differential OMEGA error and receiver separation range and azimuth. Limited results of measurement of receiver repeatability error and line of position measurement error are also presented.

  8. Quantum Error Correction for Metrology

    NASA Astrophysics Data System (ADS)

    Sushkov, Alex; Kessler, Eric; Lovchinsky, Igor; Lukin, Mikhail

    2014-05-01

    The question of the best achievable sensitivity in a quantum measurement is of great experimental relevance, and has seen a lot of attention in recent years. Recent studies [e.g., Nat. Phys. 7, 406 (2011), Nat. Comms. 3, 1063 (2012)] suggest that in most generic scenarios any potential quantum gain (e.g. through the use of entangled states) vanishes in the presence of environmental noise. To overcome these limitations, we propose and analyze a new approach to improve quantum metrology based on quantum error correction (QEC). We identify the conditions under which QEC allows one to improve the signal-to-noise ratio in quantum-limited measurements, and we demonstrate that it enables, in certain situations, Heisenberg-limited sensitivity. We discuss specific applications to nanoscale sensing using nitrogen-vacancy centers in diamond in which QEC can significantly improve the measurement sensitivity and bandwidth under realistic experimental conditions.

  9. The search for causal inferences: using propensity scores post hoc to reduce estimation error with nonexperimental research.

    PubMed

    Tumlinson, Samuel E; Sass, Daniel A; Cano, Stephanie M

    2014-03-01

    While experimental designs are regarded as the gold standard for establishing causal relationships, such designs are usually impractical owing to common methodological limitations. The objective of this article is to illustrate how propensity score matching (PSM) and using propensity scores (PS) as a covariate are viable alternatives to reduce estimation error when experimental designs cannot be implemented. To mimic common pediatric research practices, data from 140 simulated participants were used to resemble an experimental and nonexperimental design that assessed the effect of treatment status on participant weight loss for diabetes. Pretreatment participant characteristics (age, gender, physical activity, etc.) were then used to generate PS for use in the various statistical approaches. Results demonstrate how PSM and using the PS as a covariate can be used to reduce estimation error and improve statistical inferences. References for issues related to the implementation of these procedures are provided to assist researchers.
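
    A hedged sketch of the PSM step with illustrative covariates (none of the names or numbers come from the cited study): fit a treatment-assignment model, then pair each treated unit with the control whose propensity score is nearest.

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 140
      X = pd.DataFrame({"age": rng.normal(12, 2, n),
                        "activity": rng.normal(3, 1, n)})
      # treatment assignment depends on the covariates, as in observational data
      treated = (rng.random(n) < 1 / (1 + np.exp(-(X.age - 12) / 2))).astype(int)

      ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
      controls = np.where(treated == 0)[0]
      matches = {i: controls[np.argmin(np.abs(ps[controls] - ps[i]))]
                 for i in np.where(treated == 1)[0]}   # nearest-neighbor match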

  10. Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods.

    PubMed

    Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G

    2014-01-27

    Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample is inverted numerically to retrieve its image. The technique recovers the phase, which is lost in detecting the diffraction patterns, by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image is limited by the angular extent over which the diffraction patterns are recorded and by how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, set by the signal-to-noise ratio of the diffraction patterns and the amount of overlap between adjacent scan positions, on just how large these errors can be and still be rendered tractable by this method.

  11. Optimal Objective-Based Experimental Design for Uncertain Dynamical Gene Networks with Experimental Error.

    PubMed

    Mohsenizadeh, Daniel N; Dehghannasiri, Roozbeh; Dougherty, Edward R

    2018-01-01

    In systems biology, network models are often used to study interactions among cellular components, a salient aim being to develop drugs and therapeutic mechanisms to change the dynamical behavior of the network to avoid undesirable phenotypes. Owing to limited knowledge, model uncertainty is commonplace and network dynamics can be updated in different ways, thereby giving multiple dynamic trajectories, that is, dynamics uncertainty. In this manuscript, we propose an experimental design method that can effectively reduce the dynamics uncertainty and improve performance in an interaction-based network. Both dynamics uncertainty and experimental error are quantified with respect to the modeling objective, herein, therapeutic intervention. The aim of experimental design is to select among a set of candidate experiments the experiment whose outcome, when applied to the network model, maximally reduces the dynamics uncertainty pertinent to the intervention objective.

  12. Experimental investigation of control/display augmentation effects in a compensatory tracking task

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Schmidt, David K.

    1988-01-01

    The effects of control/display augmentation on human performance and workload have been investigated for closed-loop, continuous-tracking tasks by a real-time, man-in-the-loop simulation study. The experimental results obtained indicate that only limited improvement in actual tracking performance is obtainable through display augmentation alone; with a very high level of display augmentation, tracking performance will actually deteriorate. Tracking performance improves when status information is furnished for reasonable levels of display quickening; again, very high quickening levels lead to deteriorating tracking performance due to the incompatibility between the status information and the quickened signal.

  13. Optimization of the moving-bed biofilm sequencing batch reactor (MBSBR) to control aeration time by kinetic computational modeling: Simulated sugar-industry wastewater treatment.

    PubMed

    Faridnasr, Maryam; Ghanbari, Bastam; Sassani, Ardavan

    2016-05-01

    A novel approach was applied for optimization of a moving-bed biofilm sequencing batch reactor (MBSBR) to treat sugar-industry wastewater (BOD5=500-2500 and COD=750-3750 mg/L) at 2-4 h of cycle time (CT). Although the experimental data showed that the MBSBR reached high BOD5 and COD removal performances, it failed to achieve the standard limits at the mentioned CTs. Thus, optimization of the reactor was performed by kinetic computational modeling, using the normalized root mean square error (NRMSE) as a statistical error indicator. The NRMSE results revealed that the Stover-Kincannon (error=6.40%) and Grau (error=6.15%) models provide better fits to the experimental data and may be used for CT optimization in the reactor. The models predicted required CTs of 4.5, 6.5, 7 and 7.5 h for effluent standardization of 500, 1000, 1500 and 2500 mg/L influent BOD5 concentrations, respectively. A similar pattern in the experimental data confirmed these findings.
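
    For reference, a minimal sketch of the NRMSE criterion used to rank the models; normalization conventions vary, a range-normalized form is assumed here, and the numbers are invented.

      import numpy as np

      def nrmse(observed, predicted):
          rmse = np.sqrt(np.mean((observed - predicted) ** 2))
          return rmse / (observed.max() - observed.min())

      cod_obs = np.array([3750, 2900, 2100, 1450, 980.0])    # hypothetical mg/L
      cod_fit = np.array([3750, 2850, 2150, 1500, 1000.0])   # model prediction
      print(f"NRMSE = {100 * nrmse(cod_obs, cod_fit):.2f}%")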

  14. Statistical evaluation of accelerated stability data obtained at a single temperature. I. Effect of experimental errors in evaluation of stability data obtained.

    PubMed

    Yoshioka, S; Aso, Y; Takeda, Y

    1990-06-01

    Accelerated stability data obtained at a single temperature are statistically evaluated, and the utility of such data for the assessment of stability is discussed, focusing on the chemical stability of solution-state dosage forms. The probability that the drug content of a product is observed to be within the lower specification limit in the accelerated test is interpreted graphically. This probability depends on experimental errors in the assay and temperature control, as well as on the true degradation rate and activation energy. Therefore, the observation that the drug content meets the specification in accelerated testing provides only limited information on the shelf-life of the drug unless the activation energy and the accuracy and precision of the assay and temperature control are known.
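
    A small numeric illustration of the point (assumed first-order kinetics and invented rates, not the paper's data): extrapolating a single accelerated-test rate to the storage temperature via the Arrhenius equation depends strongly on the activation energy, so without knowing Ea the shelf-life estimate is ambiguous.

      import numpy as np

      R = 8.314      # J/(mol K)
      k40 = 0.02     # hypothetical first-order degradation rate at 40 C, 1/month

      for Ea in (60e3, 80e3, 100e3):        # plausible range of Ea, J/mol
          k25 = k40 * np.exp(-Ea / R * (1 / 298.15 - 1 / 313.15))
          t90 = np.log(100 / 90) / k25      # months until 90% of label claim
          print(f"Ea = {Ea / 1e3:.0f} kJ/mol -> t90 ~ {t90:.1f} months")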

  15. Linear optical quantum metrology with single photons: Experimental errors, resource counting, and quantum Cramér-Rao bounds

    NASA Astrophysics Data System (ADS)

    Olson, Jonathan P.; Motes, Keith R.; Birchall, Patrick M.; Studer, Nick M.; LaBorde, Margarite; Moulder, Todd; Rohde, Peter P.; Dowling, Jonathan P.

    2017-07-01

    Quantum number-path entanglement is a resource for supersensitive quantum metrology and in particular provides for sub-shot-noise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place, typically requiring either very strong nonlinearities or nondeterministic preparation schemes with feedforward, which are difficult to implement. Recently [K. R. Motes et al., Phys. Rev. Lett. 114, 170802 (2015), 10.1103/PhysRevLett.114.170802], it was shown that number-path entanglement from a BosonSampling-inspired interferometer can be used to beat the shot-noise limit. In this paper we compare and contrast different interferometric schemes, discuss resource counting, calculate exact quantum Cramér-Rao bounds, and study details of experimental errors.

  16. Statistics is not enough: revisiting Ronald A. Fisher's critique (1936) of Mendel's experimental results (1866).

    PubMed

    Pilpel, Avital

    2007-09-01

    This paper is concerned with the role of rational belief change theory in the philosophical understanding of experimental error. Today, philosophers seek insight about error in the investigation of specific experiments, rather than in general theories. Nevertheless, rational belief change theory adds to our understanding of just such cases, R. A. Fisher's criticism of Mendel's experiments being a case in point. After an historical introduction, the main part of this paper investigates Fisher's paper from the point of view of rational belief change theory: what changes of belief about Mendel's experiments Fisher goes through, and with what justification. It leads to surprising insights about what Fisher had done right and wrong, and, more generally, about the limits of statistical methods in detecting error.

  17. Practical issues in ultrashort-laser-pulse measurement using frequency-resolved optical gating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLong, K.W.; Fittinghoff, D.N.; Trebino, R.

    1996-07-01

    The authors explore several practical experimental issues in measuring ultrashort laser pulses using the technique of frequency-resolved optical gating (FROG). They present a simple method for checking the consistency of experimentally measured FROG data with the independently measured spectrum and autocorrelation of the pulse. This method is a powerful way of discovering systematic errors in FROG experiments. They show how to determine the optimum sampling rate for FROG and show that this satisfies the Nyquist criterion for the laser pulse. They explore the low- and high-power limits to FROG and determine that femtojoule operation should be possible, while the effects of self-phase modulation limit the highest signal efficiency in FROG to 1%. They also show quantitatively that the temporal blurring due to a finite-thickness medium in single-shot geometries does not strongly limit the FROG technique. They explore the limiting time-bandwidth values that can be represented on a FROG trace of a given size. Finally, they report on a new measure of the FROG error that improves convergence in the presence of noise.

  18. A systematic comparison of error correction enzymes by next-generation sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.

    Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as ErrASE preferentially correcting C/G transversions and T7 Endonuclease I preferentially correcting A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.

  19. A systematic comparison of error correction enzymes by next-generation sequencing

    DOE PAGES

    Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.; ...

    2017-08-01

    Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as ErrASE preferentially correcting C/G transversions and T7 Endonuclease I preferentially correcting A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.

  20. Errors induced by catalytic effects in premixed flame temperature measurements

    NASA Astrophysics Data System (ADS)

    Pita, G. P. A.; Nina, M. N. R.

    The evaluation of instantaneous temperature in a premixed flame using fine-wire Pt/Pt-13%Rh thermocouples was found to be subject to significant errors due to catalytic effects. An experimental study was undertaken to assess the influence of local fuel/air ratio, thermocouple wire diameter, and gas velocity on the thermocouple reading errors induced by the catalytic surface reactions. Measurements made with both coated and uncoated thermocouples showed that the catalytic effect imposes severe limitations on the accuracy of mean and fluctuating gas temperature in the radical-rich flame zone.

  1. Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.

    PubMed

    Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D

    2018-04-07

    We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. This requires both accurate delineation of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system; therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of the relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in a homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within a 10 mm diameter change and 9.1% within a 5 mm shift, respectively. Modest errors in the assumed transducer separation produced the largest SOS errors among the miscalibrations (57.3% within a 5 mm shift); still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in the specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms.

  2. When linearity prevails over hierarchy in syntax

    PubMed Central

    Willer Gold, Jana; Arsenijević, Boban; Batinić, Mia; Becker, Michael; Čordalija, Nermina; Kresić, Marijana; Leko, Nedžad; Marušič, Franc Lanko; Milićev, Tanja; Milićević, Nataša; Mitić, Ivana; Peti-Stantić, Anita; Stanković, Branimir; Šuligoj, Tina; Tušek, Jelena; Nevins, Andrew

    2018-01-01

    Hierarchical structure has been cherished as a grammatical universal. We use experimental methods to show where linear order is also a relevant syntactic relation. An identical methodology and design were used across six research sites on South Slavic languages. Experimental results show that in certain configurations, grammatical production can in fact favor linear order over hierarchical structure. However, these findings are limited to coordinate structures and distinct from the kind of production errors found with comparable configurations such as “attraction” errors. The results demonstrate that agreement morphology may be computed in a series of steps, one of which is partly independent from syntactic hierarchy. PMID:29288218

  3. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    NASA Astrophysics Data System (ADS)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can result in errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed. This eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated, and therefore their optical paths do not overlap. Thus, the main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  4. GURU v2.0: An interactive Graphical User interface to fit rheometer curves in Han's model for rubber vulcanization

    NASA Astrophysics Data System (ADS)

    Milani, G.; Milani, F.

    A GUI software (GURU) for experimental data fitting of rheometer curves in Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded in GURU from an Excel spreadsheet coming from the output of the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from computations. Three kinetic constants must be determined so as to minimize the absolute error between normalized experimental data and the numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. In contrast, GURU works interactively by means of a Graphical User Interface (GUI) to minimize the error and allows an interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders allows the assignment of a value for each kinetic constant and a visual comparison between numerical and experimental curves. Users will thus find optimal values of the constants by means of a classic trial and error strategy. An experimental case of technical relevance is shown as a benchmark.
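
    For comparison, a sketch of the standard least-squares route that GURU's sliders replace; the cure-curve expression below is a generic consecutive-reaction stand-in, not Han's actual closed-form solution.

      import numpy as np
      from scipy.optimize import least_squares

      def cure(t, k1, k2):
          # generic two-constant crosslink-density curve (rise and reversion)
          return k1 / (k1 - k2) * (np.exp(-k2 * t) - np.exp(-k1 * t))

      t = np.linspace(0.5, 30, 60)   # min, taken past the induction period
      data = cure(t, 0.6, 0.15) + np.random.default_rng(2).normal(0, 0.01, t.size)

      fit = least_squares(lambda p: cure(t, *p) - data, x0=[1.0, 0.1],
                          bounds=([1e-6, 1e-6], [10.0, 10.0]))
      print("fitted kinetic constants:", fit.x)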

  5. High spatial precision nano-imaging of polarization-sensitive plasmonic particles

    NASA Astrophysics Data System (ADS)

    Liu, Yunbo; Wang, Yipei; Lee, Somin Eunice

    2018-02-01

    Precise polarimetric imaging of polarization-sensitive nanoparticles is essential for resolving their accurate spatial positions beyond the diffraction limit. However, conventional technologies currently suffer from beam deviation errors which cannot be corrected beyond the diffraction limit. To overcome this issue, we experimentally demonstrate a spatially stable nano-imaging system for polarization-sensitive nanoparticles. In this study, we show that by integrating a voltage-tunable imaging variable polarizer with optical microscopy, we are able to suppress beam deviation errors. We expect that this nano-imaging system should allow for acquisition of accurate positional and polarization information from individual nanoparticles in applications where real-time, high precision spatial information is required.

  6. Bias error reduction using ratios to baseline experiments. Heat transfer case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakroun, W.; Taylor, R.P.; Coleman, H.W.

    1993-10-01

    Employing a set of experiments devoted to examining the effect of surface finish (riblets) on convective heat transfer as an example, this technical note seeks to explore the notion that precision uncertainties in experiments can be reduced by repeated trials and averaging. This scheme for bias error reduction can give considerable advantage when parametric effects are investigated experimentally. When the results of an experiment are presented as a ratio with the baseline results, a large reduction in the overall uncertainty can be achieved when all the bias limits in the variables of the experimental result are fully correlated with those of the baseline case. 4 refs.
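
    The cancellation the note exploits follows from standard propagation of bias limits for a ratio (a worked toy example; the 5% figures and the variable framing are illustrative, not from the note):

      import math

      # relative bias limits of the test-case and baseline results
      b_x, b_base = 0.05, 0.05

      # ratio r = x / x_base; a correlation term subtracts for shared biases:
      #   (B_r / r)^2 = (B_x/x)^2 + (B_b/b)^2 - 2*rho*(B_x/x)*(B_b/b)
      uncorrelated = math.hypot(b_x, b_base)                         # rho = 0
      correlated = math.sqrt(b_x**2 + b_base**2 - 2 * b_x * b_base)  # rho = 1
      print(f"{uncorrelated:.3f} vs {correlated:.3f}")               # 0.071 vs 0.000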

  7. Quantum error correction in crossbar architectures

    NASA Astrophysics Data System (ADS)

    Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie

    2018-07-01

    A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.

  8. Quantum-state anomaly detection for arbitrary errors using a machine-learning technique

    NASA Astrophysics Data System (ADS)

    Hara, Satoshi; Ono, Takafumi; Okamoto, Ryo; Washio, Takashi; Takeuchi, Shigeki

    2016-10-01

    The accurate detection of small deviations in a given density matrix is important for quantum information processing; it is a difficult task because of the intrinsic fluctuation in density matrices reconstructed using a limited number of experiments. We previously proposed a method for decoherence error detection using a machine-learning technique [S. Hara, T. Ono, R. Okamoto, T. Washio, and S. Takeuchi, Phys. Rev. A 89, 022104 (2014), 10.1103/PhysRevA.89.022104]. However, the previous method is not valid when the errors are just changes in phase. Here, we propose a method that is valid for arbitrary errors in density matrices. The performance of the proposed method is verified using both numerical simulation data and real experimental data.

  9. Measuring the Utility of a Cyber Incident Mission Impact Assessment (CIMIA) Process for Mission Assurance

    DTIC Science & Technology

    2011-03-01

    [Table fragment: Levene's tests of the null hypothesis that the error variance of the dependent variable is equal across groups, e.g. F(1, 22) = 1.179, p = .289; POP-UP F(1, 22) = .000, p = .991; POP-UP F(1, 22) = 2.104, p = .161.] The design also limited the number of intended treatments; the experimental design was originally supposed to test all three adverse events that threaten

  10. Optimal joint measurements of complementary observables by a single trapped ion

    NASA Astrophysics Data System (ADS)

    Xiong, T. P.; Yan, L. L.; Ma, Z. H.; Zhou, F.; Chen, L.; Yang, W. L.; Feng, M.; Busch, P.

    2017-06-01

    The uncertainty relations, pioneered by Werner Heisenberg nearly 90 years ago, set a fundamental limitation on the joint measurability of complementary observables. This limitation has long been a subject of debate, which has been reignited recently due to new proposed forms of measurement uncertainty relations. The present work is associated with a new error trade-off relation for compatible observables approximating two incompatible observables, in keeping with the spirit of Heisenberg’s original ideas of 1927. We report the first direct test and confirmation of the tight bounds prescribed by such an error trade-off relation, based on an experimental realisation of optimal joint measurements of complementary observables using a single ultracold ⁴⁰Ca⁺ ion trapped in a harmonic potential. Our work provides a prototypical determination of ultimate joint measurement error bounds with potential applications in quantum information science for high-precision measurement and information security.

  11. Technical Note: Millimeter precision in ultrasound based patient positioning: Experimental quantification of inherent technical limitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun

    2014-08-15

    Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
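
    The quoted 1.05 mm is consistent with combining the four independent error contributions in quadrature (root-sum-square), as this small check shows:

      import math

      errors_mm = [0.78, 0.51, 0.49, 0.03]
      total = math.sqrt(sum(e ** 2 for e in errors_mm))
      print(f"combined radial error: {total:.2f} mm")   # -> 1.05 mm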

  12. Five-wave-packet quantum error correction based on continuous-variable cluster entanglement

    PubMed Central

    Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi

    2015-01-01

    Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error; the original theoretical model was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used for five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, and thus any error appearing in the remaining two channels never affects the output state, i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395

  13. MRMPlus: an open source quality control and assessment tool for SRM/MRM assay development.

    PubMed

    Aiyetan, Paul; Thomas, Stefani N; Zhang, Zhen; Zhang, Hui

    2015-12-12

    Selected and multiple reaction monitoring involves monitoring a multiplexed assay of proteotypic peptides and associated transitions in mass spectrometry runs. To describe peptides and associated transitions as stable, quantifiable, and reproducible representatives of proteins of interest, experimental and analytical validation is required. However, inadequate and disparate analytical tools and validation methods predispose assay performance measures to errors and inconsistencies. Implemented as a freely available, open-source tool in the platform-independent Java programming language, MRMPlus computes analytical measures as recommended recently by the Clinical Proteomics Tumor Analysis Consortium Assay Development Working Group for "Tier 2" assays, that is, non-clinical assays sufficient to measure changes due to both biological and experimental perturbations. Computed measures include limit of detection, lower limit of quantification, linearity, carry-over, partial validation of specificity, and upper limit of quantification. MRMPlus streamlines the assay development analytical workflow and therefore minimizes error predisposition. MRMPlus may also be used for performance estimation for targeted assays not described by the Assay Development Working Group. MRMPlus' source code and compiled binaries can be freely downloaded from https://bitbucket.org/paiyetan/mrmplusgui and https://bitbucket.org/paiyetan/mrmplusgui/downloads, respectively.
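
    An illustrative sketch, not MRMPlus source code, of two of the listed figures of merit under common textbook definitions (3-sigma limit of detection from blank replicates, linearity from a calibration fit); all numbers are invented.

      import numpy as np

      blanks = np.array([0.8, 1.1, 0.9, 1.3, 1.0])       # blank signals, a.u.
      lod = blanks.mean() + 3 * blanks.std(ddof=1)       # common 3-sigma rule

      conc = np.array([1, 2, 5, 10, 20, 50.0])           # spiked levels
      signal = 2.1 * conc + np.random.default_rng(3).normal(0, 0.5, conc.size)
      slope, intercept = np.polyfit(conc, signal, 1)
      r2 = np.corrcoef(conc, signal)[0, 1] ** 2          # linearity check
      print(f"LOD ~ {lod:.2f} a.u., slope = {slope:.2f}, R^2 = {r2:.4f}")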

  14. Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection.

    PubMed

    Gürsoy, Doğa; Hong, Young P; He, Kuan; Hujsak, Karl; Yoo, Seunghwan; Chen, Si; Li, Yue; Ge, Mingyuan; Miller, Lisa M; Chu, Yong S; De Andrade, Vincent; He, Kai; Cossairt, Oliver; Katsaggelos, Aggelos K; Jacobsen, Chris

    2017-09-18

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
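
    A minimal sketch of the joint loop for a 2D parallel-beam geometry; the paper's implementation differs in detail, and the scikit-image calls below (a recent version is assumed for the filter_name argument) stand in for the authors' own code.

      import numpy as np
      from skimage.transform import radon, iradon
      from skimage.registration import phase_cross_correlation

      def align(projections, angles, n_iter=10):
          # projections: (n_angles, n_det) sinogram rows with unknown shifts
          shifts = np.zeros(len(angles))
          for _ in range(n_iter):
              # reconstruct from the currently-aligned projections ...
              aligned = np.stack([np.roll(p, int(round(s)))
                                  for p, s in zip(projections, shifts)])
              recon = iradon(aligned.T, theta=angles, filter_name="ramp")
              # ... then re-project and register each projection against it
              reproj = radon(recon, theta=angles).T
              for i in range(len(angles)):
                  shift, _, _ = phase_cross_correlation(
                      reproj[i][None, :], aligned[i][None, :], upsample_factor=10)
                  shifts[i] += shift[1]
          return shifts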

  15. Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gürsoy, Doğa; Hong, Young P.; He, Kuan

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.

  16. Computational estimation of errors generated by lumping of physiologically-based pharmacokinetic (PBPK) interaction models of inhaled complex chemical mixtures

    EPA Science Inventory

    Many cases of environmental contamination result in concurrent or sequential exposure to more than one chemical. However, limitations of available resources make it unlikely that experimental toxicology will provide health risk information about all the possible mixtures to which...

  17. Experimental study on performance verification tests for coordinate measuring systems with optical distance sensors

    NASA Astrophysics Data System (ADS)

    Carmignato, Simone

    2009-01-01

    Optical sensors are increasingly used for dimensional and geometrical metrology. However, the lack of international standards for testing optical coordinate measuring systems is currently limiting the traceability of measurements and the easy comparison of different optical systems. This paper presents an experimental investigation on artefacts and procedures for testing coordinate measuring systems equipped with optical distance sensors. The work is aimed at contributing to the standardization of testing methods. The VDI/VDE 2617-6.2:2005 guideline, which is probably the most complete document available at the state of the art for testing systems with optical distance sensors, is examined with specific experiments. Results from the experiments are discussed, with particular reference to the tests used for determining the following characteristics: error of indication for size measurement, probing error and structural resolution. Particular attention is given to the use of artefacts alternative to gauge blocks for determining the error of indication for size measurement.

  18. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with existing methods, ours does not need to estimate a threshold on the absolute phase values to determine fringe order errors, which makes it more reliable and avoids the search procedure used in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
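
    For context, a sketch of the standard two-frequency fringe-order computation whose occasional failures the proposed method detects and corrects (symbols follow common usage, not necessarily the paper's):

      import numpy as np

      def fringe_order(phi_lo, phi_hi, f_lo, f_hi):
          # phi_lo: unwrapped phase at the low spatial frequency
          # phi_hi: wrapped phase at the high frequency, in (-pi, pi]
          return np.round((f_hi / f_lo * phi_lo - phi_hi) / (2 * np.pi))

      # the absolute phase is then phi_hi + 2*pi*k; noise in phi_lo beyond the
      # error bound flips k by +-1, producing the spike artifacts at issue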

  19. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    PubMed

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and the model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
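
    A toy version of the processing chain the analysis concerns (edge spread function, then line spread function by differentiation, then the normalized FFT magnitude); real implementations first project pixels along the fitted edge angle to build an oversampled ESF, which is exactly where the edge-angle error enters.

      import numpy as np

      esf = 1 / (1 + np.exp(-np.linspace(-6, 6, 128)))   # ideal oversampled edge
      lsf = np.gradient(esf)                             # line spread function
      mtf = np.abs(np.fft.rfft(lsf))
      mtf /= mtf[0]                                      # normalize to DC
      print("MTF at the lowest frequencies:", mtf[:4])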

  20. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    PubMed

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  1. Estimation of reflectance from camera responses by the regularized local linear model.

    PubMed

    Zhang, Wei-Feng; Tang, Gongguo; Dai, Dao-Qing; Nehorai, Arye

    2011-10-01

    Because of the limited approximation capability of fixed basis functions, the performance of reflectance estimation by traditional linear models is not optimal. We propose an approach based on the regularized local linear model. Our approach performs efficiently, and knowledge of the spectral power distribution of the illuminant and the spectral sensitivities of the camera is not needed. Experimental results show that the proposed method performs better than some well-known methods in terms of both reflectance error and colorimetric error.
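
    A hedged sketch of one plausible reading of the estimator (a ridge-regularized linear fit on the k nearest training samples); the function and argument names are ours, not the paper's.

      import numpy as np

      def estimate_reflectance(resp, train_resp, train_refl, k=30, lam=1e-3):
          # resp: (3,) camera response; train_resp: (N, 3); train_refl: (N, B)
          idx = np.argsort(np.linalg.norm(train_resp - resp, axis=1))[:k]
          X, Y = train_resp[idx], train_refl[idx]
          # ridge-regularized local linear map from responses to reflectances
          W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
          return resp @ W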

  2. Transrectal Near-Infrared Optical Tomography for Prostate Imaging

    DTIC Science & Technology

    2011-03-01

    [Text fragments: the experimental measurements are grouped with the FEM and the MC for examining the analytic predictions; Section 5 examines the analytic...; as well as other experimental limitations, but the error was controlled to be within 0.9 mm for the case-azi and 0.5 mm for the case-longi...]

  3. Error field measurement, correction and heat flux balancing on Wendelstein 7-X

    DOE PAGES

    Lazerson, Samuel A.; Otte, Matthias; Jakubowski, Marcin; ...

    2017-03-10

    The measurement and correction of error fields in Wendelstein 7-X (W7-X) is critical to long pulse high beta operation, as small error fields may cause overloading of divertor plates in some configurations. Accordingly, as part of a broad collaborative effort, the detection and correction of error fields on the W7-X experiment has been performed using the trim coil system in conjunction with the flux surface mapping diagnostic and a high resolution infrared camera. In the early commissioning phase of the experiment, the trim coils were used to open an n/m = 1/2 island chain in a specially designed magnetic configuration. The flux surface mapping diagnostic was then able to directly image the magnetic topology of the experiment, allowing the inference of a small (~4 cm) intrinsic island chain. The suspected main sources of the error field, slight misalignment and deformations of the superconducting coils, are then confirmed through experimental modeling using the detailed measurements of the coil positions. Observations of the limiter temperatures in module 5 show a clear dependence of the limiter heat flux pattern on the rotation of the perturbing fields. Plasma experiments without applied correcting fields show a significant asymmetry in neutral pressure (centered in module 4) and light emission (visible, H-alpha, CII, and CIII). Such pressure asymmetry is associated with plasma-wall (limiter) interaction asymmetries between the modules. Application of trim coil fields with an n = 1 waveform corrects the imbalance. Confirmation of the error fields allows the assessment of magnetic fields which resonate with the n/m = 5/5 island chain.

  4. Construction and assembly of the wire planes for the MicroBooNE Time Projection Chamber

    DOE PAGES

    Acciarri, R.; Adams, C.; Asaadi, J.; ...

    2017-03-09

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.

  5. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    PubMed

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of the molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorbance levels generally considered 'safe' (i.e. absorbance < 1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of the molecular extinction coefficient is required to ensure robust analytical methods.
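
    A numeric check of the modeled deviation with an invented absorption band and concentration: with a finite spectral width, the measured absorbance, computed from the band-averaged transmittance, falls below the monochromatic Beer-Lambert value.

      import numpy as np

      eps = np.linspace(900, 1100, 101)   # extinction coefficient across the band
      c, path = 1e-3, 1.0                 # mol/L and cm (hypothetical)

      A_mono = eps.mean() * c * path                          # ideal: 1.000
      A_poly = -np.log10(np.mean(10.0 ** (-eps * c * path)))  # what is measured
      print(f"error = {100 * (A_mono - A_poly) / A_mono:.2f}%")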

  6. Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection

    DOE PAGES

    Gürsoy, Doğa; Hong, Young P.; He, Kuan; ...

    2017-09-18

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.

  7. Construction and assembly of the wire planes for the MicroBooNE Time Projection Chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; Asaadi, J.

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.

  8. Absolute emission cross sections for electron capture reactions of C2+, N3+, N4+ and O3+ ions in collisions with Li(2s) atoms

    NASA Astrophysics Data System (ADS)

    Rieger, G.; Pinnington, E. H.; Ciubotariu, C.

    2000-12-01

    Absolute photon emission cross sections following electron capture reactions have been measured for C2+, N3+, N4+ and O3+ ions colliding with Li(2s) atoms at keV energies. The results are compared with calculations using the extended classical over-the-barrier model by Niehaus. We explore the limits of our experimental method and present a detailed discussion of experimental errors.

  9. Nonadiabatic fluctuation in the measured geometric phase

    NASA Astrophysics Data System (ADS)

    Ai, Qing; Huo, Wenyi; Long, Gui Lu; Sun, C. P.

    2009-08-01

    We study how the nonadiabatic effect causes the observable fluctuation in the “geometric phase” for a two-level system, which is defined as the experimentally measurable quantity in the adiabatic limit. From the exact Rabi solution to this model, we give a reasonable explanation for the experimental discovery of phase fluctuation in the superconducting circuit system [P. J. Leek, J. M. Fink, A. Blais, R. Bianchetti, M. Göppl, J. M. Gambetta, D. I. Schuster, L. Frunzio, R. J. Schoelkopf, and A. Wallraff, Science 318, 1889 (2007)], which had previously been regarded as conventional experimental error.

  10. The structure and energetics of Cr(CO)6 and Cr(CO)5

    NASA Technical Reports Server (NTRS)

    Barnes, Leslie A.; Liu, Bowen; Lindh, Roland

    1992-01-01

    The geometric structure of Cr(CO)6 is optimized at the modified coupled pair functional (MCPF), single and double excitation coupled-cluster (CCSD) and CCSD(T) levels of theory (including a perturbational estimate for connected triple excitations), and the force constants for the totally symmetric representation are determined. The geometry of Cr(CO)5 is partially optimized at the MCPF, CCSD, and CCSD(T) levels of theory. Comparison with experimental data shows that the CCSD(T) method gives the best results for the structures and force constants, and that remaining errors are probably due to deficiencies in the one-particle basis sets used for CO. The total binding energies of Cr(CO)6 and Cr(CO)5 are also determined at the MCPF, CCSD, and CCSD(T) levels of theory. The CCSD(T) method gives a much larger total binding energy than either the MCPF or CCSD methods. An analysis of the basis set superposition error (BSSE) at the MCPF level of treatment points out limitations in the one-particle basis used. Calculations using larger basis sets reduce the BSSE, but the total binding energy of Cr(CO)6 is still significantly smaller than the experimental value, although the first CO bond dissociation energy of Cr(CO)6 is well described. An investigation of 3s3p correlation reveals only a small effect. In the largest basis set, the total CO binding energy of Cr(CO)6 is estimated to be 140 kcal/mol at the CCSD(T) level of theory, or about 86 percent of the experimental value. The remaining discrepancy between the experimental and theoretical value is probably due to limitations in the one-particle basis, rather than limitations in the correlation treatment. In particular an additional d function and an f function on each C and O are needed to obtain quantitative results. This is underscored by the fact that even using a very large primitive set (1042 primitive functions contracted to 300 basis functions), the superposition error for the total binding energy of Cr(CO)6 is 22 kcal/mol at the MCPF level of treatment.

  11. Optical injection phase-lock loops

    NASA Astrophysics Data System (ADS)

    Bordonalli, Aldario Chrestani

    Locking techniques have been widely applied for frequency synchronisation of semiconductor lasers used in coherent communication and microwave signal generation systems. Two main locking techniques, the optical phase-lock loop (OPLL) and optical injection locking (OIL), are analysed in this thesis. The principal limitations on OPLL performance result from the loop propagation delay, which makes it difficult to implement high-gain, wide-bandwidth loops, leading to poor phase noise suppression and requiring the linewidths of the semiconductor laser sources to be less than a few megahertz for practical values of loop delay. The OIL phase noise suppression is controlled by the injected power. The principal limitations of the OIL implementation are the finite phase error under locked conditions and the narrow stable locking range the system provides at the injected power levels required to reduce the phase noise output of semiconductor lasers significantly. This thesis demonstrates theoretically and experimentally that it is possible to overcome the limitations of OPLL and OIL systems by combining them to form an optical injection phase-lock loop (OIPLL). The modelling of an OIPLL system is presented and compared with the equivalent OPLL and OIL results. The optical and electrical design of a homodyne OIPLL is detailed. Experimental results are given which verify the theoretical prediction that the OIPLL keeps the phase noise suppression as high as that of the OIL system over a much wider stable locking range, even with wide-linewidth lasers and long loop delays. The experimental results for lasers with a summed linewidth of 36 MHz and a loop delay of 15 ns showed measured phase error variances as low as 0.006 rad^2 (500 MHz bandwidth) for locking bandwidths greater than 26 GHz, compared with the equivalent OPLL phase error variance of around 1 rad^2 (500 MHz bandwidth) and the equivalent OIL locking bandwidth of less than 1.2 GHz.

  12. Experimental quantum verification in the presence of temporally correlated noise

    NASA Astrophysics Data System (ADS)

    Mavadia, S.; Edmunds, C. L.; Hempel, C.; Ball, H.; Roy, F.; Stace, T. M.; Biercuk, M. J.

    2018-02-01

    Growth in the capabilities of quantum information hardware mandates access to techniques for performance verification that function under realistic laboratory conditions. Here we experimentally characterise the impact of common temporally correlated noise processes on both randomised benchmarking (RB) and gate-set tomography (GST). Our analysis highlights the role of sequence structure in enhancing or suppressing the sensitivity of quantum verification protocols to either slowly or rapidly varying noise, which we treat in the limiting cases of quasi-DC miscalibration and white noise power spectra. We perform experiments with a single trapped 171Yb+ ion qubit and inject engineered noise (∝ σ_z) to probe protocol performance. Experiments on RB validate predictions that measured fidelities over sequences are described by a gamma distribution varying between approximately Gaussian, and a broad, highly skewed distribution for rapidly and slowly varying noise, respectively. Similarly, we find a strong gate-set dependence of default experimental GST procedures in the presence of correlated errors, leading to significant deviations between estimated and calculated diamond distances in the presence of correlated σ_z errors. Numerical simulations demonstrate that expansion of the gate set to include negative rotations can suppress these discrepancies and increase reported diamond distances by orders of magnitude for the same error processes. Similar effects do not occur for correlated σ_x or σ_y errors or depolarising noise processes, highlighting the critical interplay of the selected gate set and the gauge optimisation process in determining the meaning of the reported diamond norm in correlated noise environments.
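
    A toy model (not the paper's engineered σ_z noise or Clifford sequences; all parameters are assumed) reproducing the qualitative contrast described above: a static over-rotation accumulates coherently over a sequence and yields a broad, skewed fidelity distribution, whereas rapidly varying noise accumulates diffusively and yields a narrow, approximately Gaussian one.

        import numpy as np

        rng = np.random.default_rng(1)
        m, n_seq, sigma = 100, 5000, 0.01      # gates per sequence, sequences, rad

        # Quasi-DC miscalibration: one over-rotation angle per sequence,
        # coherent build-up of the error over the m gates.
        eps_dc = rng.normal(0.0, sigma, n_seq)
        f_dc = np.cos(m * eps_dc / 2.0) ** 2

        # Rapidly varying ('white') noise: an independent angle error at every
        # gate, so the accumulated phase performs a random walk instead.
        eps_w = rng.normal(0.0, sigma, (n_seq, m))
        f_w = np.cos(np.sum(eps_w, axis=1) / 2.0) ** 2

        for name, f in (("quasi-DC", f_dc), ("white", f_w)):
            skew = np.mean((f - f.mean()) ** 3) / f.std() ** 3
            print(f"{name:9s} mean={f.mean():.4f}  std={f.std():.4f}  skew={skew:+.2f}")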

  13. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can arise when the sample size and allocation rate to the treatment arms are modified at an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
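
    A Monte Carlo illustration of the worst-case mechanism (the paper's treatment is analytic and more general; the stage-1 size, stage-2 grid, and one-sided 2.5% level here are assumptions): under H0, an experimenter who has seen the interim z-statistic picks the stage-2 sample size that maximizes the conditional probability that the naive pooled z-test rejects.

        import numpy as np
        from scipy.stats import norm

        z_a, n1 = norm.ppf(0.975), 50           # naive critical value, stage-1 size/arm
        n2_grid = np.arange(1, 401)             # allowed stage-2 sizes

        rng = np.random.default_rng(2)
        z1 = rng.standard_normal(10_000)        # interim z-statistics under H0

        # Conditional rejection probability of the final pooled z-test for every
        # candidate n2 (rows: simulated interim outcomes, columns: n2 grid).
        crit = (z_a * np.sqrt(n1 + n2_grid) - np.sqrt(n1) * z1[:, None]) / np.sqrt(n2_grid)
        cond_err = norm.sf(crit)

        fixed = norm.sf((z_a * np.sqrt(2 * n1) - np.sqrt(n1) * z1) / np.sqrt(n1))
        print("pre-planned design, type 1 error :", fixed.mean())      # ~0.025
        print("worst-case adaptive, type 1 error:", cond_err.max(axis=1).mean())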

  14. Magnetic constraints on early lunar evolution revisited: Limits on accuracy imposed by methods of paleointensity measurements

    NASA Technical Reports Server (NTRS)

    Banerjee, S. K.

    1984-01-01

    It is impossible to carry out conventional paleointensity experiments on lunar samples, since the required repeated heating and cooling to 770 C causes chemical, physical, or microstructural changes. Non-thermal methods of paleointensity determination have therefore been sought: the two anhysteretic remanent magnetization (ARM) methods, and the saturation isothermal remanent magnetization (IRMS) method. Experimental errors inherent in these alternative approaches have been investigated to estimate the accuracy limits on the calculated paleointensities. Results are presented in this report.

  15. Temporal Correlations and Neural Spike Train Entropy

    NASA Astrophysics Data System (ADS)

    Schultz, Simon R.; Panzeri, Stefano

    2001-06-01

    Sampling considerations limit the experimental conditions under which information theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. When applied to recordings from complex cells of the monkey primary visual cortex, the method yields information estimates with lower rms error than a ``brute force'' approach.

  16. Characterizing Protease Specificity: How Many Substrates Do We Need?

    PubMed Central

    Schauperl, Michael; Fuchs, Julian E.; Waldner, Birgit J.; Huber, Roland G.; Kramer, Christian; Liedl, Klaus R.

    2015-01-01

    Calculation of cleavage entropies makes it possible to quantify, map, and compare protease substrate specificity with an information-entropy-based approach. The metric intrinsically depends on the number of experimentally determined substrates (data points). A statistical analysis of its numerical stability is therefore crucial for estimating the systematic error made when specificity is estimated from a limited number of substrates. In this contribution, we show the mathematical basis for estimating the uncertainty in cleavage entropies. Sets of cleavage entropies are calculated using experimental cleavage data and modeled extreme cases. By analyzing the underlying mathematics and applying statistical tools, a linear dependence of the metric with respect to 1/n was found. This allows us to extrapolate the values to an infinite number of samples and to estimate the errors, as sketched below. Analyzing the errors, a minimum number of 30 substrates was found to be necessary to characterize substrate specificity, in terms of amino acid variability, for a protease (S4-S4’) with an uncertainty of 5 percent. Therefore, we encourage experimental researchers in the protease field to record specificity profiles of novel proteases aiming to identify at least 30 peptide substrates of maximum sequence diversity. We expect a full characterization of protease specificity to help rationalize the biological functions of proteases and to assist rational drug design. PMID:26559682
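
    The extrapolation step can be sketched numerically as follows (a toy example, with a hypothetical 20-letter preference distribution standing in for real substrate data): subsampled entropies are averaged for several sample sizes n, fitted linearly in 1/n, and extrapolated to the intercept at n -> infinity.

        import numpy as np

        rng = np.random.default_rng(3)
        true_p = rng.dirichlet(np.ones(20) * 0.5)     # assumed position preferences

        def entropy(sample):
            """Normalised Shannon entropy of amino-acid counts in a sample."""
            p = np.bincount(sample, minlength=20) / len(sample)
            p = p[p > 0]
            return -np.sum(p * np.log2(p)) / np.log2(20)

        ns = np.array([10, 20, 30, 50, 80, 120, 200])
        s_hat = np.array([np.mean([entropy(rng.choice(20, size=n, p=true_p))
                                   for _ in range(300)]) for n in ns])

        slope, intercept = np.polyfit(1.0 / ns, s_hat, 1)   # linear in 1/n
        true_s = -np.sum(true_p * np.log2(true_p)) / np.log2(20)
        print(f"extrapolated entropy (n -> inf) : {intercept:.3f}")
        print(f"entropy of the true distribution: {true_s:.3f}")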

  17. Research on error control and compensation in magnetorheological finishing.

    PubMed

    Dai, Yifan; Hu, Hao; Peng, Xiaoqiang; Wang, Jianmin; Shi, Feng

    2011-07-01

    Although magnetorheological finishing (MRF) is a deterministic finishing technology, the machining results always fall short of the simulated precision in the actual process, and the precision requirements cannot be met in a single treatment but only after several iterations. We investigate the reasons for this problem through simulations and experiments. By controlling and compensating for the chief errors in the manufacturing procedure, such as the removal function calculation error, the positioning error of the removal function, and the dynamic performance limitation of the CNC machine, the residual error convergence ratio (ratio of figure error before and after processing) in a single run is markedly increased, and higher figure precision is achieved. Finally, an improved technical process is presented based on this research, and a verification experiment was carried out on the experimental device we developed. The part is a circular plane mirror of fused silica, and the surface figure error was improved from the initial λ/5 [peak-to-valley (PV), λ=632.8 nm], λ/30 [root-mean-square (rms)] to the final λ/40 (PV), λ/330 (rms) in a single 4.4 min iteration. The results show that a higher convergence ratio and processing precision can be obtained by adopting error control and compensation techniques in MRF.

  18. Extension of sonic anemometry to high subsonic Mach number flows

    NASA Astrophysics Data System (ADS)

    Otero, R.; Lowe, K. T.; Ng, W. F.

    2017-03-01

    In the literature, the application of sonic anemometry has been limited to low subsonic Mach number, near-incompressible flow conditions. To the best of the authors' knowledge, this paper represents the first time a sonic anemometry approach has been used to characterize flow velocity beyond Mach 0.3. Using a high speed jet, flow velocity was measured with a modified sonic anemometry technique in flow conditions up to Mach 0.83. A numerical study was conducted to identify the effects of microphone placement on the accuracy of the measured velocity. Based on the estimated error due strictly to uncertainty in the acoustic time of flight, a random error of +/- 4 m s-1 was identified for the configuration used in this experiment. Comparison with measurements from a Pitot probe indicated a velocity RMS error of +/- 9 m s-1. The discrepancy is attributed to a systematic error which may be calibrated out in future work. Overall, the experimental results from this preliminary study support the use of acoustics for high subsonic flow characterization.
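
    The underlying estimator is the classical reciprocal time-of-flight relation, v = (L/2)(1/t_down - 1/t_up), which is exact along the acoustic path at any subsonic speed. The sketch below propagates an assumed timing jitter through this relation; the path length and jitter value are illustrative, not the authors' configuration. In this toy set-up the fast downstream path dominates the sensitivity, so the random error grows with Mach number.

        import numpy as np

        L, c = 0.10, 343.0            # path length (m) and sound speed (m/s), assumed

        def tof_velocity(v, dt_jitter, rng):
            """Along-path flow speed recovered from up/downstream times of flight,
            with Gaussian jitter added to each time to mimic timing error."""
            t_down = L / (c + v) + rng.normal(0.0, dt_jitter)  # with the flow
            t_up = L / (c - v) + rng.normal(0.0, dt_jitter)    # against the flow
            return 0.5 * L * (1.0 / t_down - 1.0 / t_up)

        rng = np.random.default_rng(4)
        for mach in (0.1, 0.3, 0.6, 0.83):
            v = mach * c
            est = [tof_velocity(v, 1e-6, rng) for _ in range(5000)]
            print(f"M={mach:.2f}  true v={v:6.1f}  recovered={np.mean(est):6.1f}"
                  f" +/- {np.std(est):4.1f} m/s")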

  19. X-ray natural widths, level widths and Coster-Kronig transition probabilities

    NASA Astrophysics Data System (ADS)

    Papp, T.; Campbell, J. L.; Varga, D.

    1997-01-01

    A critical review is given for the K-N7 atomic level widths. The experimental level widths were collected from x-ray photoelectron spectroscopy (XPS), x-ray emission spectroscopy (XES), x-ray spectra fluoresced by synchrotron radiation, and photoelectrons from x-ray absorption (PAX). There are only limited atomic number ranges for a few atomic levels where data are available from more than one source. Generally the experimental level widths have large scatter compared to the reported error bars. The experimental data are compared with the recent tabulation of Perkins et al. and of Ohno et al. Ohno et al. performed a many body approach calculation for limited atomic number ranges and have obtained reasonable agreement with the experimental data. Perkins et al. presented a tabulation covering the K-Q1 shells of all atoms, based on extensions of the Scofield calculations for radiative rates and extensions of the Chen calculations for non-radiative rates. The experimental data are in disagreement with this tabulation, in excess of a factor of two in some cases. A short introduction to the experimental Coster-Kronig transition probabilities is presented. It is our opinion that the different experimental approaches result in systematically different experimental data.

  20. A continuous optimization approach for inferring parameters in mathematical models of regulatory networks.

    PubMed

    Deng, Zhimin; Tian, Tianhai

    2014-07-29

    Advances in systems biology have produced a large number of sophisticated mathematical models for describing the dynamics of complex biological systems. One of the major steps in developing such models is to estimate unknown parameters from experimentally measured quantities. However, experimental conditions limit the amount of data available for modelling, and the number of unknown parameters may exceed the number of observations. This imbalance between data and parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. The approach first uses spline interpolation to generate continuous functions of the system dynamics, together with their first and second order derivatives. The expanded dataset is then used to infer unknown model parameters under various continuous optimization criteria, based on the error of the simulation only, the error of both the simulation and the first derivative, or the error of the simulation together with the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed approach. Compared with the corresponding discrete criteria, which use experimental data at the measurement time points only, numerical results for the ERK kinase activation module show that the continuous absolute-error criteria using both the function and higher-order derivatives generate estimates with better accuracy. This result is supported by the second and third case studies, on the G1/S transition network and the MAP kinase pathway, suggesting that the continuous criteria lead to more accurate estimates than their discrete counterparts. We also study the robustness of the three models to examine the reliability of the estimates. Simulation results show that models with parameters estimated using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
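
    A minimal sketch of the spline-based continuous criterion, for an assumed one-variable toy module x' = k1 - k2*x rather than any of the paper's case studies: noisy observations are interpolated with a cubic spline, and the parameters are chosen so that the spline's first derivative satisfies the model equation along a dense grid.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.optimize import least_squares

        k_true = np.array([2.0, 0.5])                    # assumed 'true' parameters
        t_obs = np.linspace(0.0, 8.0, 9)                 # sparse measurement times
        x_exact = (k_true[0] / k_true[1]) * (1.0 - np.exp(-k_true[1] * t_obs))
        rng = np.random.default_rng(5)
        x_obs = x_exact + rng.normal(0.0, 0.05, t_obs.size)

        spline = CubicSpline(t_obs, x_obs)               # continuous data surrogate
        t_dense = np.linspace(0.0, 8.0, 200)             # the 'expanded dataset'
        x_s, dx_s = spline(t_dense), spline(t_dense, 1)  # values and first derivative

        def residual(k):
            # Continuous criterion: the ODE x' = k1 - k2*x should hold along
            # the whole spline, not just at the measurement time points.
            return dx_s - (k[0] - k[1] * x_s)

        fit = least_squares(residual, x0=np.array([1.0, 1.0]))
        print("true k:", k_true, " estimated k:", np.round(fit.x, 3))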

  1. A video multitracking system for quantification of individual behavior in a large fish shoal: advantages and limits.

    PubMed

    Delcourt, Johann; Becco, Christophe; Vandewalle, Nicolas; Poncin, Pascal

    2009-02-01

    The capability of a new multitracking system to track a large number of unmarked fish (up to 100) is evaluated. This system extrapolates a trajectory from each individual and analyzes recorded sequences that are several minutes long. This system is very efficient in statistical individual tracking, where the individual's identity is important for a short period of time in comparison with the duration of the track. Individual identification is typically greater than 99%. Identification is largely efficient (more than 99%) when the fish images do not cross the image of a neighbor fish. When the images of two fish merge (occlusion), we consider that the spot on the screen has a double identity. Consequently, there are no identification errors during occlusions, even though the measurement of the positions of each individual is imprecise. When the images of these two merged fish separate (separation), individual identification errors are more frequent, but their effect is very low in statistical individual tracking. On the other hand, in complete individual tracking, where individual fish identity is important for the entire trajectory, each identification error invalidates the results. In such cases, the experimenter must observe whether the program assigns the correct identification, and, when an error is made, must edit the results. This work is not too costly in time because it is limited to the separation events, accounting for fewer than 0.1% of individual identifications. Consequently, in both statistical and rigorous individual tracking, this system allows the experimenter to gain time by measuring the individual position automatically. It can also analyze the structural and dynamic properties of an animal group with a very large sample, with precision and sampling that are impossible to obtain with manual measures.

  2. Improving the Thermal, Radial and Temporal Accuracy of the Analytical Ultracentrifuge through External References

    PubMed Central

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying

    2013-01-01

    Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724

  3. Improving the thermal, radial, and temporal accuracy of the analytical ultracentrifuge through external references.

    PubMed

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying

    2013-09-01

    Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.

  4. Research on Measurement Accuracy of Laser Tracking System Based on Spherical Mirror with Rotation Errors of Gimbal Mount Axes

    NASA Astrophysics Data System (ADS)

    Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang

    2018-02-01

    This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, we can easily extract the laser ranging measurement error caused by rotation errors of the gimbal mount axes from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and perpendicular to it are driven by error motions of the gimbal mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes is recorded in the readings of a laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm when the radial and axial error motions were within ±10 μm. The experimental method simplified the experimental procedure, and the spherical mirror reduced the influence of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.

  5. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    PubMed

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

    If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed; (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof, and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
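
    A toy version of the criterion-based selection in one dimension, assuming a unit-variance squared-exponential covariance and noise-free simple kriging (both assumptions introduced here): candidate points are added greedily so as to minimize the integrated mean square prediction error over a grid. With a smooth covariance the selected designs spread out over the space, illustrating the space-filling limiting cases discussed above.

        import numpy as np

        def cov(a, b, ell=0.3):
            """Squared-exponential covariance: a smooth prior response surface."""
            d = a[:, None] - b[None, :]
            return np.exp(-0.5 * (d / ell) ** 2)

        grid = np.linspace(0.0, 1.0, 201)      # 1-D stand-in for descriptor space

        def imse(design):
            """Integrated mean square prediction error of the BLUP over the grid."""
            K = cov(design, design) + 1e-9 * np.eye(len(design))
            Kxg = cov(design, grid)
            var = 1.0 - np.sum(Kxg * np.linalg.solve(K, Kxg), axis=0)
            return var.mean()

        design = np.array([0.5])
        for _ in range(4):                     # greedy IMSE minimisation
            best = min(grid, key=lambda x: imse(np.append(design, x)))
            design = np.sort(np.append(design, best))
        print("greedy IMSE design:", np.round(design, 3), " IMSE:", round(imse(design), 4))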

  6. Correlation methods in optical metrology with state-of-the-art x-ray mirrors

    NASA Astrophysics Data System (ADS)

    Yashchuk, Valeriy V.; Centers, Gary; Gevorkyan, Gevork S.; Lacey, Ian; Smith, Brian V.

    2018-01-01

    The development of fully coherent free electron lasers and diffraction limited storage ring x-ray sources has brought into focus the need for higher performing x-ray optics with unprecedented tolerances for surface slope and height errors and roughness. For example, the proposed beamlines for the future upgraded Advanced Light Source, ALS-U, require optical elements characterized by a residual slope error of <100 nrad (root-mean-square) and a height error of <1-2 nm (peak-to-valley), for optics with a length of up to one meter. However, the current performance of x-ray optical fabrication and metrology generally falls short of these requirements. The major limitation comes from the lack of reliable and efficient surface metrology with the required accuracy and a reasonably high measurement rate, suitable for integration into modern deterministic surface figuring processes. The major problems of current surface metrology relate to inherent instrumental temporal drifts, systematic errors, and/or an unacceptably high cost, as in the case of interferometry with computer-generated holograms as a reference. In this paper, we discuss experimental methods and approaches based on correlation analysis for the acquisition and processing of metrology data, developed at the ALS X-Ray Optical Laboratory (XROL). Using the example of surface topography measurements of a state-of-the-art x-ray mirror performed at the XROL, we demonstrate the efficiency of combining the developed experimental correlation methods with the advanced optimal scanning strategy (AOSS) technique. This allows a significant improvement in the accuracy and capacity of the measurements via suppression of instrumental low frequency noise, temporal drift, and systematic error in a single measurement run. Practically speaking, implementation of the AOSS technique increases the measurement accuracy, as well as the capacity of ex situ metrology, by a factor of about four. The developed method is general and applicable to a broad spectrum of high accuracy measurements.

  7. On the dipole approximation with error estimates

    NASA Astrophysics Data System (ADS)

    Boßmann, Lea; Grummt, Robert; Kolb, Martin

    2018-01-01

    The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.
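
    Schematically, and with notation introduced here rather than taken from the paper, the approximation keeps only the leading term of the field's spatial phase across the atom:

        e^{\, i \mathbf{k}\cdot\mathbf{x}} \;=\; 1 \;+\; i\,\mathbf{k}\cdot\mathbf{x}
        \;+\; O\!\big((\mathbf{k}\cdot\mathbf{x})^{2}\big),
        \qquad
        |\mathbf{k}\cdot\mathbf{x}| \;\sim\; \frac{2\pi a}{\lambda} \;\ll\; 1,

    so that the external field is evaluated at the position of the atom rather than across it; the error estimates referred to above control the neglected remainder in the limit of infinite wavelength relative to the atomic length scale a.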

  8. Analysis of imperfections in the coherent optical excitation of single atoms to Rydberg states

    NASA Astrophysics Data System (ADS)

    de Léséleuc, Sylvain; Barredo, Daniel; Lienhard, Vincent; Browaeys, Antoine; Lahaye, Thierry

    2018-05-01

    We study experimentally various physical limitations and technical imperfections that lead to damping and finite contrast of optically driven Rabi oscillations between ground and Rydberg states of a single atom. Finite contrast is due to preparation and detection errors, and we show how to model and measure them accurately. Part of these errors originates from the finite lifetime of Rydberg states, and we observe its n^3 scaling with the principal quantum number n. To explain the damping of Rabi oscillations, we use simple numerical models taking into account independently measured experimental imperfections and show that the observed damping actually results from the accumulation of several small effects, each at the level of a few percent. We discuss prospects for improving the coherence of ground-Rydberg Rabi oscillations in view of applications in quantum simulation and quantum information processing with arrays of single Rydberg atoms.
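
    As an illustration of how such oscillations are commonly summarised (a hedged sketch, not the authors' numerical models; all parameter values are assumed), one can fit a phenomenological damped oscillation in which the contrast parameter absorbs preparation and detection errors:

        import numpy as np
        from scipy.optimize import curve_fit

        def rabi(t, contrast, gamma, omega, offset):
            """Damped Rabi oscillation with finite contrast and damping rate gamma."""
            return offset + 0.5 * contrast * np.exp(-gamma * t) * np.cos(omega * t)

        rng = np.random.default_rng(6)
        t = np.linspace(0.0, 10e-6, 200)                    # s
        p = rabi(t, 0.9, 8e4, 2*np.pi*1e6, 0.5) + rng.normal(0.0, 0.02, t.size)

        popt, _ = curve_fit(rabi, t, p, p0=(1.0, 1e5, 2*np.pi*0.98e6, 0.5))
        print(f"contrast = {popt[0]:.2f}, damping rate = {popt[1]:.2e} /s")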

  9. Study on verifying the angle measurement performance of the rotary-laser system

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Ren, Yongjie; Lin, Jiarui; Yin, Shibin; Zhu, Jigui

    2018-04-01

    An angle verification method to verify the angle measurement performance of the rotary-laser system was developed. Angle measurement performance has a great impact on measuring accuracy. Although there is some previous research on the verification of angle measuring uncertainty for the rotary-laser system, there are still some limitations. High-precision reference angles are used in this method, and an integrated verification platform is set up to evaluate the performance of the system. This paper also identifies the error that has the biggest influence on the verification system. Some errors of the verification system are avoided via the experimental method, and some are compensated for through a computational formula and curve fitting. Experimental results show that the angle measurement performance meets the requirement for coordinate measurement. The verification platform can efficiently evaluate the uncertainty of angle measurement for the rotary-laser system.

  10. Free-space optical communications using orbital-angular-momentum multiplexing combined with MIMO-based spatial multiplexing.

    PubMed

    Ren, Yongxiong; Wang, Zhe; Xie, Guodong; Li, Long; Cao, Yinwen; Liu, Cong; Liao, Peicheng; Yan, Yan; Ahmed, Nisar; Zhao, Zhe; Willner, Asher; Ashrafi, Nima; Ashrafi, Solyman; Linquist, Roger D; Bock, Robert; Tur, Moshe; Molisch, Andreas F; Willner, Alan E

    2015-09-15

    We explore the potential of combining the advantages of multiple-input multiple-output (MIMO)-based spatial multiplexing with those of orbital angular momentum (OAM) multiplexing to increase the capacity of free-space optical (FSO) communications. We experimentally demonstrate an 80 Gbit/s FSO system with a 2×2 aperture architecture, in which each transmitter aperture contains two multiplexed data-carrying OAM modes. Inter-channel crosstalk effects are minimized by the OAM beams' inherent orthogonality and by the use of 4×4 MIMO signal processing. Our experimental results show that the bit-error rates can reach below the forward error correction limit of 3.8×10^-3 and the power penalties are less than 3.6 dB for all channels after MIMO processing. This indicates that OAM and MIMO-based spatial multiplexing could be simultaneously utilized, thereby providing the potential to enhance system performance.

  11. Constrained motion estimation-based error resilient coding for HEVC

    NASA Astrophysics Data System (ADS)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels may introduce packet losses and bit errors into the videos transmitted through them, causing severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove inter-frame dependency and thus improve coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate error propagation to subsequent frames. This is achieved by cutting off MV dependencies and limiting the block regions that are predicted by the temporal motion vector. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase in decoded video quality (PSNR) of up to 1.310 dB, and on average 0.762 dB, can be achieved compared to the reference HEVC.

  12. Effects of bathymetric lidar errors on flow properties predicted with a multi-dimensional hydraulic model

    Treesearch

    J. McKean; D. Tonina; C. Bohn; C. W. Wright

    2014-01-01

    New remote sensing technologies and improved computer performance now allow numerical flow modeling over large stream domains. However, there has been limited testing of whether channel topography can be remotely mapped with the accuracy necessary for such modeling. We assessed the ability of the Experimental Advanced Airborne Research Lidar to support a multi-dimensional...

  13. Integrated source and channel encoded digital communication system design study

    NASA Technical Reports Server (NTRS)

    Huth, G. K.; Trumpis, B. D.; Udalov, S.

    1975-01-01

    Various aspects of space shuttle communication systems were studied. The following major areas were investigated: burst error correction for shuttle command channels; performance optimization and design considerations for Costas receivers with and without bandpass limiting; experimental techniques for measuring low level spectral components of microwave signals; and potential modulation and coding techniques for the Ku-band return link. Results are presented.

  14. Generalized energy measurements and modified transient quantum fluctuation theorems

    NASA Astrophysics Data System (ADS)

    Watanabe, Gentaro; Venkatesh, B. Prasanna; Talkner, Peter

    2014-05-01

    Determining the work which is supplied to a system by an external agent provides a crucial step in any experimental realization of transient fluctuation relations. This, however, poses a problem for quantum systems, where the standard procedure requires the projective measurement of energy at the beginning and the end of the protocol. Unfortunately, projective measurements, which are preferable from the point of view of theory, seem to be difficult to implement experimentally. We demonstrate that, when using a particular type of generalized energy measurements, the resulting work statistics is simply related to that of projective measurements. This relation between the two work statistics entails the existence of modified transient fluctuation relations. The modifications are exclusively determined by the errors incurred in the generalized energy measurements. They are universal in the sense that they do not depend on the force protocol. Particularly simple expressions for the modified Crooks relation and Jarzynski equality are found for Gaussian energy measurements. These can be obtained by a sequence of sufficiently many generalized measurements which need not be Gaussian. In accordance with the central limit theorem, this leads to an effective error reduction in the individual measurements and even yields a projective measurement in the limit of infinite repetitions.

  15. A Probabilistic Palimpsest Model of Visual Short-term Memory

    PubMed Central

    Matthey, Loic; Bays, Paul M.; Dayan, Peter

    2015-01-01

    Working memory plays a key role in cognition, and yet its mechanisms remain much debated. Human performance on memory tasks is severely limited; however, the two major classes of theory explaining the limits leave open questions about key issues such as how multiple simultaneously-represented items can be distinguished. We propose a palimpsest model, with the occurrent activity of a single population of neurons coding for several multi-featured items. Using a probabilistic approach to storage and recall, we show how this model can account for many qualitative aspects of existing experimental data. In our account, the underlying nature of a memory item depends entirely on the characteristics of the population representation, and we provide analytical and numerical insights into critical issues such as multiplicity and binding. We consider representations in which information about individual feature values is partially separate from the information about binding that creates single items out of multiple features. An appropriate balance between these two types of information is required to capture fully the different types of error seen in human experimental data. Our model provides the first principled account of misbinding errors. We also suggest a specific set of stimuli designed to elucidate the representations that subjects actually employ. PMID:25611204

  16. A probabilistic palimpsest model of visual short-term memory.

    PubMed

    Matthey, Loic; Bays, Paul M; Dayan, Peter

    2015-01-01

    Working memory plays a key role in cognition, and yet its mechanisms remain much debated. Human performance on memory tasks is severely limited; however, the two major classes of theory explaining the limits leave open questions about key issues such as how multiple simultaneously-represented items can be distinguished. We propose a palimpsest model, with the occurrent activity of a single population of neurons coding for several multi-featured items. Using a probabilistic approach to storage and recall, we show how this model can account for many qualitative aspects of existing experimental data. In our account, the underlying nature of a memory item depends entirely on the characteristics of the population representation, and we provide analytical and numerical insights into critical issues such as multiplicity and binding. We consider representations in which information about individual feature values is partially separate from the information about binding that creates single items out of multiple features. An appropriate balance between these two types of information is required to capture fully the different types of error seen in human experimental data. Our model provides the first principled account of misbinding errors. We also suggest a specific set of stimuli designed to elucidate the representations that subjects actually employ.

  17. Experimental investigation of observation error in anuran call surveys

    USGS Publications Warehouse

    McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.

    2010-01-01

    Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.

  18. Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation

    PubMed Central

    Barbero, Sergio; Thibos, Larry N.

    2007-01-01

    Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. In practice, however, experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is demonstrated that reducing photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
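
    For orientation: under one common sign convention and a uniform-intensity approximation (both assumptions here), the TIE reduces to a Poisson equation, laplacian(phi) = -(k/I0) dI/dz, which can be inverted spectrally. The 1/k^2 amplification of low spatial frequencies in this inversion is one reason photodetection noise is so damaging and why filtering it helps. The round-trip sketch below uses synthetic, noise-free data; the wavelength and grid are assumed.

        import numpy as np

        def laplacian(f, dx):
            """Spectral Laplacian on a periodic grid."""
            fy = 2 * np.pi * np.fft.fftfreq(f.shape[0], d=dx)
            fx = 2 * np.pi * np.fft.fftfreq(f.shape[1], d=dx)
            k2 = fy[:, None] ** 2 + fx[None, :] ** 2
            return np.real(np.fft.ifft2(-k2 * np.fft.fft2(f)))

        def tie_phase(dIdz, I0, k, dx):
            """Invert laplacian(phi) = -(k/I0)*dI/dz with an FFT Poisson solver."""
            fy = 2 * np.pi * np.fft.fftfreq(dIdz.shape[0], d=dx)
            fx = 2 * np.pi * np.fft.fftfreq(dIdz.shape[1], d=dx)
            k2 = fy[:, None] ** 2 + fx[None, :] ** 2
            k2[0, 0] = 1.0                        # guard the zero mode
            phi_hat = -np.fft.fft2(-(k / I0) * dIdz) / k2
            phi_hat[0, 0] = 0.0                   # mean phase is undetermined
            return np.real(np.fft.ifft2(phi_hat))

        lam, dx, n = 633e-9, 5e-6, 128            # wavelength, pixel, grid (assumed)
        k = 2 * np.pi / lam
        y, x = np.mgrid[0:n, 0:n] * dx
        phi = 2.0 * np.sin(2*np.pi*x/(n*dx)) * np.sin(2*np.pi*y/(n*dx))
        dIdz = -(1.0 / k) * laplacian(phi, dx)    # synthetic axial derivative, I0 = 1
        err = tie_phase(dIdz, 1.0, k, dx) - phi
        print("rms round-trip error:", np.sqrt(np.mean(err ** 2)))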

  19. Peripheral refractive correction and automated perimetric profiles.

    PubMed

    Wild, J M; Wood, J M; Crews, S J

    1988-06-01

    The effect of peripheral refractive error correction on the automated perimetric sensitivity profile was investigated in a sample of 10 clinically normal, experienced observers. Peripheral refractive error was determined at eccentricities of 0 degrees, 20 degrees and 40 degrees along the temporal meridian of the right eye using the Canon Autoref R-1, an infra-red automated refractor, under the parametric conditions of the Octopus automated perimeter. Perimetric sensitivity was then measured at these eccentricities (stimulus sizes 0 and III), with and without the appropriate peripheral refractive correction, using the Octopus 201 automated perimeter. Within the measurement limits of the experimental procedures employed, perimetric sensitivity was not influenced by peripheral refractive correction.

  20. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.

    PubMed

    Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2017-06-30

    Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
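
    A compact sketch of the protocol described above, with a synthetic regression set standing in for the curated QSAR data (the data generator, model, and error ratios are assumptions): the activities of a chosen fraction of compounds are randomized, models are evaluated by fivefold cross-validation, and the overlap between the worst-predicted compounds and the corrupted ones is reported.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import KFold, cross_val_predict

        X, y = make_regression(n_samples=400, n_features=50, noise=5.0, random_state=0)
        rng = np.random.default_rng(0)

        for ratio in (0.0, 0.1, 0.2, 0.4):
            y_mod = y.copy()
            bad = rng.choice(len(y), size=int(ratio * len(y)), replace=False)
            y_mod[bad] = rng.permutation(y_mod[bad])     # simulated experimental errors
            pred = cross_val_predict(
                RandomForestRegressor(n_estimators=100, random_state=0),
                X, y_mod, cv=KFold(5, shuffle=True, random_state=0))
            r2 = 1 - np.sum((y_mod - pred) ** 2) / np.sum((y_mod - y_mod.mean()) ** 2)
            err = np.abs(y_mod - pred)
            hit = (np.isin(np.argsort(err)[-len(bad):], bad).mean()
                   if len(bad) else 0.0)
            print(f"error ratio {ratio:.1f}: CV R2 = {r2:.2f}, "
                  f"corrupted among worst-predicted = {hit:.2f}")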

  1. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do

    PubMed Central

    2017-01-01

    Numerous chemical data sets have become available for quantitative structure–activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting. PMID:28691113

  2. Solving the electron and electron-nuclear Schroedinger equations for the excited states of helium atom with the free iterative-complement-interaction method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroyuki; Hijikata, Yuh; Nakatsuji, Hiroshi

    2008-04-21

    Very accurate variational calculations with the free iterative-complement-interaction (ICI) method for solving the Schroedinger equation were performed for the 1sNs singlet and triplet excited states of the helium atom up to N=24. This is the first extensive application of the free ICI method to the calculation of excited states to very high levels. We performed the calculations with the fixed-nucleus Hamiltonian and the moving-nucleus Hamiltonian. The latter case is the Schroedinger equation for the electron-nuclear Hamiltonian and includes the quantum effect of nuclear motion. This solution corresponds to the nonrelativistic limit and reproduced the experimental values up to five decimal figures. The small differences from the experimental values are not theoretical errors but represent physical effects that are not included in the present calculations, such as the relativistic effect, the quantum electrodynamic effect, and even experimental errors. The present calculations constitute a small step toward an accurately predictive quantum chemistry.

  3. Limits of quantitation - Yet another suggestion

    NASA Astrophysics Data System (ADS)

    Carlson, Jill; Wysoczanski, Artur; Voigtman, Edward

    2014-06-01

    The work presented herein suggests that the limit of quantitation concept may be rendered substantially less ambiguous and ultimately more useful as a figure of merit by basing it upon the significant figure and relative measurement error ideas due to Coleman, Auses and Gram, coupled with the correct instantiation of Currie's detection limit methodology. Simple theoretical results are presented for a linear, univariate chemical measurement system with homoscedastic Gaussian noise, and these are tested against both Monte Carlo computer simulations and laser-excited molecular fluorescence experimental results. Good agreement among experiment, theory and simulation is obtained and an easy extension to linearly heteroscedastic Gaussian noise is also outlined.

  4. A Framework for Image-Based Modeling of Acute Myocardial Ischemia Using Intramurally Recorded Extracellular Potentials.

    PubMed

    Burton, Brett M; Aras, Kedar K; Good, Wilson W; Tate, Jess D; Zenger, Brian; MacLeod, Rob S

    2018-05-21

    The biophysical basis for electrocardiographic evaluation of myocardial ischemia stems from the notion that ischemic tissues develop, with relative uniformity, along the endocardial aspects of the heart. These injured regions of subendocardial tissue give rise to intramural currents that lead to ST segment deflections within electrocardiogram (ECG) recordings. The concept of subendocardial ischemic regions is often used in clinical practice, providing a simple and intuitive description of ischemic injury; however, such a model grossly oversimplifies the presentation of ischemic disease, inadvertently leading to errors in ECG-based diagnoses. Furthermore, recent experimental studies have brought the subendocardial ischemia paradigm into question, suggesting instead a more distributed pattern of tissue injury. These findings come from experiments and so have both the impact and the limitations of measurements from living organisms. Computer models have often been employed to overcome the constraints of experimental approaches and have a robust history in cardiac simulation. To this end, we have developed a computational simulation framework aimed at elucidating the effects of ischemia on measurable cardiac potentials. To validate our framework, we simulated, visualized, and analyzed 226 experimentally derived acute myocardial ischemic events. Simulation outcomes agreed both qualitatively (feature comparison) and quantitatively (correlation, average error, and significance) with experimentally obtained epicardial measurements, particularly under conditions of elevated ischemic stress. Our simulation framework introduces a novel approach to incorporating subject-specific geometric models and experimental results that are highly resolved in space and time into computational models. We propose this framework as a means to advance the understanding of the underlying mechanisms of ischemic disease while simultaneously putting in place the computational infrastructure necessary to study and improve ischemia models aimed at reducing diagnostic errors in the clinic.

  5. Robust video super-resolution with registration efficiency adaptation

    NASA Astrophysics Data System (ADS)

    Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen

    2010-07-01

    Super-resolution (SR) is a technique to construct a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of conventional super-resolution techniques, when applied to video sequences, relies strongly on the efficiency of the motion alignment achieved by image registration. Unfortunately, this efficiency is limited by the motion complexity in the video and the capability of the adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially varying registration efficiency. The reliability of each reference pixel is measured by the corresponding registration error and incorporated into the optimization objective function of the SR reconstruction. This makes the SR reconstruction highly immune to registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism for assigning weights according to registration errors. The proposed super-resolution scheme has been tested with various video sequences, and the experimental results clearly demonstrate the effectiveness of the proposed method.
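
    The adaptation can be summarised schematically as a weighted data-fidelity term; the symbols below (D downsampling, H blur, F_i the estimated warp of frame i, e_i its registration error, rho a prior) are notation introduced here for illustration, not taken from the paper:

        \hat{X} \;=\; \arg\min_{X} \; \sum_{i} w(e_i)\,\big\| Y_i - D\,H\,F_i X \big\|_2^{2}
        \;+\; \lambda\,\rho(X),
        \qquad w(\cdot) \ \text{decreasing},

    so that frames or regions with large registration error e_i act as outliers and contribute little to the reconstructed HR frame X.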

  6. Effects of true density, compacted mass, compression speed, and punch deformation on the mean yield pressure.

    PubMed

    Gabaude, C M; Guillot, M; Gautier, J C; Saudemon, P; Chulia, D

    1999-07-01

    Compressibility properties of pharmaceutical materials are widely characterized by measuring the volume reduction of a powder column under pressure. Experimental data are commonly analyzed using the Heckel model from which powder deformation mechanisms are determined using mean yield pressure (Py). Several studies from the literature have shown the effects of operating conditions on the determination of Py and have pointed out the limitations of this model. The Heckel model requires true density and compacted mass values to determine Py from force-displacement data. It is likely that experimental errors will be introduced when measuring the true density and compacted mass. This study investigates the effects of true density and compacted mass on Py. Materials having different particle deformation mechanisms are studied. Punch displacement and applied pressure are measured for each material at two compression speeds. For each material, three different true density and compacted mass values are utilized to evaluate their effect on Py. The calculated variation of Py reaches 20%. This study demonstrates that the errors in measuring true density and compacted mass have a greater effect on Py than the errors incurred from not correcting the displacement measurements due to punch elasticity.
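
    A small sketch of the sensitivity being studied, using synthetic in-die densification data (all numbers assumed): Py is the reciprocal slope of the Heckel plot ln(1/(1-D)) versus pressure P, with relative density D = mass/(true density x volume), so an error in true density or compacted mass shifts D and hence Py.

        import numpy as np

        P = np.linspace(20.0, 200.0, 10)                 # applied pressure, MPa

        def heckel_py(mass, rho_true, volume):
            """Mean yield pressure Py = 1/K from ln(1/(1-D)) = K*P + A."""
            D = mass / (rho_true * volume)               # relative density in die
            K, A = np.polyfit(P, np.log(1.0 / (1.0 - D)), 1)
            return 1.0 / K

        mass, rho = 0.500, 1.50                          # g and g/cm3, assumed
        vol = mass / (rho * (0.70 + 0.0012 * P))         # synthetic in-die volumes

        print(f"nominal Py: {heckel_py(mass, rho, vol):6.1f} MPa")
        for d in (-0.02, 0.02):                          # +/-2% error in true density
            py = heckel_py(mass, rho * (1.0 + d), vol)
            print(f"true density off by {100*d:+.0f}% -> Py = {py:6.1f} MPa")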

  7. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated as a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms for computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. l1 norms on the data and regularization terms in EIT image reconstruction address both the problem of reconstructing sharp edges and that of dealing with measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially with l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting but also provides high contrast resolution on organ boundaries.

  8. Application of a Laplace transform pair model for high-energy x-ray spectral reconstruction.

    PubMed

    Archer, B R; Almond, P R; Wagner, L K

    1985-01-01

    A Laplace transform pair model, previously shown to accurately reconstruct x-ray spectra at diagnostic energies, has been applied to megavoltage energy beams. The inverse Laplace transforms of 2-, 6-, and 25-MV attenuation curves were evaluated to determine the energy spectra of these beams. The 2-MV data indicate that the model can reliably reconstruct spectra in the low megavoltage range. Experimental limitations in acquiring the 6-MV transmission data demonstrate the sensitivity of the model to systematic experimental error. The 25-MV data result in a physically realistic approximation of the present spectrum.

  9. Evaluation of probe-induced flow distortion of Campbell CSAT3 sonic anemometers by numerical simulation

    NASA Astrophysics Data System (ADS)

    Mauder, M.; Huq, S.; De Roo, F.; Foken, T.; Manhart, M.; Schmid, H. P. E.

    2017-12-01

    The Campbell CSAT3 sonic anemometer is one of the most widely used instruments for eddy-covariance measurement. However, conflicting estimates for the probe-induced flow distortion error of this instrument have been reported recently, and those error estimates range between 3% and 14% for the measurement of vertical velocity fluctuations. This large discrepancy between the different studies can probably be attributed to the different experimental approaches applied. In order to overcome the limitations of both field intercomparison experiments and wind tunnel experiments, we propose a new approach that relies on virtual measurements in a large-eddy simulation (LES) environment. In our experimental set-up, we generate horizontal and vertical velocity fluctuations at frequencies that typically dominate the turbulence spectra of the surface layer. The probe-induced flow distortion error of a CSAT3 is then quantified by this numerical wind tunnel approach while the statistics of the prescribed inflow signal are taken as reference or etalon. The resulting relative error is found to range from 3% to 7% and from 1% to 3% for the standard deviation of the vertical and the horizontal velocity component, respectively, depending on the orientation of the CSAT3 in the flow field. We further demonstrate that these errors are independent of the frequency of fluctuations at the inflow of the simulation. The analytical corrections proposed by Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol, 155, 371-395, 2015) are compared against our simulated results, and we find that they indeed reduce the error by up to three percentage points. However, these corrections fail to reproduce the azimuth-dependence of the error that we observe. Moreover, we investigate the general Reynolds number dependence of the flow distortion error by more detailed idealized simulations.

  10. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
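
    The weights mentioned above are conventionally obtained from information-criterion differences with the standard transformation shown below (weights proportional to exp(-ΔIC/2)); it makes clear how a model whose criterion value is only slightly lower than the others can capture nearly all of the weight. This is the generic formula, not the authors' iterative two-stage estimator.

        import numpy as np

        def averaging_weights(ic_values):
            # Model averaging weights from information criteria (AIC/BIC/KIC-style).
            ic = np.asarray(ic_values, dtype=float)
            delta = ic - ic.min()          # differences from the best model
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # Small IC differences already translate into lopsided weights:
        print(averaging_weights([100.0, 110.0, 112.0]))  # first model takes ~99% of the weight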

  11. IPET and FETR: Experimental Approach for Studying Molecular Structure Dynamics by Cryo-Electron Tomography of a Single-Molecule Structure

    PubMed Central

    Zhang, Lei; Ren, Gang

    2012-01-01

    The dynamic personalities and structural heterogeneity of proteins are essential for proper functioning. Structural determination of dynamic/heterogeneous proteins is limited by conventional approaches of X-ray crystallography and electron microscopy (EM) single-particle reconstruction, which require averaging over thousands to millions of different molecules. Cryo-electron tomography (cryoET) is an approach to determine the three-dimensional (3D) reconstruction of a single, unique biological object, such as a bacterium or cell, by imaging the object from a series of tilt angles. However, conventional reconstruction methods use large whole micrographs and are limited in reconstruction resolution (lower than 20 Å), especially for small and low-symmetry molecules (<400 kDa). In this study, we demonstrated that the adverse effects of image distortion and of measuring tilt-errors (including tilt-axis and tilt-angle errors) both play a major role in limiting the reconstruction resolution. Therefore, we developed a "focused electron tomography reconstruction" (FETR) algorithm to improve the resolution by decreasing the reconstructed image size so that it contains only a single-instance protein. FETR can tolerate certain levels of image distortion and measuring tilt-errors, and can also precisely determine the translational parameters via an iterative refinement process that contains a series of automatically generated dynamic filters and masks. To describe this method, a set of simulated cryoET images was employed; to validate this approach, real experimental images from negative-staining and cryoET were used. Since this approach can obtain the structure of a single-instance molecule/particle, we named it individual-particle electron tomography (IPET); it is a new robust strategy/approach that does not require a pre-given initial model, class averaging of multiple molecules, or an extended ordered lattice, but can tolerate small tilt-errors for high-resolution single "snapshot" molecule structure determination. Thus, FETR/IPET provides a completely new opportunity for single-molecule structure determination, and could be used to study the dynamic character and equilibrium fluctuation of macromolecules. PMID:22291925

  12. DNA Barcoding through Quaternary LDPC Codes

    PubMed Central

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are in demand. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine scale with regard to barcode size (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10^-9 at the expense of a rate of read losses just on the order of 10^-6. PMID:26492348

  13. DNA Barcoding through Quaternary LDPC Codes.

    PubMed

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are in demand. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine scale with regard to barcode size (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10^-9 at the expense of a rate of read losses just on the order of 10^-6.

  14. Ring lens focusing and push-pull tracking scheme for optical disk systems

    NASA Technical Reports Server (NTRS)

    Gerber, R.; Zambuto, J.; Erwin, J. K.; Mansuripur, M.

    1993-01-01

    An experimental comparison of the ring lens and the astigmatic techniques for generating the focus-error signal (FES) in optical disk systems reveals that the ring lens generates an FES more than twice as steep as that produced by the astigmat. Partly due to this large slope and, in part, because of its diffraction-limited behavior, the ring lens scheme exhibits superior performance characteristics. In particular, the undesirable signal known as 'feedthrough' (induced on the FES by track-crossings during the seek operation) is lower by a factor of six than that observed with the astigmatic method. The ring lens is easy to align and has reasonable tolerance for positioning errors.

  15. Correction for specimen movement and rotation errors for in-vivo Optical Projection Tomography

    PubMed Central

    Birk, Udo Jochen; Rieckher, Matthias; Konstantinides, Nikos; Darrell, Alex; Sarasa-Renedo, Ana; Meyer, Heiko; Tavernarakis, Nektarios; Ripoll, Jorge

    2010-01-01

    The application of optical projection tomography to in-vivo experiments is limited by specimen movement during the acquisition. We present a set of mathematical correction methods applied to the acquired data stacks to correct for movement in both directions of the image plane. These methods have been applied to correct experimental data taken from in-vivo optical projection tomography experiments in Caenorhabditis elegans. Successful reconstructions for both fluorescence and white light (absorption) measurements are shown. Since no distinction is made between movement of the animal and movement of the rotation axis, this approach simultaneously removes artifacts due to mechanical drifts and errors in the assumed center of rotation. PMID:21258448

  16. Human-robot cooperative movement training: learning a novel sensory motor transformation during walking with robotic assistance-as-needed.

    PubMed

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-03-28

    A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury.
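
    The update law described above can be illustrated with a toy scalar sketch, assuming illustrative gains and a hypothetical deadband standing in for the error weighting function (this is not the exact controller derived in the paper). Assistance decays geometrically through the forgetting factor and is rebuilt by weighted kinematic error, so it fades to zero once errors fall inside the deadband; choosing the forgetting rate faster than the human learning rate is what makes the behavior "assist-as-needed."

        def assist_update(u, e, f=0.9, g=0.5, deadband=0.05):
            # One movement cycle: forgetting factor f decays assistance u,
            # weighted kinematic error e builds it back up.
            w = 0.0 if abs(e) < deadband else 1.0   # ignore natural movement variability
            return f * u + g * w * abs(e)

        u, e = 0.0, 0.4
        for cycle in range(30):
            u = assist_update(u, e)
            e = max(0.0, e - 0.02 - 0.1 * u)        # toy learner: error shrinks with practice and help
        print(round(u, 3), round(e, 3))             # assistance has faded once e enters the deadband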

  17. |V_ub| from B → πℓν

    DOE PAGES

    Bailey, Jon A.; et al.

    2015-07-23

    We present a lattice-QCD calculation of the B → πℓν semileptonic form factors and a new determination of the CKM matrix element |Vub|. We use the MILC asqtad (2+1)-flavor lattice configurations at four lattice spacings and light-quark masses down to 1/20 of the physical strange-quark mass. We extrapolate the lattice form factors to the continuum using staggered chiral perturbation theory in the hard-pion and SU(2) limits. We employ a model-independent z parametrization to extrapolate our lattice form factors from large-recoil momentum to the full kinematic range. We introduce a new functional method to propagate information from the chiral-continuum extrapolation to the z expansion. We present our results together with a complete systematic error budget, including a covariance matrix to enable the combination of our form factors with other lattice-QCD and experimental results. To obtain |Vub|, we simultaneously fit the experimental data for the B → πℓν differential decay rate obtained by the BABAR and Belle collaborations together with our lattice form-factor results. We find |Vub| = (3.72 ± 0.16) × 10^-3, where the error is from the combined fit to lattice plus experiments and includes all sources of uncertainty. Our form-factor results bring the QCD error on |Vub| to the same level as the experimental error. We also provide results for the B → πℓν vector and scalar form factors obtained from the combined lattice and experiment fit, which are more precisely determined than from our lattice-QCD calculation alone. Lastly, these results can be used in other phenomenological applications and to test other approaches to QCD.

  18. |V_ub| from B → πℓν

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Jon A.; et al.

    We present a lattice-QCD calculation of the B → πℓν semileptonic form factors and a new determination of the CKM matrix element |Vub|. We use the MILC asqtad (2+1)-flavor lattice configurations at four lattice spacings and light-quark masses down to 1/20 of the physical strange-quark mass. We extrapolate the lattice form factors to the continuum using staggered chiral perturbation theory in the hard-pion and SU(2) limits. We employ a model-independent z parametrization to extrapolate our lattice form factors from large-recoil momentum to the full kinematic range. We introduce a new functional method to propagate information from the chiral-continuum extrapolation to the z expansion. We present our results together with a complete systematic error budget, including a covariance matrix to enable the combination of our form factors with other lattice-QCD and experimental results. To obtain |Vub|, we simultaneously fit the experimental data for the B → πℓν differential decay rate obtained by the BABAR and Belle collaborations together with our lattice form-factor results. We find |Vub| = (3.72 ± 0.16) × 10^-3, where the error is from the combined fit to lattice plus experiments and includes all sources of uncertainty. Our form-factor results bring the QCD error on |Vub| to the same level as the experimental error. We also provide results for the B → πℓν vector and scalar form factors obtained from the combined lattice and experiment fit, which are more precisely determined than from our lattice-QCD calculation alone. Lastly, these results can be used in other phenomenological applications and to test other approaches to QCD.

  19. Vibration characteristics of teak wood filled steel tubes

    NASA Astrophysics Data System (ADS)

    Danawade, Bharatesh Adappa; Malagi, Ravindra Rachappa

    2018-05-01

    The objective of this paper is to determine the fundamental frequency and damping ratio of teak wood filled steel tubes. Mechanically bonded teak wood filled steel tubes were evaluated by an experimental impact hammer test using modal analysis. The results of the impact hammer test were verified and validated with the finite element tool ANSYS using harmonic analysis. The error between the two methods was observed to be within acceptable limits.

  20. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
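
    A small scipy demonstration of the bias described above, under assumed normality: control values have between-animal SD s_a, measurement error adds SD s_m, and "risk" is the probability of exceeding the controls' 99th percentile after a dose-induced mean shift. Analyzing the data with the inflated overall SD underestimates the risk at a given shift, so the dose needed to reach a target risk (the benchmark dose) is overestimated. All numbers are illustrative.

        from math import sqrt
        from scipy.stats import norm

        s_a, s_m = 1.0, 0.6     # between-animal SD and measurement-error SD
        shift = 1.0             # mean shift produced by some dose

        # Correct analysis: cutoff and risk based on the between-animal SD alone.
        cut_true = norm.ppf(0.99, scale=s_a)
        risk_true = norm.sf(cut_true, loc=shift, scale=s_a)

        # Biased analysis: overall SD inflated by measurement error.
        s_tot = sqrt(s_a**2 + s_m**2)
        risk_bias = norm.sf(norm.ppf(0.99, scale=s_tot), loc=shift, scale=s_tot)

        print(risk_true, risk_bias)   # risk_bias < risk_true: risk is underestimated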

  1. Non-linear dynamic compensation system

    NASA Technical Reports Server (NTRS)

    Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)

    1992-01-01

    A non-linear dynamic compensation subsystem is added in the feedback loop of a high precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth optimized for speed of control system response to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to the control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits, and smoothly varied therebetween as the error signal approaches the preselected limits.
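
    The signal path described above is simple enough to sketch directly; here a first-order low-pass stands in for the compensator (the abstract does not specify its design), the error is limited, the limited part passes through the bandwidth-reducing path, and the residual beyond the limits is added back so that large errors see the full-bandwidth path.

        import numpy as np

        def compensate(error_signal, limit=1.0, alpha=0.1):
            # Limiter -> narrow-band compensator -> adder, per the structure described above.
            out = np.zeros(len(error_signal))
            lp = 0.0                                # state of the assumed first-order compensator
            for i, e in enumerate(error_signal):
                e_lim = max(-limit, min(limit, e))  # limiter
                lp += alpha * (e_lim - lp)          # reduced-bandwidth path for small errors
                out[i] = lp + (e - e_lim)           # residual restores wideband response for large errors
            return out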

  2. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
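
    A toy numpy illustration of one of the listed methods, large periodic high-frequency dithering, under simplified assumptions: a rounding quantizer with randomly mismatched output levels stands in for the DAC, a triangular dither spanning several levels is added before quantization, and a moving-average filter removes the dither afterwards. The parameters are illustrative; the averaged output should track the input more linearly than direct quantization because each sample is spread across several mismatched levels.

        import numpy as np

        rng = np.random.default_rng(0)
        levels = np.arange(-8, 9) + 0.05 * rng.standard_normal(17)   # mismatched DAC levels

        def dac(x):
            idx = np.clip(np.round(x).astype(int) + 8, 0, 16)
            return levels[idx]

        n = np.arange(20000)
        signal = 5.0 * np.sin(2 * np.pi * n / 5000.0)
        tri = 2.0 * np.abs((n % 64) / 64.0 - 0.5)    # 0..1 triangle, period 64 samples
        dither = 4.0 * (tri - 0.5)                   # +/- 2-level high-frequency dither

        recovered = np.convolve(dac(signal + dither), np.ones(64) / 64.0, mode="same")
        print(np.std(dac(signal) - signal), np.std(recovered - signal))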

  3. Improved tests of extra-dimensional physics and thermal quantum field theory from new Casimir force measurements

    NASA Astrophysics Data System (ADS)

    Decca, R. S.; Fischbach, E.; Klimchitskaya, G. L.; Krause, D. E.; López, D.; Mostepanenko, V. M.

    2003-12-01

    We report new constraints on extra-dimensional models and other physics beyond the standard model based on measurements of the Casimir force between two dissimilar metals for separations in the range 0.2-1.2 μm. The Casimir force between a Au-coated sphere and a Cu-coated plate of a microelectromechanical torsional oscillator was measured statically with an absolute error of 0.3 pN. In addition, the Casimir pressure between two parallel plates was determined dynamically with an absolute error of ≈0.6 mPa. Within the limits of experimental and theoretical errors, the results are in agreement with a theory that takes into account the finite conductivity and roughness of the two metals. The level of agreement between experiment and theory was then used to set limits on the predictions of extra-dimensional physics and thermal quantum field theory. It is shown that two theoretical approaches to the thermal Casimir force which predict effects linear in temperature are ruled out by these experiments. Finally, constraints on Yukawa corrections to Newton's law of gravity are strengthened by more than an order of magnitude in the range 56-330 nm.

  4. An Effective Terrain Aided Navigation for Low-Cost Autonomous Underwater Vehicles.

    PubMed

    Zhou, Ling; Cheng, Xianghong; Zhu, Yixian; Dai, Chenxi; Fu, Jinbo

    2017-03-25

    Terrain-aided navigation is a potentially powerful solution for obtaining submerged position fixes for autonomous underwater vehicles. The application of terrain-aided navigation with high-accuracy inertial navigation systems has demonstrated meter-level navigation accuracy in sea trials. However, available sensors may be limited depending on the type of the mission. Such limitations, especially for low-grade navigation sensors, not only degrade the accuracy of traditional navigation systems, but also impact the ability to successfully employ terrain-aided navigation. To address this problem, a tightly-coupled navigation method is presented that successfully estimates the critical sensor errors by incorporating raw sensor data directly into an augmented navigation system. Furthermore, three-dimensional distance errors are calculated, providing measurement updates through the particle filter for absolute and bounded position error. The development of the terrain-aided navigation system is elaborated for a vehicle equipped with a non-inertial-grade strapdown inertial navigation system, a 4-beam Doppler Velocity Log range sensor, and a sonar altimeter. Using experimental data for navigation performance evaluation in areas with different terrain characteristics, the experimental results further show that the proposed method can be successfully applied to low-cost AUVs and significantly improves navigation performance.
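
    A schematic particle-filter measurement update of the kind the entry describes, with the details kept deliberately simple: particles carry horizontal-position hypotheses, the predicted terrain height at each particle is compared with the measured height, and weights follow a Gaussian likelihood before resampling. The terrain function, noise level, and dimensions are placeholders, not the paper's augmented-state formulation.

        import numpy as np

        rng = np.random.default_rng(1)

        def terrain(x, y):   # placeholder terrain model
            return 10.0 + 2.0 * np.sin(0.1 * x) + 1.5 * np.cos(0.08 * y)

        def update(particles, weights, measured_height, sigma=0.5):
            # Weight particles by the terrain-measurement likelihood, then resample.
            pred = terrain(particles[:, 0], particles[:, 1])
            weights = weights * np.exp(-0.5 * ((measured_height - pred) / sigma) ** 2)
            weights /= weights.sum()
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))

        particles = rng.normal([50.0, 50.0], 20.0, size=(500, 2))
        weights = np.full(500, 1.0 / 500)
        particles, weights = update(particles, weights, measured_height=terrain(55.0, 48.0))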

  5. An Effective Terrain Aided Navigation for Low-Cost Autonomous Underwater Vehicles

    PubMed Central

    Zhou, Ling; Cheng, Xianghong; Zhu, Yixian; Dai, Chenxi; Fu, Jinbo

    2017-01-01

    Terrain-aided navigation is a potentially powerful solution for obtaining submerged position fixes for autonomous underwater vehicles. The application of terrain-aided navigation with high-accuracy inertial navigation systems has demonstrated meter-level navigation accuracy in sea trials. However, available sensors may be limited depending on the type of the mission. Such limitations, especially for low-grade navigation sensors, not only degrade the accuracy of traditional navigation systems, but also impact the ability to successfully employ terrain-aided navigation. To address this problem, a tightly-coupled navigation method is presented that successfully estimates the critical sensor errors by incorporating raw sensor data directly into an augmented navigation system. Furthermore, three-dimensional distance errors are calculated, providing measurement updates through the particle filter for absolute and bounded position error. The development of the terrain-aided navigation system is elaborated for a vehicle equipped with a non-inertial-grade strapdown inertial navigation system, a 4-beam Doppler Velocity Log range sensor, and a sonar altimeter. Using experimental data for navigation performance evaluation in areas with different terrain characteristics, the experimental results further show that the proposed method can be successfully applied to low-cost AUVs and significantly improves navigation performance. PMID:28346346

  6. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    DOE PAGES

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; ...

    2018-02-12

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. Here, we use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  7. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    NASA Astrophysics Data System (ADS)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; Blok, M. S.; Kimchi-Schwartz, M. E.; McClean, J. R.; Carter, J.; de Jong, W. A.; Siddiqi, I.

    2018-02-01

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  8. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error (MMSE) time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error that is within a margin of 5% of the average output error after 50,000 symbols. The difference in convergence reduction between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
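
    The step-size trade-off the entry exploits can be shown with a generic scalar LMS sketch: a large step early for fast convergence, decayed toward a small step for low steady-state error. This is a single-channel toy under assumed parameters, not the authors' MIMO TDE/FDE implementation.

        import numpy as np

        rng = np.random.default_rng(2)
        h = np.array([0.9, 0.3, -0.2])              # unknown channel impulse response
        n_taps, n_sym = 7, 5000
        x = rng.choice([-1.0, 1.0], size=n_sym)     # known training symbols
        r = np.convolve(x, h)[:n_sym] + 0.01 * rng.standard_normal(n_sym)

        w = np.zeros(n_taps)
        mu_max, mu_min, tau = 0.08, 0.005, 1000.0
        for k in range(n_taps, n_sym):
            mu = mu_min + (mu_max - mu_min) * np.exp(-k / tau)   # adaptive step size
            seg = r[k - n_taps:k][::-1]                          # most recent samples first
            err = x[k - n_taps // 2] - w @ seg                   # error vs center-tap-delayed symbol
            w += mu * err * seg                                  # LMS update
        print(abs(err))                                          # small residual after convergence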

  9. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. Here, we use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  10. The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking.

    PubMed

    Norman, Geoffrey R; Monteiro, Sandra D; Sherbino, Jonathan; Ilgen, Jonathan S; Schmidt, Henk G; Mamede, Silvia

    2017-01-01

    Contemporary theories of clinical reasoning espouse a dual processing model, which consists of a rapid, intuitive component (Type 1) and a slower, logical and analytical component (Type 2). Although the general consensus is that this dual processing model is a valid representation of clinical reasoning, the causes of diagnostic errors remain unclear. Cognitive theories about human memory propose that such errors may arise from both Type 1 and Type 2 reasoning. Errors in Type 1 reasoning may be a consequence of the associative nature of memory, which can lead to cognitive biases. However, the literature indicates that, with increasing expertise (and knowledge), the likelihood of errors decreases. Errors in Type 2 reasoning may result from the limited capacity of working memory, which constrains computational processes. In this article, the authors review the medical literature to answer two substantial questions that arise from this work: (1) To what extent do diagnostic errors originate in Type 1 (intuitive) processes versus in Type 2 (analytical) processes? (2) To what extent are errors a consequence of cognitive biases versus a consequence of knowledge deficits? The literature suggests that both Type 1 and Type 2 processes contribute to errors. Although it is possible to experimentally induce cognitive biases, particularly availability bias, the extent to which these biases actually contribute to diagnostic errors is not well established. Educational strategies directed at the recognition of biases are ineffective in reducing errors; conversely, strategies focused on the reorganization of knowledge to reduce errors have small but consistent benefits.

  11. Performance improvement of a binary quantized all-digital phase-locked loop with a new aided-acquisition technique

    NASA Astrophysics Data System (ADS)

    Sandoz, J.-P.; Steenaart, W.

    1984-12-01

    The nonuniform sampling digital phase-locked loop (DPLL) with sequential loop filter, in which the correction sizes are controlled by the accumulated differences of two additional phase comparators, is graphically analyzed. In the absence of noise and frequency drift, the analysis gives some physical insight into the acquisition and tracking behavior. Taking noise into account, a mathematical model is derived and a random walk technique is applied to evaluate the rms phase error and the mean acquisition time. Experimental results confirm the appropriate simplifying hypotheses used in the numerical analysis. Two related performance measures defined in terms of the rms phase error and the acquisition time for a given SNR are used. These measures provide a common basis for comparing different digital loops and, to a limited extent, also with a first-order linear loop. Finally, the behavior of a modified DPLL under frequency deviation in the presence of Gaussian noise is tested experimentally and by computer simulation.

  12. Phonons in two-dimensional soft colloidal crystals.

    PubMed

    Chen, Ke; Still, Tim; Schoenholz, Samuel; Aptowicz, Kevin B; Schindler, Michael; Maggs, A C; Liu, Andrea J; Yodh, A G

    2013-08-01

    The vibrational modes of pristine and polycrystalline monolayer colloidal crystals composed of thermosensitive microgel particles are measured using video microscopy and covariance matrix analysis. At low frequencies, the Debye relation for two-dimensional harmonic crystals is observed in both crystal types; at higher frequencies, evidence for van Hove singularities in the phonon density of states is significantly smeared out by experimental noise and measurement statistics. The effects of these errors are analyzed using numerical simulations. We introduce methods to correct for these limitations, which can be applied to disordered systems as well as crystalline ones, and we show that application of the error correction procedure to the experimental data leads to more pronounced van Hove singularities in the pristine crystal. Finally, quasilocalized low-frequency modes in polycrystalline two-dimensional colloidal crystals are identified and demonstrated to correlate with structural defects such as dislocations, suggesting that quasilocalized low-frequency phonon modes may be used to identify local regions vulnerable to rearrangements in crystalline as well as amorphous solids.
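
    A condensed numpy sketch of the covariance-matrix analysis named above, under the standard assumptions for this technique: for equal-mass particles in harmonic equilibrium, the stiffness matrix is K = k_B T C^{-1}, where C is the covariance of particle displacements, so mode frequencies follow from the eigenvalues of C. Finite sampling of C is exactly the kind of statistical error the entry says must be corrected for.

        import numpy as np

        def phonon_modes(displacements, kBT=1.0, m=1.0):
            # displacements: (n_frames, 2N) array of particle displacements from equilibrium.
            X = displacements - displacements.mean(axis=0)
            C = (X.T @ X) / len(X)                # displacement covariance matrix
            evals, evecs = np.linalg.eigh(C)      # ascending covariance eigenvalues
            evals = np.clip(evals, 1e-12, None)   # guard against finite-sampling noise
            freqs = np.sqrt(kBT / (m * evals))    # soft (high-covariance) modes -> low frequency
            return freqs[::-1], evecs[:, ::-1]    # sorted from low to high frequency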

  13. Quantum Steering Inequality with Tolerance for Measurement-Setting Errors: Experimentally Feasible Signature of Unbounded Violation

    NASA Astrophysics Data System (ADS)

    Rutkowski, Adam; Buraczewski, Adam; Horodecki, Paweł; Stobińska, Magdalena

    2017-01-01

    Quantum steering is a relatively simple test for proving that the values of quantum-mechanical measurement outcomes come into being only in the act of measurement. By exploiting quantum correlations, Alice can influence (steer) Bob's physical system in a way that is impossible in classical mechanics, as shown by the violation of steering inequalities. Demonstrating this and similar quantum effects for systems of increasing size, approaching even the classical limit, is a long-standing challenge. Here, we prove an experimentally feasible unbounded violation of a steering inequality. We derive its universal form, in which tolerance for measurement-setting errors is explicitly built in by means of the Deutsch-Maassen-Uffink entropic uncertainty relation. Then, generalizing mutual unbiasedness, we apply the inequality to the multisinglet and multiparticle bipartite Bell state. However, the method is general and opens the possibility of employing multiparticle bipartite steering for randomness certification and the development of quantum technologies, e.g., random access codes.

  14. Computing and analyzing the sensitivity of MLP due to the errors of the i.i.d. inputs and weights based on CLT.

    PubMed

    Yang, Sheng-Sung; Ho, Chia-Lu; Siu, Sammy

    2010-12-01

    In this paper, we propose an algorithm based on the central limit theorem (CLT) to compute the sensitivity of the multilayer perceptron (MLP) to errors in the inputs and weights. For simplicity and practicality, all inputs and weights studied here are independently and identically distributed (i.i.d.). The theoretical results derived from the proposed algorithm show that the sensitivity of the MLP is affected by the number of layers and the number of neurons adopted in each layer. Experimental results of the sensitivity are also presented, and their good agreement with the theoretical results verifies the reliability and feasibility of the proposed algorithm. Furthermore, the proposed algorithm can also be applied to compute precisely the sensitivity of the MLP with any available activation functions and any types of i.i.d. inputs and weights.

  15. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  16. Limits on estimating the width of thin tubular structures in 3D images.

    PubMed

    Wörz, Stefan; Rohr, Karl

    2006-01-01

    This work studies limits on estimating the width of thin tubular structures in 3D images. Based on nonlinear estimation theory we analyze the minimal stochastic error of estimating the width. Given a 3D analytic model of the image intensities of tubular structures, we derive a closed-form expression for the Cramér-Rao bound of the width estimate under image noise. We use the derived lower bound as a benchmark and compare it with three previously proposed accuracy limits for vessel width estimation. Moreover, by experimental investigations we demonstrate that the derived lower bound can be achieved by fitting a 3D parametric intensity model directly to the image data.
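
    The Fisher-information calculation behind such a bound is easy to reproduce numerically: for a parametric profile g(x; w) in additive Gaussian noise of standard deviation sigma, I(w) = sigma^{-2} * sum_i (dg_i/dw)^2 and the Cramér-Rao bound is I(w)^{-1/2}. A 1D Gaussian cross-section stands in here for the cited 3D tubular intensity model; all parameters are illustrative.

        import numpy as np

        def width_crb(w=2.0, sigma=0.1, amplitude=1.0):
            # CRB on the SD of any unbiased estimate of the width w.
            x = np.linspace(-10.0, 10.0, 201)     # sampling grid (voxel centers)
            g = lambda width: amplitude * np.exp(-x**2 / (2.0 * width**2))
            eps = 1e-6
            dg_dw = (g(w + eps) - g(w - eps)) / (2.0 * eps)   # numerical derivative wrt width
            fisher = np.sum(dg_dw**2) / sigma**2
            return 1.0 / np.sqrt(fisher)

        print(width_crb())   # shrinks with lower noise or denser sampling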

  17. How scientific experiments are designed: Problem solving in a knowledge-rich, error-rich environment

    NASA Astrophysics Data System (ADS)

    Baker, Lisa M.

    While theory formation and the relation between theory and data has been investigated in many studies of scientific reasoning, researchers have focused less attention on reasoning about experimental design, even though the experimental design process makes up a large part of real-world scientists' reasoning. The goal of this thesis was to provide a cognitive account of the scientific experimental design process by analyzing experimental design as problem-solving behavior (Newell & Simon, 1972). Three specific issues were addressed: the effect of potential error on experimental design strategies, the role of prior knowledge in experimental design, and the effect of characteristics of the space of alternate hypotheses on alternate hypothesis testing. A two-pronged in vivo/in vitro research methodology was employed, in which transcripts of real-world scientific laboratory meetings were analyzed as well as undergraduate science and non-science majors' design of biology experiments in the psychology laboratory. It was found that scientists use a specific strategy to deal with the possibility of error in experimental findings: they include "known" control conditions in their experimental designs both to determine whether error is occurring and to identify sources of error. The known controls strategy had not been reported in earlier studies with science-like tasks, in which participants' responses to error had consisted of replicating experiments and discounting results. With respect to prior knowledge: scientists and undergraduate students drew on several types of knowledge when designing experiments, including theoretical knowledge, domain-specific knowledge of experimental techniques, and domain-general knowledge of experimental design strategies. Finally, undergraduate science students generated and tested alternates to their favored hypotheses when the space of alternate hypotheses was constrained and searchable. This result may help explain findings of confirmation bias in earlier studies using science-like tasks, in which characteristics of the alternate hypothesis space may have made it unfeasible for participants to generate and test alternate hypotheses. In general, scientists and science undergraduates were found to engage in a systematic experimental design process that responded to salient features of the problem environment, including the constant potential for experimental error, availability of alternate hypotheses, and access to both theoretical knowledge and knowledge of experimental techniques.

  18. Combining experimental and simulation data of molecular processes via augmented Markov models.

    PubMed

    Olsson, Simon; Wu, Hao; Paul, Fabian; Clementi, Cecilia; Noé, Frank

    2017-08-01

    Accurate mechanistic description of structural changes in biomolecules is an increasingly important topic in structural and chemical biology. Markov models have emerged as a powerful way to approximate the molecular kinetics of large biomolecules while keeping full structural resolution in a divide-and-conquer fashion. However, the accuracy of these models is limited by that of the force fields used to generate the underlying molecular dynamics (MD) simulation data. Whereas the quality of classical MD force fields has improved significantly in recent years, remaining errors in the Boltzmann weights are still on the order of a few k_BT, which may lead to significant discrepancies when comparing to experimentally measured rates or state populations. Here we take the view that simulations using a sufficiently good force field sample conformations that are valid but have inaccurate weights, yet these weights may be made accurate by incorporating experimental data a posteriori. To do so, we propose augmented Markov models (AMMs), an approach that combines concepts from probability theory and information theory to consistently treat systematic force-field error and statistical errors in simulation and experiment. Our results demonstrate that AMMs can reconcile conflicting results for protein mechanisms obtained by different force fields and correct for a wide range of stationary and dynamical observables even when only equilibrium measurements are incorporated into the estimation process. This approach constitutes a unique avenue to combine experiment and computation into integrative models of biomolecular structure and dynamics.

  19. Mathematical Model and Calibration Experiment of a Large Measurement Range Flexible Joints 6-UPUR Six-Axis Force Sensor

    PubMed Central

    Zhao, Yanzhi; Zhang, Caifeng; Zhang, Dan; Shi, Zhongpan; Zhao, Tieshi

    2016-01-01

    Nowadays, improving the accuracy and enlarging the measuring range of six-axis force sensors for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing experiments has become an urgent objective. However, it is still difficult to achieve high accuracy and a large measuring range with traditional parallel six-axis force sensors due to the influence of the gap and friction of the joints. Therefore, to overcome these limitations, this paper proposes a 6-Universal-Prismatic-Universal-Revolute (6-UPUR) parallel mechanism with flexible joints to develop a large measurement range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory in two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and least squares method are used to process the experimental data, and errors of kind I and kind II linearity are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, while the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications. PMID:27529244
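
    Least-squares calibration of such a sensor can be sketched compactly under the usual linear model: applied wrenches F (6 x N) relate to measured gauge outputs V (6 x N) through F = C V, and the calibration matrix C is recovered by least squares from the loading experiments. Dimensions and noise level below are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        C_true = np.eye(6) + 0.1 * rng.standard_normal((6, 6))   # unknown sensor matrix
        F_applied = rng.standard_normal((6, 200))                # known calibration wrenches
        V = np.linalg.solve(C_true, F_applied)                   # simulated gauge readings
        V += 0.001 * rng.standard_normal(V.shape)                # measurement noise

        # Least squares: choose C minimizing ||C V - F||, i.e. solve V.T @ C.T = F.T.
        C_est = np.linalg.lstsq(V.T, F_applied.T, rcond=None)[0].T
        print(np.max(np.abs(C_est - C_true)))                    # small for well-conditioned loads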

  20. Estimation of fast and slow wave properties in cancellous bone using Prony's method and curve fitting.

    PubMed

    Wear, Keith A

    2013-04-01

    The presence of two longitudinal waves in poroelastic media is predicted by Biot's theory and has been confirmed experimentally in through-transmission measurements in cancellous bone. Estimation of attenuation coefficients and velocities of the two waves is challenging when the two waves overlap in time. The modified least squares Prony's (MLSP) method in conjunction with curve fitting (MLSP + CF) is tested using simulations based on published values for fast and slow wave attenuation coefficients and velocities in cancellous bone from several studies in bovine femur, human femur, and human calcaneus. The search algorithm is accelerated by exploiting correlations among search parameters. The performance of the algorithm is evaluated as a function of signal-to-noise ratio (SNR). For a typical experimental SNR (40 dB), the root-mean-square errors (RMSEs) for one example (human femur) with fast and slow waves separated by approximately half of a pulse duration were 1 m/s (slow wave velocity), 4 m/s (fast wave velocity), 0.4 dB/cm MHz (slow wave attenuation slope), and 1.7 dB/cm MHz (fast wave attenuation slope). The MLSP + CF method is fast (requiring less than 2 s at SNR = 40 dB on a consumer-grade notebook computer) and is flexible with respect to the functional form of the parametric model for the transmission coefficient. The MLSP + CF method provides sufficient accuracy and precision for many applications such that experimental error is a greater limiting factor than estimation error.
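
    The core of the approach, a least-squares Prony fit of damped exponentials, can be written in a few lines (without the "modified" denoising refinements or the subsequent curve-fitting stage): a linear-prediction system gives the characteristic polynomial, its roots give the poles, and a second least-squares solve gives the amplitudes.

        import numpy as np

        def prony(x, p):
            # Fit x[n] ~ sum_k a_k * z_k**n with p damped exponentials.
            x = np.asarray(x, dtype=float)
            N = len(x)
            # Step 1: linear prediction, x[n] = -(c_1 x[n-1] + ... + c_p x[n-p]).
            A = np.column_stack([x[p - i:N - i] for i in range(1, p + 1)])
            c = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
            # Step 2: poles are roots of z**p + c_1 z**(p-1) + ... + c_p.
            z = np.roots(np.concatenate(([1.0], c)))
            # Step 3: amplitudes from the Vandermonde system, columns z_k**n.
            V = np.vander(z, N, increasing=True).T
            a = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
            return z, a

        # Two overlapping damped oscillations, loosely analogous to fast/slow waves:
        n = np.arange(200)
        sig = 1.0 * 0.98**n * np.cos(0.30 * n) + 0.5 * 0.95**n * np.cos(0.18 * n)
        z, a = prony(sig, p=4)                    # real signal -> conjugate pole pairs
        print(np.round(np.abs(z), 3), np.round(np.angle(z), 3))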

  1. Thermodynamic Basis for the Emergence of Genomes during Prebiotic Evolution

    PubMed Central

    Woo, Hyung-June; Vijaya Satya, Ravi; Reifman, Jaques

    2012-01-01

    The RNA world hypothesis views modern organisms as descendants of RNA molecules. The earliest RNA molecules must have been random sequences, from which the first genomes that coded for polymerase ribozymes emerged. The quasispecies theory by Eigen predicts the existence of an error threshold limiting genomic stability during such transitions, but does not address the spontaneity of changes. Following a recent theoretical approach, we applied the quasispecies theory combined with kinetic/thermodynamic descriptions of RNA replication to analyze the collective behavior of RNA replicators based on known experimental kinetics data. We find that, with increasing fidelity (relative rate of base-extension for Watson-Crick versus mismatched base pairs), replications without enzymes, with ribozymes, and with protein-based polymerases are above, near, and below a critical point, respectively. The prebiotic evolution therefore must have crossed this critical region. Over large regions of the phase diagram, fitness increases with increasing fidelity, biasing random drifts in sequence space toward ‘crystallization.’ This region encloses the experimental nonenzymatic fidelity value, favoring evolutions toward polymerase sequences with ever higher fidelity, despite error rates above the error catastrophe threshold. Our work shows that experimentally characterized kinetics and thermodynamics of RNA replication allow us to determine the physicochemical conditions required for the spontaneous crystallization of biological information. Our findings also suggest that among many potential oligomers capable of templated replication, RNAs may have evolved to form prebiotic genomes due to the value of their nonenzymatic fidelity. PMID:22693440
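
    The error threshold mentioned above has a standard closed form that is easy to evaluate: with per-base copying fidelity q and a selective advantage sigma for the master sequence, the maximum sustainable genome length is approximately L_max = ln(sigma) / (1 - q). The values below are illustrative.

        import numpy as np

        def max_genome_length(q, sigma):
            # Eigen quasispecies error threshold: L_max ~ ln(sigma) / (1 - q).
            return np.log(sigma) / (1.0 - q)

        for q in (0.96, 0.99, 0.999):   # increasing replication fidelity
            print(q, round(max_genome_length(q, sigma=10.0), 1))
        # Fidelities in the nonenzymatic range cap genomes at tens to hundreds of bases.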

  2. The Frame Constraint on Experimentally Elicited Speech Errors in Japanese.

    PubMed

    Saito, Akie; Inoue, Tomoyoshi

    2017-06-01

    The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which operates separately from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the same position within a syllable or word. Most of the evidence for the effect comes from analyses of naturally occurring speech errors in Indo-European languages, and there are few studies examining the effect in experimentally elicited speech errors and in other languages. This study examined whether experimentally elicited sound errors in Japanese exhibit the syllable position effect. In Japanese, the sub-syllabic unit known as "mora" is considered to be a basic sound unit in production. Results showed that the syllable position effect occurred in mora errors, suggesting that the frame constrains the ordering of sounds during speech production.

  3. Space charge enhanced plasma gradient effects on satellite electric field measurements

    NASA Technical Reports Server (NTRS)

    Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.

    1991-01-01

    It has been recognized that plasma gradients can cause errors in magnetospheric electric field measurements made by double probes. Space charge enhanced plasma gradient induced error (PGIE) is discussed in general terms; the results of a laboratory experiment designed to demonstrate this error are presented, and a simple expression that quantifies the error is derived. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure that the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors in space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady-state space charge enhanced PGIE measured by two identical current-biased probes.

  4. Testing of the ABBN-RF multigroup data library in photon transport calculations

    NASA Astrophysics Data System (ADS)

    Koscheev, Vladimir; Lomakov, Gleb; Manturov, Gennady; Tsiboulia, Anatoly

    2017-09-01

    Gamma radiation is produced in both nuclear fuel and shielding materials. Photon interaction data are known with adequate accuracy, but secondary gamma-ray production data are known much less accurately. The purpose of this work is to study secondary gamma-ray production from neutron-induced reactions in iron and lead using the MCNP code and modern nuclear data libraries such as ROSFOND, ENDF/B-7.1, JEFF-3.2, and JENDL-4.0. The calculations show that these libraries contain differing photon production data for neutron-induced reactions and agree poorly with an evaluated benchmark experiment. The ABBN-RF multigroup cross-section library is based on the ROSFOND data. It is presented in two forms of micro cross sections: ABBN and MATXS formats. Comparison of group-wise calculations using both ABBN and MATXS data to point-wise calculations with the ROSFOND library shows good agreement. The discrepancies between calculated and experimental (C/E) results for the neutron spectra lie within the experimental errors; for the photon spectrum they fall outside the experimental errors. Calculations using group-wise and point-wise representations of the cross sections agree well for both photon and neutron spectra.

  5. Validation of Analytical Damping Ratio by Fatigue Stress Limit

    NASA Astrophysics Data System (ADS)

    Foong, Faruq Muhammad; Chung Ket, Thein; Beng Lee, Ooi; Aziz, Abdul Rashid Abdul

    2018-03-01

    The optimisation process of a vibration energy harvester is usually restricted to experimental approaches due to the lack of an analytical equation to describe the damping of a system. This study derives an analytical equation that describes the first-mode damping ratio of a clamp-free cantilever beam under harmonic base excitation by combining the transverse equation of motion of the beam with the damping-stress equation. This equation, unlike other common damping determination methods, is independent of experimental inputs or finite element simulations and can be solved using a simple iterative convergence method. The derived equation was found to be valid for cases in which the maximum bending stress in the beam is below the fatigue limit stress of the beam. However, an increasing trend in the error between the experimental and the analytical results was observed at high stress levels. Hence, the fatigue limit stress was used as a parameter to define the validity of the analytical equation.

  6. Modeling and characterization of multipath in global navigation satellite system ranging signals

    NASA Astrophysics Data System (ADS)

    Weiss, Jan Peter

    The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the Earth, in real time and regardless of weather conditions. Since the system became operational, improvements in many areas have reduced the systematic errors affecting GPS measurements such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude, because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency, because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per-satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates. The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.

  7. The techniques of quality operations computational and experimental researches of the launch vehicles in the drawing-board stage

    NASA Astrophysics Data System (ADS)

    Rozhaeva, K.

    2018-01-01

    The aim of the research is to assure the quality of the design process at the stage of research work on the development of an active on-board system for the descent of spent launch-vehicle stages with liquid-propellant rocket engines, by simulating the gasification of the unused fuel residues in the tanks. A design technique for the gasification of liquid rocket propellant residues in the tank is proposed, in which errors in the calculation algorithm are found and fixed to increase the accuracy of the calculated results. Experimental modelling of liquid evaporation in a limited reservoir on an experimental stand makes it possible, by rejecting false measurements according to given criteria and detecting faults, to enhance the reliability of the experimental studies and to reduce the cost of the experiments.

  8. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control-theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight-quality handling simulation.

  9. Automated error correction in IBM quantum computer and explicit generalization

    NASA Astrophysics Data System (ADS)

    Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.

    2018-06-01

    Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which can both detect and automatically correct any arbitrary phase-change error, any phase-flip error, any bit-flip error, or any combination of these errors.
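
    As an illustration of the syndrome-based correction that such codes automate, the sketch below builds the textbook three-qubit bit-flip code in Qiskit (assumed installed); it is a generic example, not the GHZ-based nondestructive-discrimination code of the paper.

```python
# Generic three-qubit bit-flip code; illustrates automated syndrome-based
# correction, not the paper's GHZ-state discrimination code.
from qiskit import QuantumCircuit

qc = QuantumCircuit(5, 2)  # qubits 0-2: data; qubits 3-4: syndrome ancillas

# Encode qubit 0 into the logical state a|000> + b|111>.
qc.cx(0, 1)
qc.cx(0, 2)

qc.x(1)  # inject a deliberate bit-flip error on qubit 1

# Syndrome extraction: ancilla 3 measures parity of (0,1), ancilla 4 of (1,2).
qc.cx(0, 3)
qc.cx(1, 3)
qc.cx(1, 4)
qc.cx(2, 4)
qc.measure([3, 4], [0, 1])

# Syndrome 11 -> flip qubit 1; 10 -> flip qubit 0; 01 -> flip qubit 2.
print(qc.draw())
```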

  10. High accuracy switched-current circuits using an improved dynamic mirror

    NASA Technical Reports Server (NTRS)

    Zweigle, G.; Fiez, T.

    1991-01-01

    The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well-established switched-capacitor circuit technique. High-speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors, but the dynamic current mirror has been limited by other sources of error, including clock feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback, the errors due to current imbalance in the dynamic current mirror are reduced; simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.

  11. Method for the fabrication error calibration of the CGH used in the cylindrical interferometry system

    NASA Astrophysics Data System (ADS)

    Wang, Qingquan; Yu, Yingjie; Mou, Kebing

    2016-10-01

    This paper presents a method of absolutely calibrating the fabrication error of the CGH in a cylindrical interferometry system for the measurement of cylindricity error. First, a simulated experimental system is set up in ZEMAX. On the one hand, the simulated experimental system demonstrates the feasibility of the proposed method. On the other hand, by changing the positions of the mirror in the simulated experimental system, a misalignment aberration map, consisting of the interferograms at the different positions, is acquired; this map can serve as a reference for experimental adjustment of the real system. Second, the mathematical polynomial, which describes the relationship between the misalignment aberrations and the possible misalignment errors, is discussed.

  12. Intracavity adaptive optics. 1: Astigmatism correction performance.

    PubMed

    Spinhirne, J M; Anafi, D; Freeman, R H; Garcia, H R

    1981-03-15

    A detailed experimental study has been conducted on adaptive optical control methodologies inside a laser resonator. A comparison is presented of several optimization techniques using a multidither zonal coherent optical adaptive technique system within a laser resonator for the correction of astigmatism. A dramatic performance difference is observed when optimizing on beam quality compared with optimizing on power-in-the-bucket. Experimental data are also presented on proper selection criteria for dither frequencies when controlling phase-front errors. The effects of hardware limitations and design considerations on the performance of the system are presented, and general conclusions and physical interpretations of the results are given where possible.

  13. An Experimental Study of a Six Key Handprint Chord Keyboard.

    DTIC Science & Technology

    1986-05-01

    analysis: sequence time, list time, and errors, is better divided by group of tests, beginning or ending. This division forms a logical outline from which... accomplished pianists. Due to the limited amount of time at the keyboard that volunteers were willing to endure, asymptotic behavior was not reached.

  14. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study of the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even when the assumptions underlying the fitted function did not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434

  15. Comparison of various error functions in predicting the optimum isotherm by linear and non-linear regression analysis for the sorption of basic red 9 by activated carbon.

    PubMed

    Kumar, K Vasanth; Porkodi, K; Rocha, F

    2008-01-15

    A comparison of linear and non-linear regression methods for selecting the optimum isotherm was made using the experimental equilibrium data of basic red 9 sorption by activated carbon. The r² value was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), the hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), the average relative error (ARE), the sum of the errors squared (ERRSQ), and the sum of the absolute errors (EABS), were used to estimate the parameters of the two- and three-parameter isotherms and to identify the optimum isotherm. Non-linear regression was found to be a better way to obtain the isotherm parameters and the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function for minimizing the error distribution between the experimental equilibrium data and the predicted isotherms. In the case of the three-parameter isotherms, r² was found to be the best error function. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm; in addition, the theory behind the predicted isotherm should be verified against the experimental data when selecting the optimum isotherm. A coefficient of non-determination, K², is explained and was found to be very useful in identifying the best error function when selecting the optimum isotherm.
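
    For reference, the error functions named above are commonly defined as follows in the sorption literature; normalizations vary slightly between papers, so treat this as a sketch rather than the authors' exact definitions.

```python
import numpy as np

def isotherm_error_functions(q_exp, q_calc, n_params):
    """Common error functions for isotherm fitting; q_exp and q_calc are the
    experimental and predicted equilibrium uptakes, n_params the number of
    isotherm parameters."""
    q_exp = np.asarray(q_exp, dtype=float)
    q_calc = np.asarray(q_calc, dtype=float)
    n = q_exp.size
    resid = q_exp - q_calc
    return {
        "ERRSQ": np.sum(resid ** 2),                       # sum of squared errors
        "EABS": np.sum(np.abs(resid)),                     # sum of absolute errors
        "ARE": 100.0 / n * np.sum(np.abs(resid / q_exp)),  # average relative error, %
        "HYBRID": 100.0 / (n - n_params) * np.sum(resid ** 2 / q_exp),
        "MPSD": 100.0 * np.sqrt(np.sum((resid / q_exp) ** 2) / (n - n_params)),
    }
```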

  16. Estimation of identification limit for a small-type OSL dosimeter on the medical images by measurement of X-ray spectra.

    PubMed

    Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Hashizume, Takuya; Kobayashi, Ikuo

    2016-07-01

    Our aim in this study was to derive the identification limit for a small-type optically stimulated luminescence (OSL) dosimeter, i.e., the conditions under which it does not disturb a medical image when worn on a patient's body during X-ray diagnostic imaging. For evaluation of the detection limit based on an analysis of X-ray spectra, we propose a new quantitative identification method. We performed experiments using diagnostic X-ray equipment, a soft-tissue-equivalent phantom (1-20 cm), and a CdTe X-ray spectrometer representing one pixel of the X-ray imaging detector. Then, for the following two experimental settings, the corresponding X-ray spectra were measured at 40-120 kVp and 0.5-1000 mAs at a source-to-detector distance of 100 cm: (1) X-rays penetrating the soft-tissue-equivalent phantom with the OSL dosimeter attached directly to the phantom, and (2) X-rays penetrating only the soft-tissue-equivalent phantom. Next, the energy fluence and the errors in the fluence were calculated from the spectra. When the energy fluences, within their errors, for these two experimental conditions were indistinguishable, we defined the condition as one in which the OSL dosimeter cannot be identified on the X-ray image. Based on this analysis, we determined the identification limit of the dosimeter and compared it with the irradiation conditions generally used in clinics. We found that the OSL dosimeter could not be identified under the irradiation conditions of abdominal and chest radiography; namely, one can apply the OSL dosimeter to measure the exposure dose in the X-ray irradiation field without disturbing medical images.

  17. Data Transfer Efficiency Over Satellite Circuits Using a Multi-Socket Extension to the File Transfer Protocol (FTP)

    NASA Technical Reports Server (NTRS)

    Allman, Mark; Ostermann, Shawn; Kruse, Hans

    1996-01-01

    In several experiments using NASA's Advanced Communications Technology Satellite (ACTS), investigators have reported disappointing throughput using the transmission control protocol/Internet protocol (TCP/IP) suite over 1.536 Mbit/s (T1) satellite circuits. A detailed analysis of file transfer protocol (FTP) transfers reveals that both the TCP window size and the TCP slow-start algorithm contribute to the observed limits in throughput. In this paper we summarize the experimental and theoretical analysis of the throughput limit imposed by TCP on the satellite circuit. We then discuss in detail the implementation of a multi-socket FTP client and server, XFTP. XFTP has been tested using the ACTS system. Finally, we discuss a preliminary set of tests on a link with non-zero bit error rates. XFTP shows promising performance under these conditions, suggesting that a multi-socket application may be less affected by bit errors than a single, large-window TCP connection.
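
    The window-size ceiling the authors describe follows from throughput ≤ window/RTT. A worked example (the ~540 ms round-trip time is a typical geostationary value assumed here, not a figure from the paper):

```python
# Throughput cap of a single TCP connection: window / round-trip time.
window_bytes = 64 * 1024   # classic 64 KiB maximum window (no window scaling)
rtt_s = 0.54               # assumed geostationary round-trip time, seconds
link_bps = 1.536e6         # T1 circuit rate from the paper

cap_bps = window_bytes * 8 / rtt_s
print(f"single-connection cap: {cap_bps / 1e6:.2f} Mbit/s "
      f"({100 * cap_bps / link_bps:.0f}% of the T1 rate)")
# ~0.97 Mbit/s -- striping a transfer across N sockets multiplies the
# effective window by N, which is the essence of the XFTP approach.
```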

  18. 4.5-Gb/s RGB-LED based WDM visible light communication system employing CAP modulation and RLS based adaptive equalization.

    PubMed

    Wang, Yiguang; Huang, Xingxing; Tao, Li; Shi, Jianyang; Chi, Nan

    2015-05-18

    Inter-symbol interference (ISI) is one of the key problems that seriously limit the transmission data rate in high-speed VLC systems. To eliminate ISI and further improve system performance, a series of equalization schemes has been widely investigated. As an adaptive algorithm commonly used in wireless communication, recursive least squares (RLS) is also suitable for visible light communication due to its quick convergence and good performance. In this paper, for the first time we experimentally demonstrate a high-speed RGB-LED based WDM VLC system employing carrier-less amplitude and phase (CAP) modulation and RLS-based adaptive equalization. An aggregate data rate of 4.5 Gb/s is successfully achieved over 1.5 m of indoor free-space transmission with the bit error rate (BER) below the 7% forward error correction (FEC) limit of 3.8 × 10⁻³. To the best of our knowledge, this is the highest data rate ever achieved in RGB-LED based VLC systems.
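
    A minimal sketch of the RLS tap update used in such equalizers (textbook form with forgetting factor λ; not the authors' exact implementation):

```python
import numpy as np

def rls_equalizer(x, d, n_taps=8, lam=0.99, delta=0.01):
    """Textbook RLS adaptation of a linear equalizer.
    x: received samples; d: known training symbols; lam: forgetting factor."""
    w = np.zeros(n_taps)          # equalizer tap weights
    P = np.eye(n_taps) / delta    # inverse input-correlation estimate
    y = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]          # most recent inputs, newest first
        k = P @ u / (lam + u @ P @ u)      # RLS gain vector
        y[n] = w @ u                       # equalizer output
        w = w + k * (d[n] - y[n])          # update taps on the a priori error
        P = (P - np.outer(k, u @ P)) / lam
    return w, y
```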

  19. A simple model for studying rotation errors of gimbal mount axes in laser tracking system based on spherical mirror as a reflection unit

    NASA Astrophysics Data System (ADS)

    Song, Huixu; Shi, Zhaoyao; Chen, Hongfang; Sun, Yanqiang

    2018-01-01

    This paper presents a novel experimental approach and a simple model for verifying that the spherical mirror of a laser tracking system can lessen the effect of rotation errors of the gimbal mount axes, based on relative-motion reasoning. Sufficient evidence is provided that this simple model can replace the complex optical system in a laser tracking system. The approach interchanges the kinematic relationship between the spherical mirror and the gimbal mount axes: with the axes held fixed, their rotation error motions are replaced by spatial micro-displacements of the spherical mirror. These motions are simulated by driving the spherical mirror along the optical axis and the vertical direction with a precision positioning platform. The effect of the displacement caused by the rotation errors on the laser ranging measurement accuracy is recorded with a laser interferometer. The experimental results show that the laser ranging measurement error caused by the rotation errors is less than 0.1 μm if the radial and axial error motions are under 10 μm. The relative-motion approach not only simplifies the experimental procedure but also shows that the spherical mirror can reduce the effect of rotation errors of the gimbal mount axes in a laser tracking system.

  20. [The effectiveness of error reporting promoting strategy on nurse's attitude, patient safety culture, intention to report and reporting rate].

    PubMed

    Kim, Myoungsoo

    2010-04-01

    The purpose of this study was to examine the impact of strategies to promote the reporting of errors on nurses' attitudes toward reporting errors, organizational culture related to patient safety, intention to report, and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, the χ²-test, the t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude toward reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083, p<.001). There was no significant difference in some categories of organizational culture and intention to report. The study findings indicate that strategies that promote the reporting of errors play an important role in producing positive attitudes toward reporting errors and improving reporting behavior. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.

  1. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies.

    PubMed

    Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza

    2018-03-26

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.
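
    A hedged sketch of the suggested Monte Carlo approach: propagate assumed epistemic error sources (recall bias, transcription noise) through an exposure estimate and report an uncertainty interval. All distributions and magnitudes below are hypothetical, not values from the cited cohorts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical error sources in a self-reported phone-use exposure estimate.
true_calls_per_week = 10.0
recall_bias = rng.normal(1.2, 0.3, n)        # systematic over-reporting
transcription = rng.normal(0.0, 0.5, n)      # data-entry noise, calls/week
dose_per_call = rng.lognormal(0.0, 0.4, n)   # per-call exposure variability

exposure = (true_calls_per_week * recall_bias + transcription) * dose_per_call
lo, med, hi = np.percentile(exposure, [2.5, 50, 97.5])
print(f"median exposure {med:.1f}, 95% interval [{lo:.1f}, {hi:.1f}]")
```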

  2. Three-dimensional microscopic deformation measurements on cellular solids.

    PubMed

    Genovese, K

    2016-07-01

    The increasing interest in small-scale problems demands novel experimental protocols providing dense sets of 3D deformation data on complex-shaped microstructures. Obtaining such information is particularly significant for the study of natural and engineered cellular solids, for which experimental data collected at the macro scale and describing the global mechanical response provide only limited information on their function/structure relationship. Cellular solids, in fact, owe their superior mechanical performance to a unique arrangement of bulk material properties (i.e., anisotropy and heterogeneity) and cell structural features (i.e., pore shape, size and distribution) at the micro- and nano-scales. To address the need for full-field experimental data down to the cell level, this paper proposes a single-camera stereo-Digital Image Correlation (DIC) system that uses a wedge prism in series with a telecentric lens to perform surface shape and deformation measurements on microstructures in three dimensions. Although the system has a limited measurement volume (FOV ≈ 2.8 × 4.3 mm², error-free DOF ≈ 1 mm), large surface areas of cellular samples can be accurately covered by employing a sequential image-capturing scheme followed by an optimization-based mosaicing procedure. The basic principles of the proposed method, together with the results of the benchmarking of its metrological performance and an error analysis, are reported and discussed in detail. Finally, the potential utility of the method is illustrated with micro-resolution three-dimensional measurements on a 3D-printed honeycomb and on a block sample of a Luffa sponge under compression. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Internal consistency tests for evaluation of measurements of anthropogenic hydrocarbons in the troposphere

    NASA Astrophysics Data System (ADS)

    Parrish, D. D.; Trainer, M.; Young, V.; Goldan, P. D.; Kuster, W. C.; Jobson, B. T.; Fehsenfeld, F. C.; Lonneman, W. A.; Zika, R. D.; Farmer, C. T.; Riemer, D. D.; Rodgers, M. O.

    1998-09-01

    Measurements of tropospheric nonmethane hydrocarbons (NMHCs) made in continental North America should exhibit a common pattern determined by photochemical removal and dilution acting upon the typical North American urban emissions. We analyze 11 data sets collected in the United States in the context of this hypothesis, in most cases by analyzing the geometric mean and standard deviations of ratios of selected NMHCs. In the analysis we attribute deviations from the common pattern to plausible systematic and random experimental errors. In some cases the errors have been independently verified and the specific causes identified. Thus this common pattern provides a check for internal consistency in NMHC data sets. Specific tests are presented which should provide useful diagnostics for all data sets of anthropogenic NMHC measurements collected in the United States. Similar tests, based upon the perhaps different emission patterns of other regions, presumably could be developed. The specific tests include (1) a lower limit for ethane concentrations, (2) specific NMHCs that should be detected if any are, (3) the relatively constant mean ratios of the longer-lived NMHCs with similar atmospheric lifetimes, (4) the constant relative patterns of families of NMHCs, and (5) limits on the ambient variability of the NMHC ratios. Many experimental problems are identified in the literature and the Southern Oxidant Study data sets. The most important conclusion of this paper is that a rigorous field intercomparison of simultaneous measurements of ambient NMHCs by different techniques and researchers is of crucial importance to the field of atmospheric chemistry. The tests presented here are suggestive of errors but are not definitive; only a field intercomparison can resolve the uncertainties.
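
    The ratio statistics used in these tests can be computed in a few lines; since NMHC concentrations are roughly log-normally distributed, the geometric mean and geometric standard deviation are the natural summaries (the numbers below are synthetic, for illustration only):

```python
import numpy as np

def geometric_stats(ratio):
    """Geometric mean and geometric standard deviation of a concentration
    ratio, e.g. benzene/toluene."""
    logs = np.log(np.asarray(ratio, dtype=float))
    return np.exp(logs.mean()), np.exp(logs.std(ddof=1))

rng = np.random.default_rng(1)
ratio = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=200)  # synthetic data
gm, gsd = geometric_stats(ratio)
print(f"geometric mean {gm:.2f}, geometric SD {gsd:.2f}")
# A mean ratio far from the common urban-emission pattern, or an implausibly
# large spread, flags a possible systematic or random measurement error.
```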

  4. Disclosure of Medical Errors: What Factors Influence How Patients Respond?

    PubMed Central

    Mazor, Kathleen M; Reed, George W; Yood, Robert A; Fischer, Melissa A; Baril, Joann; Gurwitz, Jerry H

    2006-01-01

    BACKGROUND Disclosure of medical errors is encouraged, but research on how patients respond to specific practices is limited. OBJECTIVE This study sought to determine whether full disclosure, an existing positive physician-patient relationship, an offer to waive associated costs, and the severity of the clinical outcome influenced patients' responses to medical errors. PARTICIPANTS Four hundred and seven health plan members participated in a randomized experiment in which they viewed video depictions of medical error and disclosure. DESIGN Subjects were randomly assigned to experimental condition. Conditions varied in type of medication error, level of disclosure, reference to a prior positive physician-patient relationship, an offer to waive costs, and clinical outcome. MEASURES Self-reported likelihood of changing physicians and of seeking legal advice; satisfaction, trust, and emotional response. RESULTS Nondisclosure increased the likelihood of changing physicians, and reduced satisfaction and trust in both error conditions. Nondisclosure increased the likelihood of seeking legal advice and was associated with a more negative emotional response in the missed allergy error condition, but did not have a statistically significant impact on seeking legal advice or emotional response in the monitoring error condition. Neither the existence of a positive relationship nor an offer to waive costs had a statistically significant impact. CONCLUSIONS This study provides evidence that full disclosure is likely to have a positive effect or no effect on how patients respond to medical errors. The clinical outcome also influences patients' responses. The impact of an existing positive physician-patient relationship, or of waiving costs associated with the error remains uncertain. PMID:16808770

  5. Experimental search for the violation of Pauli exclusion principle: VIP-2 Collaboration.

    PubMed

    Shi, H; Milotti, E; Bartalucci, S; Bazzi, M; Bertolucci, S; Bragadireanu, A M; Cargnelli, M; Clozza, A; De Paolis, L; Di Matteo, S; Egger, J-P; Elnaggar, H; Guaraldo, C; Iliescu, M; Laubenstein, M; Marton, J; Miliucci, M; Pichler, A; Pietreanu, D; Piscicchia, K; Scordo, A; Sirghi, D L; Sirghi, F; Sperandio, L; Vazquez Doce, O; Widmann, E; Zmeskal, J; Curceanu, C

    2018-01-01

    The VIolation of Pauli exclusion principle -2 experiment, or VIP-2 experiment, at the Laboratori Nazionali del Gran Sasso searches for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. Candidate direct violation events come from the transition of a 2p electron to the ground state that is already occupied by two electrons. From the first data-taking campaign of the VIP-2 experiment in 2016, we determined a best upper limit of 3.4 × 10^{-29} for the probability that such a violation exists. Significant improvement in the control of the experimental systematics was also achieved, although not explicitly reflected in the improved upper limit. By introducing a simultaneous spectral fit of the signal and background data in the analysis, we succeeded in taking into account systematic errors that could not be evaluated previously in this type of measurement.

  6. Experimental search for the violation of Pauli exclusion principle. VIP-2 Collaboration

    NASA Astrophysics Data System (ADS)

    Shi, H.; Milotti, E.; Bartalucci, S.; Bazzi, M.; Bertolucci, S.; Bragadireanu, A. M.; Cargnelli, M.; Clozza, A.; De Paolis, L.; Di Matteo, S.; Egger, J.-P.; Elnaggar, H.; Guaraldo, C.; Iliescu, M.; Laubenstein, M.; Marton, J.; Miliucci, M.; Pichler, A.; Pietreanu, D.; Piscicchia, K.; Scordo, A.; Sirghi, D. L.; Sirghi, F.; Sperandio, L.; Vazquez Doce, O.; Widmann, E.; Zmeskal, J.; Curceanu, C.

    2018-04-01

    The VIolation of Pauli exclusion principle -2 experiment, or VIP-2 experiment, at the Laboratori Nazionali del Gran Sasso searches for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. Candidate direct violation events come from the transition of a 2p electron to the ground state that is already occupied by two electrons. From the first data-taking campaign of the VIP-2 experiment in 2016, we determined a best upper limit of 3.4 × 10^{-29} for the probability that such a violation exists. Significant improvement in the control of the experimental systematics was also achieved, although not explicitly reflected in the improved upper limit. By introducing a simultaneous spectral fit of the signal and background data in the analysis, we succeeded in taking into account systematic errors that could not be evaluated previously in this type of measurement.

  7. Nematode Damage Functions: The Problems of Experimental and Sampling Error

    PubMed Central

    Ferris, H.

    1984-01-01

    The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865

  8. Simulating a transmon implementation of the surface code, Part I

    NASA Astrophysics Data System (ADS)

    Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo

    Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.

  9. Computational studies of metal-metal and metal-ligand interactions

    NASA Technical Reports Server (NTRS)

    Barnes, Leslie A.

    1992-01-01

    The geometric structure of Cr(CO)6 is optimized at the modified coupled-pair functional (MCPF), single and double excitation coupled-cluster (CCSD) and CCSD(T) levels of theory (including a perturbational estimate for connected triple excitations), and the force constants for the totally symmetric representation are determined. The geometry of Cr(CO)5 is partially optimized at the MCPF, CCSD and CCSD(T) levels of theory. Comparison with experimental data shows that the CCSD(T) method gives the best results for the structures and force constants, and that remaining errors are probably due to deficiencies in the one-particle basis sets used for CO. A detailed comparison of the properties of free CO is therefore given, at both the MCPF and CCSD/CCSD(T) levels of treatment, using a variety of basis sets. With very large one-particle basis sets, the CCSD(T) method gives excellent results for the bond distance, dipole moment and harmonic frequency of free CO. The total binding energies of Cr(CO)6 and Cr(CO)5 are also determined at the MCPF, CCSD and CCSD(T) levels of theory. The CCSD(T) method gives a much larger total binding energy than either the MCPF or CCSD methods. An analysis of the basis set superposition error (BSSE) at the MCPF level of treatment points out limitations in the one-particle basis used here and in a previous study. Calculations using larger basis sets reduced the BSSE, but the total binding energy of Cr(CO)6 is still significantly smaller than the experimental value, although the first CO bond dissociation energy of Cr(CO)6 is well described. An investigation of 3s3p correlation reveals only a small effect. The remaining discrepancy between the experimental and theoretical total binding energy of Cr(CO)6 is probably due to limitations in the one-particle basis, rather than limitations in the correlation treatment. In particular, an additional d function and an f function on each C and O are needed to obtain quantitative results. This is underscored by the fact that even using a very large primitive set (1042 primitive functions contracted to 300 basis functions), the superposition error for the total binding energy of Cr(CO)6 is 22 kcal/mol at the MCPF level of treatment.

  10. Local-search based prediction of medical image registration error

    NASA Astrophysics Data System (ADS)

    Saygili, Görkem

    2018-03-01

    Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently aiming to predict the error in a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start by extracting image-based and deformation-based features, then apply feature pooling and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features, such as the variation of the deformation vector field, may require up to 20 registrations, which is a considerably time-consuming task. This paper proposes to use features extracted by a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm that finds corresponding voxels between registered image pairs and, based on the amount of shift and on stereo confidence measures, densely predicts the registration error in millimetres using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphics Processing Unit (GPU) and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate a substantially high accuracy achieved by using features from the local search algorithm.
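
    A sketch of the final regression stage under stated assumptions: per-voxel features from the local search (shift magnitudes, stereo confidence measures) feed a scikit-learn Random Forest. The placeholder arrays stand in for real feature extraction, which the paper performs on registered image pairs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Placeholders for per-voxel features from the local search (shift magnitude,
# stereo confidence measures, ...) and known registration errors in mm.
X_train = rng.random((5000, 6))
y_train = rng.random(5000) * 10.0

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
predicted_error_mm = rf.predict(rng.random((20, 6)))  # dense per-voxel prediction
```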

  11. Phase measurement error in summation of electron holography series.

    PubMed

    McLeod, Robert A; Bergen, Michael; Malac, Marek

    2014-06-01

    Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random-walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and is compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs from the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
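
    The drift model described above, linear drift plus a Brownian random walk, is easy to reproduce numerically; the sketch below uses illustrative parameter values, not the measured ones:

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames, dt = 300, 3.0   # assumed: 300 holograms acquired 3 s apart
v = 0.002                 # linear phase-drift rate, rad/s (hypothetical)
sigma_step = 0.01         # random-walk step, rad per frame (hypothetical)

t = np.arange(n_frames) * dt
phase_drift = v * t + np.cumsum(rng.normal(0.0, sigma_step, n_frames))
# Registering each hologram's phase to the first before summation removes
# this drift up to the per-frame registration error.
```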

  12. Accuracy of Area at Risk Quantification by Cardiac Magnetic Resonance According to the Myocardial Infarction Territory.

    PubMed

    Fernández-Friera, Leticia; García-Ruiz, José Manuel; García-Álvarez, Ana; Fernández-Jiménez, Rodrigo; Sánchez-González, Javier; Rossello, Xavier; Gómez-Talavera, Sandra; López-Martín, Gonzalo J; Pizarro, Gonzalo; Fuster, Valentín; Ibáñez, Borja

    2017-05-01

    Area at risk (AAR) quantification is important to evaluate the efficacy of cardioprotective therapies. However, postinfarction AAR assessment could be influenced by the infarcted coronary territory. Our aim was to determine the accuracy of T2-weighted short tau triple-inversion recovery (T2W-STIR) cardiac magnetic resonance (CMR) imaging for accurate AAR quantification in anterior, lateral, and inferior myocardial infarctions. Acute reperfused myocardial infarction was experimentally induced in 12 pigs, with 40-minute occlusion of the left anterior descending (n = 4), left circumflex (n = 4), and right coronary arteries (n = 4). Perfusion CMR was performed during selective intracoronary gadolinium injection at the coronary occlusion site (in vivo criterion standard) and, additionally, a 7-day CMR, including T2W-STIR sequences, was performed. Finally, all animals were sacrificed and underwent postmortem Evans blue staining (classic criterion standard). The concordance between the CMR-based criterion standard and T2W-STIR to quantify AAR was high for anterior and inferior infarctions (r = 0.73; P = .001; mean error = 0.50%; limits = -12.68%-13.68% and r = 0.87; P = .001; mean error = -1.5%; limits = -8.0%-5.8%, respectively). Conversely, the correlation for the circumflex territories was poor (r = 0.21, P = .37), showing a higher mean error and wider limits of agreement. A strong correlation between pathology and the CMR-based criterion standard was observed (r = 0.84, P < .001; mean error = 0.91%; limits = -7.55%-9.37%). T2W-STIR CMR sequences are accurate to determine the AAR for anterior and inferior infarctions; however, their accuracy for lateral infarctions is poor. These findings may have important implications for the design and interpretation of clinical trials evaluating the effectiveness of cardioprotective therapies. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  13. Experimental test of the variability of G using Viking lander ranging data

    NASA Technical Reports Server (NTRS)

    Hellings, R. W.; Adams, P. J.; Anderson, J. D.; Keesey, M. S.; Lau, E. L.; Standish, E. M.; Canuto, V. M.; Goldman, I.

    1983-01-01

    Results are presented from the analysis of solar-system astrometric data, notably the range data to the Viking landers on Mars. A least-squares fit of the parameters of the solar system model to these data limits a simple time variation in the effective Newtonian gravitational constant to (2 ± 4) × 10⁻¹² yr⁻¹ and a rate of drift of atomic clocks relative to the implicit clock of relativistic dynamics to (1 ± 8) × 10⁻¹² yr⁻¹. The error limits quoted are the result of uncertainties in the masses of the asteroids.

  14. Testing accelerometer rectification error caused by multidimensional composite inputs with double turntable centrifuge.

    PubMed

    Guan, W; Meng, X F; Dong, X M

    2014-12-01

    Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error use only one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce the constant acceleration and vibration simultaneously, and we tested the rectification error due to the composite accelerations. First, we deduced the expression for the rectification error from the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out in accordance with this rectification error model. A detailed experimental procedure and the testing results are then described. We measured the rectification error for various constant accelerations at different frequencies and amplitudes of the vibration. The experimental results showed the distinctive characteristics of the rectification error caused by the composite accelerations, and the linear relation between the constant acceleration and the rectification error was demonstrated. The experimental procedure and results presented here can serve as a reference for investigating the characteristics of accelerometers under multiple inputs.

  15. Measurement of Hubble constant: non-Gaussian errors in HST Key Project data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Meghendra; Gupta, Shashikant; Pandey, Ashwini

    2016-08-01

    Assuming the Central Limit Theorem, experimental uncertainties in any data set are expected to follow a Gaussian distribution with zero mean. We propose an elegant method based on the Kolmogorov-Smirnov statistic to test this assumption and apply it to the measurement of the Hubble constant, which determines the expansion rate of the Universe. The measurements were made using the Hubble Space Telescope. Our analysis shows that the uncertainties in this measurement are non-Gaussian.
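
    In spirit, the test standardizes each measurement by its quoted uncertainty and compares the result with N(0, 1) via a one-sample Kolmogorov-Smirnov test. A minimal sketch with synthetic data (the exact statistic and data treatment in the paper may differ):

```python
import numpy as np
from scipy import stats

def ks_gaussianity(values, sigmas):
    """KS test of standardized residuals against the standard normal."""
    z = (np.asarray(values) - np.mean(values)) / np.asarray(sigmas)
    return stats.kstest(z, "norm")

rng = np.random.default_rng(2)
h0 = 72.0 + 3.0 * rng.standard_t(df=3, size=70)  # synthetic, heavy-tailed
stat, p = ks_gaussianity(h0, sigmas=np.full(70, 3.0))
print(f"KS statistic {stat:.3f}, p = {p:.3f}")   # small p => non-Gaussian errors
```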

  16. Human-robot cooperative movement training: Learning a novel sensory motor transformation during walking with robotic assistance-as-needed

    PubMed Central

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-01-01

    Background A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Methods Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. Results We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. Conclusion The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury. PMID:17391527
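
    A minimal sketch of the controller structure described above, with a forgetting factor and a Gaussian error-weighting deadband; the update form and gains are illustrative, not the paper's identified values:

```python
import numpy as np

def update_assistance(F, error, f=0.9, g=0.5, sigma=0.02):
    """One update of an assist-as-needed force controller: the forgetting
    factor f < 1 decays assistance when errors are small, the error term
    restores it when errors grow, and the Gaussian weight makes the robot
    disregard error within the natural variability band sigma (here in m)."""
    weight = 1.0 - np.exp(-error ** 2 / (2 * sigma ** 2))
    return f * F + g * error * weight
```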

  17. Estimation of lower flammability limits of C-H compounds in air at atmospheric pressure, evaluation of temperature dependence and diluent effect.

    PubMed

    Mendiburu, Andrés Z; de Carvalho, João A; Coronado, Christian R

    2015-03-21

    The objective of this study was to estimate the lower flammability limits of C-H compounds in air at 25 °C and 1 atm, at moderately elevated temperatures, and in the presence of diluents. A set of 120 C-H compounds was divided into a correlation set and a prediction set of 60 compounds each. The absolute average relative error for the total set was 7.89%; for the correlation set, it was 6.09%; and for the prediction set it was 9.68%. However, it was shown that by considering different sources of experimental data the values were reduced to 6.5% for the prediction set and to 6.29% for the total set. The method showed consistency with Le Chatelier's law for binary mixtures of C-H compounds. When tested for a temperature range from 5 °C to 100 °C, the absolute average relative errors were 2.41% for methane; 4.78% for propane; 0.29% for iso-butane and 3.86% for propylene. When nitrogen was added, the absolute average relative errors were 2.48% for methane; 5.13% for propane; 0.11% for iso-butane and 0.15% for propylene. When carbon dioxide was added, the absolute relative errors were 1.80% for methane; 5.38% for propane; 0.86% for iso-butane and 1.06% for propylene. Copyright © 2014 Elsevier B.V. All rights reserved.
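
    Le Chatelier's law, against which the method was checked for binary mixtures, has a one-line form; the sketch below uses textbook pure-component LFL values for illustration:

```python
def le_chatelier_lfl(mole_fractions, lfls):
    """Mixture lower flammability limit (vol %) by Le Chatelier's law:
    LFL_mix = 1 / sum(x_i / LFL_i), with fuel mole fractions x_i summing to 1."""
    total = sum(mole_fractions)
    return 1.0 / sum((x / total) / lfl for x, lfl in zip(mole_fractions, lfls))

# 60/40 methane-propane blend; the pure-component LFLs (about 5.0 and
# 2.1 vol %) are textbook values used here for illustration.
print(f"{le_chatelier_lfl([0.6, 0.4], [5.0, 2.1]):.2f} vol %")  # ~3.2 vol %
```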

  18. Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.

    2008-04-01

    Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.

  19. Testing the effect of computer-generated hologram fabrication error in a cylindrical interferometry system

    NASA Astrophysics Data System (ADS)

    Wang, Qingquan; Yu, Yingjie; Mou, Kebing

    2017-10-01

    This paper presents a method of testing the effect of computer-generated hologram (CGH) fabrication error in a cylindrical interferometry system. An experimental system is developed for calibrating the effect of this error. In the calibrating system, a mirror with high surface accuracy is placed at the focal axis of the cylindrical wave. After transmitting through the CGH, the reflected cylindrical wave is transformed back into a plane wave, which then interferes with the reference plane wave. Finally, the double-pass transmitted wavefront of the CGH, representing the effect of the CGH fabrication error in the experimental system, is obtained by analyzing the interferogram. The mathematical model of misalignment-aberration removal in the calibration system is described, and its feasibility is demonstrated with a simulation system established in Zemax. With the mathematical polynomial, most of the possible misalignment errors can be estimated by a least-squares fitting algorithm, and the double-pass transmitted wavefront of the CGH can then be obtained by subtracting the misalignment errors from the result extracted from the real experimental system. Compared with the standard double-pass transmitted wavefront provided by Diffraction International Ltd., which manufactured the CGH used in the experimental system, the result agrees well. We conclude that the proposed method is effective in calibrating the effect of the CGH error in the cylindrical interferometry system for the measurement of cylindricity error.

  20. Precision of a CAD/CAM-engineered surgical template based on a facebow for orthognathic surgery: an experiment with a rapid prototyping maxillary model.

    PubMed

    Lee, Jae-Won; Lim, Se-Ho; Kim, Moon-Key; Kang, Sang-Hoon

    2015-12-01

    We examined the precision of a computer-aided design/computer-aided manufacturing (CAD/CAM)-engineered, facebow-based surgical guide template (facebow wafer) by comparing it with a bite splint-type orthognathic CAD/CAM surgical guide template (bite wafer). We used 24 rapid prototyping (RP) models of the craniofacial skeleton with maxillary deformities; twelve RP models each were assigned to the facebow wafer group and the bite wafer group. Experimental maxillary orthognathic surgery was performed on the RP models of both groups, and errors were evaluated through comparisons with the surgical simulations. We measured the minimum distances from 3 planes of reference to determine the vertical, lateral, and anteroposterior errors at specific measurement points, and compared the measured errors between groups using a t test. There were significant intergroup differences in the lateral error when comparing the absolute values of the 3-D linear distance, as well as in the vertical, lateral, and anteroposterior errors. The bite wafer method exhibited little lateral error overall and little error in the anterior tooth region, whereas the facebow wafer method exhibited very little vertical error in the posterior molar region. The clinical precision of the facebow wafer method did not significantly exceed that of the bite wafer method. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Zero Thermal Noise in Resistors at Zero Temperature

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran

    2016-06-01

    The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory — asserting a temperature-independent quantum zero-point (ZP) contribution to Johnson noise, which dominates the quantum regime — is controversial and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist’s classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes at the approach of zero temperature, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen’s uncertainty relation argument.

  2. Time-dependent compressibility of poly (methyl methacrylate) (PMMA) : an experimental and molecular dynamics investigation

    NASA Astrophysics Data System (ADS)

    Sane, Sandeep Bhalchandra

    This thesis contains three chapters, which describe different aspects of an investigation of the bulk response of Poly(Methyl Methacrylate) (PMMA). The first chapter describes the physical measurements by means of a Belcher/McKinney-type apparatus. Used earlier for the measurement of the bulk response of Poly(Vinyl Acetate), it was now adapted for making measurements at higher temperatures commensurate with the glass transition temperature of PMMA. The dynamic bulk compliance of PMMA was measured at atmospheric pressure over a wide range of temperatures and frequencies, from which the master curves for the bulk compliance were generated by means of the time-temperature superposition principle. It was found that the extent of the transition ranges for the bulk and shear response was comparable. Comparison of the shift factors for bulk and shear responses supports the idea that different molecular mechanisms contribute to shear and bulk deformations. The second chapter delineates molecular dynamics computations for the bulk response over a range of pressures and temperatures. The model(s) consisted of 2256 atoms formed into three polymer chains with fifty monomer units per chain per unit cell. The time scales accessed were limited to tens of picoseconds. It was found that, in addition to the typical energy minimization and temperature annealing cycles for establishing equilibrium models, it is advantageous to subject the model samples to a cycle of relatively large pressures (GPa-range) for improving the equilibrium state. On comparing the computations with the experimentally determined "glassy" behavior, one finds that, although the computations were limited to small samples in a physical sense, the primary limitation rests in the very short times (picoseconds). The molecular dynamics computations do not model the physically observed temperature sensitivity of PMMA, even if one employs a hypothetical time-temperature shift to account for the large difference in time scales between experiment and computation. The values computed by the molecular dynamics method do agree with the values measured at the coldest temperature and at the highest frequency of one kilohertz. The third chapter draws on measurements of uniaxial, shear and Poisson response conducted previously in our laboratory. With the availability of four time- or frequency-dependent material functions for the same material, the process of interconversion between different material functions was investigated. Computed material functions were evaluated against the direct experimental measurements, and the limitations imposed on successful interconversion by the experimental errors in the underlying physical data were explored. Differences were observed that are larger than the experimental errors would suggest.

  3. Contextual Advantage for State Discrimination

    NASA Astrophysics Data System (ADS)

    Schmid, David; Spekkens, Robert W.

    2018-02-01

    Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.
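
    The quantum success probability that such noncontextuality inequalities bound is, for two states, given by the standard Helstrom formula. A minimal sketch (the states and priors below are hypothetical examples, not taken from the paper):

```python
import numpy as np

def helstrom_success(rho1, rho2, p1=0.5):
    """Optimal success probability for minimum-error discrimination of
    two density matrices with priors p1 and 1 - p1 (Helstrom bound):
    P = 1/2 * (1 + || p1*rho1 - (1-p1)*rho2 ||_1)."""
    gamma = p1 * rho1 - (1 - p1) * rho2
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh(gamma)))
    return 0.5 * (1.0 + trace_norm)

# Example: two nonorthogonal qubit pure states, |0> and |+>
ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho1 = np.outer(ket0, ket0)
rho2 = np.outer(ket_plus, ket_plus)
print(helstrom_success(rho1, rho2))   # ~0.854 for equal priors
```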

  4. Label consistent K-SVD: learning a discriminative dictionary for recognition.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2013-11-01

    A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called "discriminative sparse-code error" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. An incremental dictionary learning algorithm is also presented for situations with limited memory resources. The method yields dictionaries in which feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.
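
    The unified objective described above can be written down directly. A minimal sketch of evaluating it, plus the stacking construction that lets plain K-SVD optimize it; dimensions and weights are hypothetical, and this illustrates the formulation rather than the authors' implementation:

```python
import numpy as np

def lcksvd_objective(Y, D, X, Q, A, H, W, alpha=1.0, beta=1.0):
    """Unified LC-KSVD objective:
    ||Y - D X||^2            (reconstruction error)
    + alpha * ||Q - A X||^2  (discriminative sparse-code error)
    + beta  * ||H - W X||^2  (classification error).
    Y: signals, D: dictionary, X: sparse codes, Q: ideal
    discriminative codes, H: one-hot class labels, W: classifier."""
    rec = np.linalg.norm(Y - D @ X) ** 2
    disc = alpha * np.linalg.norm(Q - A @ X) ** 2
    clf = beta * np.linalg.norm(H - W @ X) ** 2
    return rec + disc + clf

def stacked_signals(Y, Q, H, alpha, beta):
    """K-SVD can solve the joint problem on the vertically stacked
    system [Y; sqrt(alpha)*Q; sqrt(beta)*H] with stacked dictionary
    [D; sqrt(alpha)*A; sqrt(beta)*W], reducing LC-KSVD to K-SVD."""
    return np.vstack([Y, np.sqrt(alpha) * Q, np.sqrt(beta) * H])
```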

  5. A highly accurate ab initio potential energy surface for methane.

    PubMed

    Owens, Alec; Yurchenko, Sergei N; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-14

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels, with the four fundamentals of ¹²CH₄ reproduced with a root-mean-square error of 0.70 cm⁻¹. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.

  6. Set membership experimental design for biological systems.

    PubMed

    Marvel, Skylar W; Williams, Cranos M

    2012-03-21

    Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.
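
    The core design loop, stripped to a toy model, amounts to bounding the model output over the current parameter box at each candidate time point and preferring points whose predicted measurement band is most informative. A deliberately naive sketch for a one-parameter exponential decay; the authors use full interval-analysis machinery, so everything below is an illustrative stand-in:

```python
import numpy as np

def predicted_interval(t, k_lo, k_hi, noise=0.02):
    """Bound the model output y = exp(-k*t) over the parameter box
    [k_lo, k_hi] (monotone in k), inflated by the error bound."""
    lo, hi = np.exp(-k_hi * t), np.exp(-k_lo * t)
    return lo - noise, hi + noise

def rank_candidates(times, k_lo=0.5, k_hi=1.5):
    """Score candidate time points by the width of the predicted
    measurement interval: where outputs over the box differ most, a
    bounded-error measurement discriminates parameter values best."""
    widths = []
    for t in times:
        lo, hi = predicted_interval(t, k_lo, k_hi)
        widths.append(hi - lo)
    order = np.argsort(widths)[::-1]
    return [(times[i], round(widths[i], 3)) for i in order]

print(rank_candidates([0.25, 0.5, 1.0, 2.0, 4.0]))  # t = 1.0 wins here
```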

  7. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models. PMID:22436240

  8. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using quantized sum-product (SP) algorithm. Therefore, LDPC code may serve as the inner code in a concatenated coding system with a high code rate outer code and thus an ultra low error floor can be achieved. This conclusion is also verified by the experimental results.

  9. The effectiveness of the error reporting promoting program on the nursing error incidence rate in Korean operating rooms.

    PubMed

    Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung

    2007-03-01

    The purpose of this study was to develop and evaluate an error reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze the differences in pre- and post-intervention incidence rates of nursing errors between the two groups. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% to 15.7% compared with the pre-test score. By domain, the incidence rate decreased significantly in three domains ("compliance with aseptic technique", "management of documents", and "environmental management") in the experimental group, while it also decreased in the control group, which used the ordinary error-reporting method. An error-reporting system makes it possible to share errors and learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, this program should be applied together with risk-management efforts across the whole health care system.

  10. Evaluation of solvation free energies for small molecules with the AMOEBA polarizable force field

    PubMed Central

    Mohamed, Noor Asidah; Bradshaw, Richard T.

    2016-01-01

    The effects of electronic polarization in biomolecular interactions will differ depending on the local dielectric constant of the environment, such as in solvent, DNA, proteins, and membranes. Here the performance of the AMOEBA polarizable force field is evaluated under nonaqueous conditions by calculating the solvation free energies of small molecules in four common organic solvents. Results are compared with experimental data and equivalent simulations performed with the GAFF pairwise‐additive force field. Although AMOEBA results give mean errors close to “chemical accuracy,” GAFF performs surprisingly well, with statistically significantly more accurate results than AMOEBA in some solvents. However, for both models, free energies calculated in chloroform show the worst agreement with experiment, and individual solutes are consistently poor performers, suggesting that non‐potential‐specific errors also contribute to inaccuracy. Scope for the improvement of both potentials remains limited by the lack of high-quality experimental data across multiple solvents, particularly those of high dielectric constant. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:27757978

  11. Comparison of two surface temperature measurement methods using thermocouples and an infrared camera

    NASA Astrophysics Data System (ADS)

    Michalski, Dariusz; Strąk, Kinga; Piasecka, Magdalena

    This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were: the contact method, which involved mounting thermocouples at several points in one minichannel, and the contactless method used to study the other minichannel, where the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, taking into account the method error, the method accuracy, and the experimental error. The comparative analysis showed that, although the values and distributions of the surface temperatures obtained with the two methods were similar, both methods had certain limitations.

  12. Study of SPM tolerances of electronically compensated DML based systems.

    PubMed

    Papagiannakis, I; Klonidis, D; Birbas, Alexios N; Kikidis, J; Tomkos, I

    2009-05-25

    This paper experimentally investigates the effectiveness of electronic dispersion compensation (EDC) for signals limited by self-phase modulation (SPM) and various dispersion levels. The sources considered are low-cost conventional directly modulated lasers (DMLs), fabricated for operation at 2.5 Gb/s but modulated at 10 Gb/s. Performance improvement is achieved by means of electronic feed-forward and decision-feedback equalization (FFE/DFE) at the receiver end. Experimental studies consider both transient and adiabatic chirp dominated DML sources. The improvement is evaluated in terms of required optical signal-to-noise ratio (ROSNR) for bit-error-rate (BER) values of 10⁻³ versus launch power over uncompensated links of standard single mode fiber (SSMF).

  13. Quantification of the Uncertainties for the Ares I A106 Ascent Aerodynamic Database

    NASA Technical Reports Server (NTRS)

    Houlden, Heather P.; Favaregh, Amber L.

    2010-01-01

    A detailed description of the quantification of uncertainties for the Ares I ascent aero 6-DOF wind tunnel database is presented. The database was constructed from wind tunnel test data and CFD results. The experimental data came from tests conducted in the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The major sources of error for this database were: experimental error (repeatability), database modeling errors, and database interpolation errors.
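
    When such error sources can be treated as independent, a standard way to roll them into a single database uncertainty is root-sum-square combination. A minimal sketch with illustrative values; the report's actual combination procedure may differ:

```python
import numpy as np

def rss_uncertainty(*components):
    """Combine independent 1-sigma error components in quadrature."""
    return float(np.sqrt(sum(c ** 2 for c in components)))

# Hypothetical aerodynamic-coefficient uncertainties (illustrative values)
u_repeat = 0.004   # experimental repeatability
u_model = 0.006    # database modeling error
u_interp = 0.002   # database interpolation error
print(rss_uncertainty(u_repeat, u_model, u_interp))  # ~0.0075
```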

  14. The Propagation of Errors in Experimental Data Analysis: A Comparison of Pre- and Post-Test Designs

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2013-01-01

    Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre- and post-test, and the post-test only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
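
    The propagation argument at stake can be stated in one line of standard error theory: independent measurement errors in the pre- and post-test add in the variance of the gain score,

```latex
% Gain score D = Y_{post} - Y_{pre} with independent measurement
% errors e_{pre}, e_{post}:
\operatorname{Var}(D) = \operatorname{Var}(e_{\mathrm{pre}})
  + \operatorname{Var}(e_{\mathrm{post}}),
\qquad
\mathrm{SE}(D) = \sqrt{\mathrm{SE}_{\mathrm{pre}}^{2}
  + \mathrm{SE}_{\mathrm{post}}^{2}},
% so the gain score is noisier than either single measurement,
% which is the cost the pre/post design pays for its extra control.
```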

  15. Experimental Demonstration of Fault-Tolerant State Preparation with Superconducting Qubits.

    PubMed

    Takita, Maika; Cross, Andrew W; Córcoles, A D; Chow, Jerry M; Gambetta, Jay M

    2017-11-03

    Robust quantum computation requires encoding delicate quantum information into degrees of freedom that are hard for the environment to change. Quantum encodings have been demonstrated in many physical systems by observing and correcting storage errors, but applications require not just storing information; we must accurately compute even with faulty operations. The theory of fault-tolerant quantum computing illuminates a way forward by providing a foundation and collection of techniques for limiting the spread of errors. Here we implement one of the smallest quantum codes in a five-qubit superconducting transmon device and demonstrate fault-tolerant state preparation. We characterize the resulting code words through quantum process tomography and study the free evolution of the logical observables. Our results are consistent with fault-tolerant state preparation in a protected qubit subspace.

  16. Criteria for the use of regression analysis for remote sensing of sediment and pollutants

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R.

    1982-01-01

    An examination of limitations, requirements, and precision of the linear multiple-regression technique for quantification of marine environmental parameters is conducted. Both environmental and optical physics conditions have been defined for which an exact solution to the signal response equations is of the same form as the multiple regression equation. Various statistical parameters are examined to define a criteria for selection of an unbiased fit when upwelled radiance values contain error and are correlated with each other. Field experimental data are examined to define data smoothing requirements in order to satisfy the criteria of Daniel and Wood (1971). Recommendations are made concerning improved selection of ground-truth locations to maximize variance and to minimize physical errors associated with the remote sensing experiment.
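
    The correlated-regressor problem flagged here is routinely screened with variance inflation factors. A minimal sketch with hypothetical radiance data; note the paper's own selection criteria follow Daniel and Wood rather than VIF:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the regressor
    matrix X: VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from
    regressing column j on the remaining columns."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

# Hypothetical correlated upwelled-radiance bands
rng = np.random.default_rng(0)
band1 = rng.normal(size=200)
band2 = 0.9 * band1 + 0.1 * rng.normal(size=200)   # strongly correlated
print(vif(np.column_stack([band1, band2])))        # both VIFs >> 1
```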

  17. Multistage classification of multispectral Earth observational data: The design approach

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Muasher, M. J.; Landgrebe, D. A.

    1981-01-01

    An algorithm is proposed which predicts the optimal features at every node in a binary tree procedure. The algorithm estimates the probability of error by approximating the area under the likelihood ratio function for two classes and taking into account the number of training samples used in estimating each of these two classes. Some results on feature selection techniques, particularly in the presence of a very limited set of training samples, are presented. Results comparing probabilities of error predicted by the proposed algorithm as a function of dimensionality as compared to experimental observations are shown for aircraft and LANDSAT data. Results are obtained for both real and simulated data. Finally, two binary tree examples which use the algorithm are presented to illustrate the usefulness of the procedure.
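
    For flavor, a node-level two-class error estimate of the kind described can be illustrated with the standard Bhattacharyya bound for Gaussian classes; this is an illustrative stand-in, not the paper's likelihood-ratio-area algorithm:

```python
import numpy as np

def bhattacharyya_error_bound(m1, S1, m2, S2, p1=0.5):
    """Upper bound on the two-class Bayes error for Gaussian classes:
    P_e <= sqrt(p1 * p2) * exp(-B), with Bhattacharyya distance B."""
    Sm = 0.5 * (S1 + S2)
    d = m2 - m1
    B = 0.125 * d @ np.linalg.solve(Sm, d) + 0.5 * np.log(
        np.linalg.det(Sm) / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return np.sqrt(p1 * (1 - p1)) * np.exp(-B)

# Hypothetical two-feature classes, unit covariance, means 2 units apart
m1, m2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
S = np.eye(2)
print(bhattacharyya_error_bound(m1, S, m2, S))  # ~0.30 upper bound
```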

  18. Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin

    2018-04-01

    ZY-3, launched in 2012, is the first civilian high-resolution stereo mapping satellite of China. This paper analyzed the positioning errors of ZY-3 satellite imagery and compensated them for geo-positioning accuracy improvement using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results revealed that there are systematic errors in the ZY-3 attitude observations and that the positioning accuracy can be improved after attitude correction with the aid of ground control. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced steadier improvement than the linear correction model when only limited ground control points are available for a single scene.

  19. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  20. Experimental study of an adaptive CFRC reflector for high order wave-front error correction

    NASA Astrophysics Data System (ADS)

    Lan, Lan; Fang, Houfei; Wu, Ke; Jiang, Shuidong; Zhou, Yang

    2018-03-01

    Recent radio frequency communication system developments are generating the need for lightweight, high-precision space antennas. Carbon fiber reinforced composite (CFRC) materials have been used to manufacture high-precision reflectors, but wave-front errors caused by fabrication and on-orbit distortion are inevitable, and the adaptive CFRC reflector has received much attention for wave-front error correction. Due to the uneven stress distribution introduced by actuation forces and fabrication, high-order wave-front errors such as print-through error are found on the reflector surface. However, an adaptive CFRC reflector with PZT actuators has essentially no control authority over these high-order wave-front errors. A new design architecture that adds secondary ribs at the weak triangular surfaces is presented in this paper. A virtual experimental study of the new adaptive CFRC reflector was conducted, and the controllability of the original adaptive CFRC reflector and of the new reflector with secondary ribs was investigated. The virtual experimental investigation shows that the new adaptive CFRC reflector is feasible and efficient in diminishing the high-order wave-front errors.

  1. High Resolution Melting (HRM) for High-Throughput Genotyping-Limitations and Caveats in Practical Case Studies.

    PubMed

    Słomka, Marcin; Sobalska-Kwapis, Marta; Wachulec, Monika; Bartosz, Grzegorz; Strapagiel, Dominik

    2017-11-03

    High resolution melting (HRM) is a convenient method for gene scanning as well as genotyping of individual and multiple single nucleotide polymorphisms (SNPs). This rapid, simple, closed-tube, homogenous, and cost-efficient approach has the capacity for high specificity and sensitivity, while allowing easy transition to high-throughput scale. In this paper, we provide examples from our laboratory practice of some problematic issues which can affect the performance and data analysis of HRM results, especially with regard to reference curve-based targeted genotyping. We present those examples in order of the typical experimental workflow, and discuss the crucial significance of the respective experimental errors and limitations for the quality and analysis of results. The experimental details which have a decisive impact on correct execution of a HRM genotyping experiment include type and quality of DNA source material, reproducibility of isolation method and template DNA preparation, primer and amplicon design, automation-derived preparation and pipetting inconsistencies, as well as physical limitations in melting curve distinction for alternative variants and careful selection of samples for validation by sequencing. We provide a case-by-case analysis and discussion of actual problems we encountered and solutions that should be taken into account by researchers newly attempting HRM genotyping, especially in a high-throughput setup.

  2. High Resolution Melting (HRM) for High-Throughput Genotyping—Limitations and Caveats in Practical Case Studies

    PubMed Central

    Słomka, Marcin; Sobalska-Kwapis, Marta; Wachulec, Monika; Bartosz, Grzegorz

    2017-01-01

    High resolution melting (HRM) is a convenient method for gene scanning as well as genotyping of individual and multiple single nucleotide polymorphisms (SNPs). This rapid, simple, closed-tube, homogenous, and cost-efficient approach has the capacity for high specificity and sensitivity, while allowing easy transition to high-throughput scale. In this paper, we provide examples from our laboratory practice of some problematic issues which can affect the performance and data analysis of HRM results, especially with regard to reference curve-based targeted genotyping. We present those examples in order of the typical experimental workflow, and discuss the crucial significance of the respective experimental errors and limitations for the quality and analysis of results. The experimental details which have a decisive impact on correct execution of a HRM genotyping experiment include type and quality of DNA source material, reproducibility of isolation method and template DNA preparation, primer and amplicon design, automation-derived preparation and pipetting inconsistencies, as well as physical limitations in melting curve distinction for alternative variants and careful selection of samples for validation by sequencing. We provide a case-by-case analysis and discussion of actual problems we encountered and solutions that should be taken into account by researchers newly attempting HRM genotyping, especially in a high-throughput setup. PMID:29099791

  3. Reflotron cholesterol measurement in general practice: accuracy and detection of errors.

    PubMed

    Ball, M J; Robertson, I K; Woods, M

    1994-11-01

    Comparison of cholesterol determinations by nurses using a Reflotron analyser in a general practice setting showed a good correlation with plasma cholesterol determinations by wet chemistry in a clinical biochemistry laboratory. A limited number of comparisons did, however, give a much lower result on the Reflotron. In an experimental situation, small sample volumes (which could result from poor technique) were shown to produce falsely low readings. A simple method which may immediately detect falsely low Reflotron readings is discussed.

  4. Translation position determination in ptychographic coherent diffraction imaging.

    PubMed

    Zhang, Fucai; Peterson, Isaac; Vila-Comamala, Joan; Diaz, Ana; Berenguer, Felisa; Bean, Richard; Chen, Bo; Menzel, Andreas; Robinson, Ian K; Rodenburg, John M

    2013-06-03

    Accurate knowledge of translation positions is essential in ptychography to achieve good image quality and diffraction-limited resolution. We propose a method to retrieve and correct position errors during the image reconstruction iterations. Sub-pixel position accuracy after refinement is shown to be achievable within several tens of iterations. Simulation and experimental results for both optical and X-ray wavelengths are given. The method both improves the quality of the retrieved object image and relaxes the position accuracy requirement when acquiring the diffraction patterns.
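
    A common building block for this kind of sub-pixel position refinement is upsampled phase cross-correlation between an object estimate and its displaced counterpart. A minimal sketch using scikit-image; this illustrates sub-pixel shift estimation generally, not necessarily the authors' refinement scheme:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

# Hypothetical object estimate and a copy displaced by a sub-pixel amount
rng = np.random.default_rng(1)
obj = rng.random((128, 128))
true_shift = (0.37, -1.24)
moved = nd_shift(obj, true_shift, order=3, mode="wrap")

# Recover the displacement to ~1/100 pixel via upsampled DFT registration;
# the returned vector registers `moved` back onto `obj`, hence the sign flip
est_shift, error, _ = phase_cross_correlation(obj, moved,
                                              upsample_factor=100)
print(true_shift, tuple(-est_shift))
```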

  5. Reactions of Fe+ and FeO+ with C2H2, C2H4, and C2H6: Temperature-Dependent Kinetics

    DTIC Science & Technology

    2013-09-12

    studies to lead to the development of efficient quantum chemical calculation methods by offering benchmarks for testing and refinement. Due to the...EXPERIMENTAL METHODS All measurements were performed on the Air Force Research Laboratory’s variable temperature selected ion flow tube (VT- SIFT) instrument...correct within error, indicating that they are in the low-pressure limit,52,53 and the termolecular rate constant is obtained from the slope. In contrast

  6. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.

  7. Study of an instrument for sensing errors in a telescope wavefront

    NASA Technical Reports Server (NTRS)

    Golden, L. J.; Shack, R. V.; Slater, D. N.

    1973-01-01

    Partial results of theoretical and experimental investigations of different focal plane sensor configurations for determining the error in a telescope wavefront are presented. Both the coarse-range and fine-range sensors are used in the experimentation. The design of a wavefront error simulator is presented, along with the Hartmann test, the shearing polarization interferometer, the Zernike test, and the Zernike polarization test.

  8. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
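
    The differential-amplification adjustment singled out above is typically a one-line rescaling: estimate the amplification ratio k of the two alleles from heterozygotes (whose true ratio is 1:1) and divide one allele's pooled signal by k. A minimal sketch with illustrative numbers:

```python
import numpy as np

def amplification_factor(het_a_signals, het_b_signals):
    """Estimate k, the relative amplification of allele A vs B, from
    heterozygous individuals: their true allele ratio is 1:1, so any
    deviation of signal_A / signal_B from 1 reflects differential
    amplification rather than genotype."""
    return float(np.mean(np.asarray(het_a_signals)
                         / np.asarray(het_b_signals)))

def corrected_pool_frequency(pool_a, pool_b, k):
    """Pooled allele-A frequency adjusted for differential
    amplification: rescale A's signal by 1/k before normalizing."""
    a = pool_a / k
    return a / (a + pool_b)

# Illustrative numbers: allele A amplifies ~20% more efficiently
k = amplification_factor([1.20, 1.18, 1.23], [1.0, 1.0, 1.0])
print(corrected_pool_frequency(pool_a=0.60, pool_b=0.40, k=k))  # ~0.555
```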

  9. Experimental demonstration of laser tomographic adaptive optics on a 30-meter telescope at 800 nm

    NASA Astrophysics Data System (ADS)

    Ammons, S. Mark; Johnson, Luke; Kupke, Renate; Gavel, Donald T.; Max, Claire E.

    2010-07-01

    A critical goal in the next decade is to develop techniques that will extend Adaptive Optics correction to visible wavelengths on Extremely Large Telescopes (ELTs). We demonstrate in the laboratory the highly accurate atmospheric tomography necessary to defeat the cone effect on ELTs, an essential milestone on the path to this capability. We simulate a high-order Laser Tomographic AO System for a 30-meter telescope with the LTAO/MOAO testbed at UCSC. Eight Sodium Laser Guide Stars (LGSs) are sensed by 99x99 Shack-Hartmann wavefront sensors over 75". The AO system is diffraction-limited at a science wavelength of 800 nm (S ~ 6-9%) over a field of regard of 20" diameter. Open-loop WFS systematic error is observed to be proportional to the total input atmospheric disturbance and is nearly the dominant error budget term (81 nm RMS), exceeded only by tomographic wavefront estimation error (92 nm RMS). The total residual wavefront error for this experiment is comparable to that expected for wide-field tomographic adaptive optics systems of similar wavefront sensor order and LGS constellation geometry planned for Extremely Large Telescopes.

  10. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies

    PubMed Central

    Zeleke, Berihun M.; Abramson, Michael J.; Benke, Geza

    2018-01-01

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship. PMID:29587425
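
    The suggested Monte Carlo quantification of epistemic error can be sketched as sampling plausible systematic and gross error terms around each recorded exposure; every distribution below is a hypothetical placeholder, not an estimate from the cohort studies:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_total_error(recorded_hours, n_draws=10_000):
    """Propagate hypothetical epistemic error sources (recall bias,
    transcription error) together with random variability into a
    distribution of plausible 'true' weekly phone-use hours."""
    recall_bias = rng.normal(loc=-0.5, scale=0.8, size=n_draws)  # systematic
    transcription = rng.choice([0.0, 1.0, -1.0],                 # gross errors
                               p=[0.98, 0.01, 0.01], size=n_draws)
    random_var = rng.normal(loc=0.0, scale=0.3, size=n_draws)
    draws = recorded_hours + recall_bias + transcription + random_var
    return np.percentile(draws, [2.5, 50, 97.5])

print(simulate_total_error(recorded_hours=5.0))  # total-error interval
```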

  11. Enhanced storage capacity with errors in scale-free Hopfield neural networks: An analytical study.

    PubMed

    Kim, Do-Hyun; Park, Jinha; Kahng, Byungnam

    2017-01-01

    The Hopfield model is a pioneering neural network model with associative memory retrieval. The analytical solution of the model in the mean-field limit revealed that memories can be retrieved without any error up to a finite storage capacity of O(N), where N is the system size. Beyond the threshold, they are completely lost. Since the introduction of the Hopfield model, the theory of neural networks has been further developed toward realistic neural networks using analog neurons, spiking neurons, etc. Nevertheless, those advances are based on fully connected networks, which are inconsistent with the recent experimental discovery that the number of connections of each neuron seems to be heterogeneous, following a heavy-tailed distribution. Motivated by this observation, we consider the Hopfield model on scale-free networks and obtain a different pattern of associative memory retrieval from that obtained on the fully connected network: the storage capacity becomes tremendously enhanced but with some error in the memory retrieval, which appears as the heterogeneity of the connections is increased. Moreover, the error rates are also obtained on several real neural networks and are indeed similar to those on scale-free model networks.
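
    A minimal fully connected Hopfield retrieval loop makes the baseline behavior concrete; below the capacity threshold (about 0.138 N patterns) recall is essentially error-free. The scale-free variant studied in the paper would replace the dense Hebbian matrix with one masked by a heavy-tailed adjacency structure:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 10                      # neurons, patterns (P << 0.138 * N)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weights with zero self-coupling
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

# Start from a corrupted version of pattern 0 and iterate sign updates
state = patterns[0] * np.where(rng.random(N) < 0.1, -1, 1)  # 10% flipped
for _ in range(10):
    state = np.sign(W @ state)

overlap = (state @ patterns[0]) / N
print(f"retrieval overlap: {overlap:.3f}")  # ~1.0 => error-free recall
```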

  12. An Experimental Study of Medical Error Explanations: Do Apology, Empathy, Corrective Action, and Compensation Alter Intentions and Attitudes?

    PubMed

    Nazione, Samantha; Pace, Kristin

    2015-01-01

    Medical malpractice lawsuits are a growing problem in the United States, and there is much controversy regarding how to best address this problem. The medical error disclosure framework suggests that apologizing, expressing empathy, engaging in corrective action, and offering compensation after a medical error may improve the provider-patient relationship and ultimately help reduce the number of medical malpractice lawsuits patients bring to medical providers. This study provides an experimental examination of the medical error disclosure framework and its effect on amount of money requested in a lawsuit, negative intentions, attitudes, and anger toward the provider after a medical error. Results suggest empathy may play a large role in providing positive outcomes after a medical error.

  13. Limitations Of The Current State Space Modelling Approach In Multistage Machining Processes Due To Operation Variations

    NASA Astrophysics Data System (ADS)

    Abellán-Nebot, J. V.; Liu, J.; Romero, F.

    2009-11-01

    The State Space modelling approach has recently been proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, and cutting-tool deflections. This paper shows the limitations of the current State Space model through an experimental case study in which the effects of spindle thermal expansion, cutting-tool flank wear, and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
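
    In the MMP literature the State Space model referred to here is typically a stage-indexed linear model; schematically (notation assumed, not copied from this paper):

```latex
% x_k : part deviation state after stage k; u_k : fixture/datum
% deviations introduced at stage k; w_k : unmodeled disturbances
% (the term into which thermal distortion, tool wear, and deflection
% are currently lumped); y_k : inspection measurements.
x_k = A_{k-1}\, x_{k-1} + B_k\, u_k + w_k, \qquad k = 1, \dots, N,
\qquad
y_k = C_k\, x_k + v_k .
```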

  14. Development of a Hard X-ray Beam Position Monitor for Insertion Device Beams at the APS

    NASA Astrophysics Data System (ADS)

    Decker, Glenn; Rosenbaum, Gerd; Singh, Om

    2006-11-01

    Long-term pointing stability requirements at the Advanced Photon Source (APS) are very stringent, at the level of 500 nanoradians peak-to-peak or better over a one-week time frame. Conventional rf beam position monitors (BPMs) close to the insertion device source points are incapable of assuring this level of stability, owing to mechanical, thermal, and electronic stability limitations. Insertion device gap-dependent systematic errors associated with the present ultraviolet photon beam position monitors similarly limit their ability to control long-term pointing stability. We report on the development of a new BPM design sensitive only to hard x-rays. Early experimental results will be presented.

  15. Multi-interface level in oil tanks and applications of optical fiber sensors

    NASA Astrophysics Data System (ADS)

    Leal-Junior, Arnaldo G.; Marques, Carlos; Frizera, Anselmo; Pontes, Maria José

    2018-01-01

    Oil production also involves the production of water, gas, and suspended solids, which are separated from the oil in three-phase separators. However, the control strategies for an oil separator are limited by the unavailability of suitable multi-interface level sensors. This paper presents a description of the multi-phase level problem in the oil industry and a review of the current technologies for multi-interface level assessment. Since optical fiber sensors offer chemical stability, intrinsic safety, electromagnetic immunity, light weight, and multiplexing capabilities, they can be an alternative for multi-interface level measurement that overcomes some of the limitations of the current technologies. For this reason, a fiber Bragg grating (FBG) based optical fiber sensor system for multi-interface level assessment is proposed, simulated, and experimentally assessed. The results show that the proposed sensor system is capable of measuring the interface level with a relative error of only 2.38%. Furthermore, the proposed sensor system is also capable of measuring the oil density with an error of 0.8 kg/m³.

  16. Integrated fiber optical receiver reducing the gap to the quantum limit.

    PubMed

    Zimmermann, Horst; Steindl, Bernhard; Hofbauer, Michael; Enne, Reinhard

    2017-06-01

    Experimental results of a single-photon avalanche diode (SPAD) based optical fiber receiver integrated in 0.35 µm PIN-photodiode CMOS technology are presented. To cope with the parasitic effects of SPADs, an array of four receivers is implemented. The SPADs consist of a multiplication zone and a separate thick absorption zone to achieve a high photon detection probability (PDP). In addition, cascoded quenchers allow the use of a quenching voltage of twice the usual supply voltage, i.e. 6.6 V instead of 3.3 V, in order to increase the PDP further. Measurements result in sensitivities of -55.7 dBm at a data rate of 50 Mbit/s and -51.6 dBm at 100 Mbit/s for a wavelength of 635 nm and a bit-error ratio of 2 × 10⁻³, which is sufficient to perform error correction. These sensitivities are better than those of linear-mode APD receivers integrated in the same CMOS technology. These results are a major advance towards direct detection optical receivers working close to the quantum limit.

  17. Predicting crystalline lens fall caused by accommodation from changes in wavefront error

    PubMed Central

    He, Lin; Applegate, Raymond A.

    2011-01-01

    PURPOSE To illustrate and develop a method for estimating crystalline lens decentration as a function of accommodative response using changes in wavefront error, and to show the method and its limitations using previously published data (2004) from 2 iridectomized monkey eyes so that clinicians understand how spherical aberration can induce coma, in particular in intraocular lens surgery. SETTINGS College of Optometry, University of Houston, Houston, USA. DESIGN Evaluation of diagnostic test or technology. METHODS Lens decentration was estimated by displacing downward the wavefront error of the lens with respect to the limiting aperture (7.0 mm) and the ocular first-surface wavefront error for each accommodative response (0.00 to 11.00 diopters) until measured values of vertical coma matched previously published experimental data (2007). Lens decentration was also calculated using an approximation formula that only included spherical aberration and vertical coma. RESULTS The change in calculated vertical coma was consistent with downward lens decentration. Calculated downward lens decentration peaked at approximately 0.48 mm of vertical decentration in the right eye and approximately 0.31 mm in the left eye using all Zernike modes through the 7th radial order. Calculated lens decentration using only the coma and spherical aberration formulas peaked at approximately 0.45 mm in the right eye and approximately 0.23 mm in the left eye. CONCLUSIONS Lens fall as a function of accommodation was quantified noninvasively using changes in vertical coma driven principally by the accommodation-induced changes in spherical aberration. The newly developed method was valid for a large pupil only. PMID:21700108

  18. Water Bodies Extraction from Landsat-8 OLI Imagery Using a Water Index-Guided Stochastic Fully-Connected Conditional Random Field Model and the Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Wang, X.; Xu, L.

    2018-04-01

    One of the most important applications of remote sensing classification is water extraction. The water index (WI) based on Landsat images is one of the most common ways to distinguish water bodies from other land surface features. But conventional WI methods take into account spectral information only from a limited number of bands, so their accuracy may be constrained in areas covered with snow/ice, clouds, etc. An accurate and robust water extraction method is therefore the key issue addressed in this study. The support vector machine (SVM), which uses the spectral information of all bands, can reduce these classification errors to some extent. Nevertheless, the SVM, which barely considers spatial information, is relatively sensitive to noise in local regions. The conditional random field (CRF), which considers both spatial and spectral information, has proven able to compensate for these limitations. Hence, in this paper, we develop a systematic water extraction method by taking advantage of the complementarity between the SVM and a water index-guided stochastic fully-connected conditional random field (SVM-WIGSFCRF) to address the above issues. In addition, we comprehensively evaluate the reliability and accuracy of the proposed method using Landsat-8 operational land imager (OLI) images of one test site. We assess the method's performance by calculating the following accuracy metrics: omission errors (OE), commission errors (CE), the Kappa coefficient (KP), and total error (TE). Experimental results show that the new method can improve target detection accuracy in complex and changeable environments.
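
    The conventional WI baseline that such methods improve upon is typically McFeeters' NDWI from the green and near-infrared bands (OLI bands 3 and 5). A minimal sketch, with the threshold an illustrative default:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); water tends
    toward positive values because it absorbs strongly in the NIR."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / (green + nir + 1e-12)

def water_mask(green, nir, threshold=0.0):
    """Binary water mask; the 0.0 threshold is illustrative and is
    exactly the kind of brittle per-scene parameter the CRF/SVM
    combination above is meant to compensate for."""
    return ndwi(green, nir) > threshold

# Usage with hypothetical OLI surface-reflectance arrays
b3 = np.array([[0.12, 0.05], [0.10, 0.04]])   # band 3, green
b5 = np.array([[0.04, 0.30], [0.05, 0.35]])   # band 5, NIR
print(water_mask(b3, b5))   # True where NDWI indicates water
```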

  19. Study of Systems Using Inertia Wheels for Precise Attitude Control of a Satellite

    NASA Technical Reports Server (NTRS)

    White, John S.; Hansen, Q. Marion

    1961-01-01

    Systems using inertia wheels are evaluated in this report to determine their suitability for precise attitude control of a satellite and to select superior system configurations. Various possible inertia wheel system configurations are first discussed in a general manner. Three of these systems which appear more promising than the others are analyzed in detail, using the Orbiting Astronomical Observatory as an example. The three systems differ from each other only by the method of damping, which is provided by either a rate gyro, an error-rate network, or a tachometer in series with a high-pass filter. An analytical investigation which consists of a generalized linear analysis, a nonlinear analysis using the switching-time method, and an analog computer study shows that all three systems are theoretically capable of producing adequate response and also of maintaining the required pointing accuracy for the Orbiting Astronomical Observatory of plus or minus 0.1 second of arc. Practical considerations and an experimental investigation show, however, that the system which uses an error-rate network to provide damping is superior to the other two systems. The system which uses a rate gyro is shown to be inferior because the threshold level causes a significant amount of limit-cycle operation, and the system which uses a tachometer with a filter is shown to be inferior because a device with the required dynamic range of operation does not appear to be available. The experimental laboratory apparatus used to investigate the dynamic performance of the systems is described, and experimental results are included to show that under laboratory conditions with relatively large extraneous disturbances, a dynamic tracking error of less than plus or minus 0.5 second of arc was obtained.

  20. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL... § 1605.22 Claims for correction of Board or TSP record keeper errors; time limitations. (a) Filing claims... after that time, the Board or TSP record keeper may use its sound discretion in deciding whether to...

  1. Gigabit free-space multi-level signal transmission with a mid-infrared quantum cascade laser operating at room temperature.

    PubMed

    Pang, Xiaodan; Ozolins, Oskars; Schatz, Richard; Storck, Joakim; Udalcovs, Aleksejs; Navarro, Jaime Rodrigo; Kakkar, Aditya; Maisons, Gregory; Carras, Mathieu; Jacobsen, Gunnar; Popov, Sergei; Lourdudoss, Sebastian

    2017-09-15

    Gigabit free-space transmissions are experimentally demonstrated with a quantum cascade laser (QCL) emitting at a mid-wavelength infrared of 4.65 μm and a commercial infrared photovoltaic detector. The QCL operating at room temperature is directly modulated using on-off keying and, for the first time, to the best of our knowledge, four- and eight-level pulse amplitude modulations (PAM-4, PAM-8). By applying pre- and post-digital equalization, we achieve up to 3 Gbit/s line data rate in all three modulation configurations with a bit error rate performance below the 7% overhead hard decision forward error correction limit of 3.8 × 10⁻³. The proposed transmission link also shows stable operational performance in the lab environment.

  2. A statistical analysis of RNA folding algorithms through thermodynamic parameter perturbation.

    PubMed

    Layton, D M; Bundschuh, R

    2005-01-01

    Computational RNA secondary structure prediction is rather well established. However, such prediction algorithms always depend on a large number of experimentally measured parameters. Here, we study how sensitive structure prediction algorithms are to changes in these parameters. We find that for changes corresponding to the actual experimental error to which these parameters have been determined, 30% of the structure is falsely predicted, whereas the ground state structure is preserved under parameter perturbation in only 5% of all cases. We establish that base-pairing probabilities calculated in a thermal ensemble are a viable, although not perfect, measure of the reliability of the prediction of individual structure elements. Finally, a new measure of stability using parameter perturbation is proposed, and its limitations are discussed.

  3. Experimental verification of long-term evolution radio transmissions over dual-polarization combined fiber and free-space optics optical infrastructures.

    PubMed

    Bohata, J; Zvanovec, S; Pesek, P; Korinek, T; Mansour Abadi, M; Ghassemlooy, Z

    2016-03-10

    This paper describes the experimental verification of the utilization of long-term evolution radio over fiber (RoF) and radio over free space optics (RoFSO) systems using dual-polarization signals for cloud radio access network applications, determining the specific utilization limits. A number of free space optics configurations are proposed and investigated under different atmospheric turbulence regimes in order to recommend the best setup configuration. We show that the performance of the proposed link, based on the combination of RoF and RoFSO for 64 QAM at 2.6 GHz, is more strongly affected by turbulence, with a measured error vector magnitude difference of 5.5%. It is further demonstrated that the proposed systems can offer higher noise immunity under particular scenarios, with a signal-to-noise ratio reliability limit of 5 dB in the radio frequency domain for RoF and 19.3 dB in the optical domain for the combination of RoF and RoFSO links.

  4. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    PubMed

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.

  5. A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding

    NASA Astrophysics Data System (ADS)

    Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui

    2016-02-01

    In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of welding currents up to 100 kA is required, for which Rogowski coils are at present the only viable current transducers. A highly accurate analogue integrator is thus the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed in order to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and that this effect is robust to different voltage levels of the output signals. The total integration error is limited to within ±0.09 mV s⁻¹ (0.005% s⁻¹ of full scale) at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of 0.1% accuracy class.

  6. The Influence of Guided Error-Based Learning on Motor Skills Self-Efficacy and Achievement.

    PubMed

    Chien, Kuei-Pin; Chen, Sufen

    2018-01-01

    The authors investigated the role of errors in motor skills teaching, specifically the influence of errors on skills self-efficacy and achievement. The participants were 75 undergraduate students enrolled in pétanque courses. The experimental group (guided error-based learning, n = 37) received a 6-week period of instruction based on the students' errors, whereas the control group (correct motion instruction, n = 38) received a 6-week period of instruction emphasizing correct motor skills. The experimental group had significantly higher scores in motor skills self-efficacy and outcomes than did the control group. Novices' errors reflect their schema in motor skills learning, which provides a basis for instructors to implement student-centered instruction and to facilitate the learning process. Guided error-based learning can effectively enhance beginners' skills self-efficacy and achievement in precision sports such as pétanque.

  7. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
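
    For reference, the widely used variational bias correction formulation (following Dee's scheme; the notation below is the standard one and is not taken from this abstract) augments the analysis control vector with bias parameters beta multiplying a set of predictors p_i:

      J(x, \beta) = \tfrac{1}{2}(x - x_b)^{\mathsf{T}} B^{-1} (x - x_b)
                  + \tfrac{1}{2}(\beta - \beta_b)^{\mathsf{T}} B_\beta^{-1} (\beta - \beta_b)
                  + \tfrac{1}{2}\big(y - H(x) - b(\beta)\big)^{\mathsf{T}} R^{-1} \big(y - H(x) - b(\beta)\big),
      \qquad b(\beta) = \sum_i \beta_i \, p_i .

    Because beta is fitted to observation-minus-background statistics, any background or forward operator error that projects onto the predictors is absorbed into the bias estimate, which is precisely the limitation discussed above.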

  8. Modeling error in experimental assays using the bootstrap principle: Understanding discrepancies between assays using different dispensing technologies

    PubMed Central

    Hanson, Sonya M.; Ekins, Sean; Chodera, John D.

    2015-01-01

    All experimental assay data contain error, but the magnitude, type, and primary origin of this error are often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
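
    A minimal sketch in the spirit of the paper's bootstrap approach (illustrative CVs and a hypothetical two-fold dilution series; the accompanying IPython notebook is the authoritative version) shows how per-transfer imprecision and bias compound down a dilution series:

      # Bootstrap-style simulation of how pipetting imprecision (random CV) and
      # bias (systematic short-fill) propagate through a two-fold dilution series.
      import numpy as np

      rng = np.random.default_rng(0)
      n_boot, n_steps = 10000, 8
      cv, bias = 0.02, -0.01          # 2% imprecision, -1% systematic bias (assumed)

      conc = np.ones((n_boot, n_steps))
      for step in range(1, n_steps):
          # each transfer multiplies concentration by a noisy, biased factor of 0.5
          factor = 0.5 * (1 + bias + cv * rng.standard_normal(n_boot))
          conc[:, step] = conc[:, step - 1] * factor

      nominal = 0.5 ** np.arange(n_steps)
      rel_err = conc / nominal - 1
      print("relative bias at final well: %.1f%%" % (100 * rel_err[:, -1].mean()))
      print("relative spread (SD) at final well: %.1f%%" % (100 * rel_err[:, -1].std()))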

  9. Quantifying the Performance of P-Type Transparent Conducting Oxides by Experimental Methods

    PubMed Central

    Fleischer, Karsten; Norton, Emma; Mullarkey, Daragh; Caffrey, David; Shvets, Igor V.

    2017-01-01

    Screening for potential new materials with experimental and theoretical methods has led to the discovery of many promising candidate materials for p-type transparent conducting oxides. It is difficult to reliably assess a good p-type transparent conducting oxide (TCO) from limited information available at an early experimental stage. In this paper we discuss the influence of sample thickness on simple transmission measurements and how the sample thickness can skew the commonly used figure of merit of TCOs and their estimated band gap. We discuss this using copper-deficient CuCrO2 as an example, as it was already shown to be a good p-type TCO grown at low temperatures. We outline a modified figure of merit reducing thickness-dependent errors, as well as how modern ab initio screening methods can be used to augment experimental methods to assess new materials for potential applications as p-type TCOs, p-channel transparent thin film transistors, and selective contacts in solar cells. PMID:28862695

  10. Gamma Spectroscopy by Artificial Neural Network Coupled with MCNP

    NASA Astrophysics Data System (ADS)

    Sahiner, Huseyin

    While neutron activation analysis is widely used in many areas, the sensitivity of the analysis depends on how it is conducted. Although the technique carries error, its sensitivity, compared to chemical analysis, reaches the parts-per-million or even parts-per-billion range. This sensitivity makes neutron activation analysis especially valuable for analyzing bio-samples. Artificial neural networks are an attractive technique for complex systems. Although neural networks have been applied to spectral analysis before, training on simulated data to analyze experimental data had not been attempted. This study improves spectral analysis by optimizing a neural network for the purpose. The work considers five elements regarded as trace elements in bio-samples; however, the system is not limited to five elements, the only limitation being data library availability in MCNP. A perceptron network was employed to identify the five elements from gamma spectra. In quantitative analysis, better results were obtained with the neural fitting tool in MATLAB, using the Levenberg-Marquardt training algorithm, 23 neurons in the hidden layer, and 259 gamma spectra as input. Because the study focuses on five elements, five input neurons representing the peak counts of five isotopes were used, and five output neurons gave the mass information of these elements from irradiated kidney stones. Maximum errors of 17.9% in APA, 24.9% in UA, 28.2% in COM, and 27.9% in STRU type stones demonstrated the success of the neural network approach in analyzing gamma spectra. The largest errors were attributed to Zn, which has a very long decay half-life compared to the other elements. The simulations and experiments used a fixed experimental setup (3 h irradiation, 96 h decay time, 8 h counting time); nevertheless, the approach can be generalized to different setups.
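
    A sketch of the regression setup described above (using scikit-learn's MLPRegressor with the L-BFGS solver in place of MATLAB's Levenberg-Marquardt trainer; the layer sizes follow the abstract, but the data below are synthetic placeholders, not MCNP spectra):

      # Five peak-count inputs -> five element masses, one hidden layer of 23 neurons.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      X = rng.uniform(0, 1, size=(259, 5))            # normalized peak counts (synthetic)
      true_map = rng.uniform(0.5, 2.0, size=(5, 5))   # hypothetical linear response
      y = X @ true_map + 0.01 * rng.standard_normal((259, 5))  # masses + noise

      model = MLPRegressor(hidden_layer_sizes=(23,), solver="lbfgs",
                           max_iter=5000, random_state=0)
      model.fit(X, y)
      rel_err = np.abs(model.predict(X) - y) / np.abs(y)
      print("max relative training error: %.1f%%" % (100 * rel_err.max()))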

  11. Local neutral networks help maintain inaccurately replicating ribozymes.

    PubMed

    Szilágyi, András; Kun, Ádám; Szathmáry, Eörs

    2014-01-01

    The error threshold of replication limits the selectively maintainable genome size against recurrent deleterious mutations for most fitness landscapes. In the context of RNA replication, a distinction between the genotypic and the phenotypic error threshold has been made, where the latter concerns the maintenance of secondary structure rather than sequence. RNA secondary structure is treated as a proxy for function. The phenotypic error threshold allows higher per-digit mutation rates than its genotypic counterpart, and is known to increase with the frequency of neutral mutations in sequence space. Here we show that the degree of neutrality, i.e., the frequency of nearest-neighbour (one-step) neutral mutants, is a remarkably accurate proxy for the overall frequency of such mutants in an experimentally verifiable formula for the phenotypic error threshold; this we achieve by the full numerical solution for the concentration of all sequences in mutation-selection balance up to length 16. We reinforce our previous result that currently known ribozymes could be selectively maintained at the accuracy known from the best available polymerase ribozymes. Furthermore, we show that in silico stabilizing selection can increase the mutational robustness of ribozymes, owing to the fact that they were produced by artificial directional selection in the first place. Our findings offer a better understanding of the error threshold and provide further insight into the plausibility of an ancient RNA world.
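
    For orientation, the classic genotypic error threshold of Eigen's quasispecies theory, which the phenotypic threshold relaxes, can be stated as follows (standard notation; this background formula is not the paper's neutrality-corrected expression). With per-digit copying fidelity q, sequence length L, and master-sequence superiority sigma, the master sequence is maintained only if

      \sigma \, q^{L} > 1 \quad\Longleftrightarrow\quad L < \frac{\ln \sigma}{1 - q} \qquad (q \to 1).

    Raising the frequency of one-step neutral mutants effectively enlarges the set of sequences that count as the master phenotype, so the tolerable per-digit error rate 1 - q grows, which is the effect quantified above.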

  12. Improved Quality in Aerospace Testing Through the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.

  13. Air-water partition coefficients for a suite of polycyclic aromatic and other C10 through C20 unsaturated hydrocarbons.

    PubMed

    Rayne, Sierra; Forest, Kaya

    2016-09-18

    The air-water partition coefficients (Kaw) for 86 large polycyclic aromatic hydrocarbons and their unsaturated relatives were estimated using high-level G4(MP2) gas and aqueous phase calculations with the SMD, IEFPCM-UFF, and CPCM solvation models. An extensive method validation effort was undertaken which involved confirming that, via comparisons to experimental enthalpies of formation, gas-phase energies at the G4(MP2) level for the compounds of interest were at or near thermochemical accuracy. Investigations of the three solvation models using a range of neutral and ionic compounds suggested that while no clear preferential solvation model could be chosen in advance for accurate Kaw estimates of the target compounds, the employment of increasingly higher levels of theory would result in lower Kaw errors. Subsequent calculations on the polycyclic aromatic and unsaturated hydrocarbons at the G4(MP2) level revealed excellent agreement for the IEFPCM-UFF and CPCM models against limited available experimental data. The IEFPCM-UFF-G4(MP2) and CPCM-G4(MP2) solvation energy calculation approaches are anticipated to give Kaw estimates within typical experimental ranges, each having general Kaw errors of less than 0.5 log10 units. When applied to other large organic compounds, the method should allow development of a broad and reliable Kaw database for multimedia environmental modeling efforts on various contaminants.

  14. A low-cost, computer-controlled robotic flower system for behavioral experiments.

    PubMed

    Kuusela, Erno; Lämsä, Juho

    2016-04-01

    Human observations during behavioral studies are expensive, time-consuming, and error-prone. For this reason, automatization of experiments is highly desirable, as it reduces the risk of human errors and the workload. The robotic system we developed is simple and cheap to build and handles feeding and data collection automatically. The system was built using mostly off-the-shelf components and has a novel feeding mechanism that uses servos to perform refill operations. We used the robotic system in two separate behavioral studies with bumblebees (Bombus terrestris): the system was used both for training of the bees and for the experimental data collection. The robotic system was reliable, with no flight in our studies failing due to a technical malfunction. The recorded data were easy to use in further analysis. The software and the hardware design are open source. The development of cheap open-source prototyping platforms in recent years has opened up many possibilities in the design of experiments. Automatization not only reduces workload, but also potentially allows experimental designs never realized before, such as dynamic experiments, where the system responds to, for example, the learning of the animal. We present a complete system with hardware and software, and it can be used as such in various experiments requiring feeders and collection of visitation data. Use of the system is not limited to any particular experimental setup or even species.

  15. Liquid Medication Dosing Errors by Hispanic Parents: Role of Health Literacy and English Proficiency

    PubMed Central

    Harris, Leslie M.; Dreyer, Benard; Mendelsohn, Alan; Bailey, Stacy C.; Sanders, Lee M.; Wolf, Michael S.; Parker, Ruth M.; Patel, Deesha A.; Kim, Kwang Youn A.; Jimenez, Jessica J.; Jacobson, Kara; Smith, Michelle; Yin, H. Shonna

    2016-01-01

    Objective Hispanic parents in the US are disproportionately affected by low health literacy and limited English proficiency (LEP). We examined associations between health literacy, LEP, and liquid medication dosing errors in Hispanic parents. Methods Cross-sectional analysis of data from a multisite randomized controlled experiment to identify best practices for the labeling/dosing of pediatric liquid medications (SAFE Rx for Kids study) at 3 urban pediatric clinics. Analyses were limited to Hispanic parents of children <8 years with health literacy and LEP data (n = 1126). Parents were randomized to 5 groups that varied by the pairing of units of measurement on the label/dosing tool. Each parent measured 9 doses in random order: 3 amounts (2.5, 5, and 7.5 mL) using 3 tools (two syringes, with 0.2 and 0.5 mL increments, and 1 cup). Dependent variable: dosing error, defined as >20% deviation from the target dose. Predictor variables: health literacy (Newest Vital Sign; limited = 0–3, adequate = 4–6) and LEP (speaks English less than “very well”). Results 83.1% made dosing errors (mean (SD) errors per parent = 2.2 (1.9)). Parents with limited health literacy and LEP had the greatest odds of making a dosing error compared to parents with adequate health literacy who were English proficient (% trials with errors per parent = 28.8% vs. 12.9%; AOR = 2.2 [1.7–2.8]). Parents with limited health literacy who were English proficient were also more likely to make errors (% trials with errors per parent = 18.8%; AOR = 1.4 [1.1–1.9]). Conclusion Dosing errors are common among Hispanic parents; those with both LEP and limited health literacy are at particular risk. Further study is needed to examine how the redesign of medication labels and dosing tools could reduce literacy- and language-associated disparities in dosing errors. PMID:28477800

  16. A diffusion-limited reaction model for self-propagating Al/Pt multilayers with quench limits

    NASA Astrophysics Data System (ADS)

    Kittell, D. E.; Yarrington, C. D.; Hobbs, M. L.; Abere, M. J.; Adams, D. P.

    2018-04-01

    A diffusion-limited reaction model was calibrated for Al/Pt multilayers ignited on oxidized silicon, sapphire, and tungsten substrates, as well as for some Al/Pt multilayers ignited as free-standing foils. The model was implemented in a finite element analysis code and used to match experimental burn front velocity data collected from several years of testing at Sandia National Laboratories. Moreover, both the simulations and experiments reveal well-defined quench limits in the total Al + Pt layer (i.e., bilayer) thickness. At these limits, the heat generated from atomic diffusion is insufficient to support a self-propagating wave front on top of the substrates. Quench limits for reactive multilayers are seldom reported and are found to depend on the thermal properties of the individual layers. Here, the diffusion-limited reaction model is generalized to allow for temperature- and composition-dependent material properties, phase change, and anisotropic thermal conductivity. Utilizing this increase in model fidelity, excellent overall agreement is shown between the simulations and experimental results with a single calibrated parameter set. However, the burn front velocities of Al/Pt multilayers ignited on tungsten substrates are over-predicted. Possible sources of error are discussed and a higher activation energy (from 41.9 kJ/mol.at. to 47.5 kJ/mol.at.) is shown to bring the simulations into agreement with the velocity data observed on tungsten substrates. This higher activation energy suggests an inhibited diffusion mechanism present at lower heating rates.

  17. A Nonlinear Calibration Algorithm Based on Harmonic Decomposition for Two-Axis Fluxgate Sensors

    PubMed Central

    Liu, Shibin

    2018-01-01

    Nonlinearity is a prominent limitation to the calibration performance of two-axis fluxgate sensors. In this paper, a novel nonlinear calibration algorithm taking into account the nonlinearity of the errors is proposed. In order to establish the nonlinear calibration model, the combined effect of all time-invariant errors is analyzed in detail, and the harmonic decomposition method is then utilized to estimate the compensation coefficients. The proposed nonlinear calibration algorithm is validated experimentally and compared with a classical calibration algorithm. The experimental results show that, after the nonlinear calibration, the maximum deviation of the magnetic field magnitude is decreased from 1302 nT to 30 nT, which is smaller than the 81 nT obtained after the classical calibration. Furthermore, for the two-axis fluxgate sensor used as a magnetic compass, the maximum heading error is corrected from 1.86° to 0.07°, approximately 11% of the 0.62° error remaining after the classical calibration. The results suggest an effective way to improve the calibration performance of two-axis fluxgate sensors. PMID:29789448
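
    To illustrate the harmonic decomposition idea (a generic sketch with synthetic data, not the authors' exact error model): when a two-axis sensor is rotated through known headings in a constant field, offset errors appear as a first harmonic of the measured field magnitude and scale-factor/misalignment errors as a second harmonic, so the compensation coefficients can be estimated by linear least squares.

      # Fit low-order harmonics of the measured magnitude over a full rotation.
      import numpy as np

      rng = np.random.default_rng(2)
      theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)  # known headings, rad
      B = 50000.0                                  # assumed constant field, nT

      # synthetic erroneous sensor: offsets, unequal scales, small misalignment
      x = 1.02 * B * np.cos(theta) + 300.0
      y = 0.98 * B * np.sin(theta + np.deg2rad(0.5)) - 200.0
      mag = np.hypot(x, y) + 5.0 * rng.standard_normal(theta.size)

      # design matrix: mean radius, 1st harmonic (offsets), 2nd harmonic (scales)
      A = np.column_stack([np.ones_like(theta),
                           np.cos(theta), np.sin(theta),
                           np.cos(2 * theta), np.sin(2 * theta)])
      coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
      print("mean radius, 1st/2nd harmonic amplitudes (nT):",
            coef[0], np.hypot(coef[1], coef[2]), np.hypot(coef[3], coef[4]))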

  18. Resistive wall mode feedback control in EXTRAP T2R with improved steady-state error and transient response

    NASA Astrophysics Data System (ADS)

    Brunsell, P. R.; Olofsson, K. E. J.; Frassinetti, L.; Drake, J. R.

    2007-10-01

    Experiments in the EXTRAP T2R reversed field pinch [P. R. Brunsell, H. Bergsåker, M. Cecconello et al., Plasma Phys. Control. Fusion 43, 1457 (2001)] on feedback control of m = 1 resistive wall modes (RWMs) are compared with simulations using the cylindrical linear magnetohydrodynamic model, including the dynamics of the active coils and power amplifiers. Stabilization of the main RWMs (n = -11, -10, -9, -8, +5, +6) is shown using modest loop gains of the order G ~ 1. However, other marginally unstable RWMs (n = -2, -1, +1, +2) driven by external field errors are only partially canceled at these gains. The experimental system stability limit is confirmed by simulations showing that the latency of the digital controller (~50 μs) is degrading the system gain margin. The transient response is improved with a proportional-plus-derivative controller, and the steady-state error is improved with a proportional-plus-integral controller. Suppression of all modes is obtained at high gain G ~ 10 using a proportional-plus-integral-plus-derivative controller.

  19. Determination of |V(us)| from a lattice QCD calculation of the K → πℓν semileptonic form factor with physical quark masses.

    PubMed

    Bazavov, A; Bernard, C; Bouchard, C M; Detar, C; Du, Daping; El-Khadra, A X; Foley, J; Freeland, E D; Gámiz, E; Gottlieb, Steven; Heller, U M; Kim, Jongjeong; Kronfeld, A S; Laiho, J; Levkova, L; Mackenzie, P B; Neil, E T; Oktay, M B; Qiu, Si-Wei; Simone, J N; Sugar, R; Toussaint, D; Van de Water, R S; Zhou, Ran

    2014-03-21

    We calculate the kaon semileptonic form factor f+(0) from lattice QCD, working, for the first time, at the physical light-quark masses. We use gauge configurations generated by the MILC Collaboration with Nf = 2 + 1 + 1 flavors of sea quarks, which incorporate the effects of dynamical charm quarks as well as those of up, down, and strange. We employ data at three lattice spacings to extrapolate to the continuum limit. Our result, f+(0) = 0.9704(32), where the error is the total statistical plus systematic uncertainty added in quadrature, is the most precise determination to date. Combining our result with the latest experimental measurements of K semileptonic decays, one obtains the Cabibbo-Kobayashi-Maskawa matrix element |V(us)| = 0.22290(74)(52), where the first error is from f+(0) and the second one is from experiment. In the first-row test of Cabibbo-Kobayashi-Maskawa unitarity, the error stemming from |V(us)| is now comparable to that from |V(ud)|.
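
    To make the arithmetic explicit (the experimental product used below, |V(us)| f+(0) ≈ 0.2163 from K semileptonic decay rates, is an assumed round value consistent with the quoted result, not a number taken from this abstract):

      |V_{us}| = \frac{\left.|V_{us}|\, f_+(0)\right|_{\text{expt}}}{\left. f_+(0)\right|_{\text{lattice}}}
               \approx \frac{0.2163}{0.9704} \approx 0.22290,
      \qquad
      |V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 \stackrel{?}{=} 1 ,

    where the last relation is the first-row unitarity test referred to in the abstract.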

  20. Optimizing symmetry-based recoupling sequences in solid-state NMR by pulse-transient compensation and asynchronous implementation

    NASA Astrophysics Data System (ADS)

    Hellwagner, Johannes; Sharma, Kshama; Tan, Kong Ooi; Wittmann, Johannes J.; Meier, Beat H.; Madhu, P. K.; Ernst, Matthias

    2017-06-01

    Pulse imperfections like pulse transients and radio-frequency field maladjustment or inhomogeneity are the main sources of performance degradation and limited reproducibility in solid-state nuclear magnetic resonance experiments. We quantitatively analyze the influence of such imperfections on the performance of symmetry-based pulse sequences and describe how they can be compensated. Based on a triple-mode Floquet analysis, we develop a theoretical description of symmetry-based dipolar recoupling sequences, in particular R26_4^11, calculating first- and second-order effective Hamiltonians using real pulse shapes. We discuss the various origins of effective fields, namely, pulse transients, deviation from the ideal flip angle, and fictitious fields, and develop strategies to counteract them for the restoration of full transfer efficiency. We compare experimental applications of transient-compensated pulses and an asynchronous implementation of the sequence to a supercycle, SR26, which is known to be efficient in compensating higher-order error terms. We are able to show the superiority of R26_4^11 compared to the supercycle SR26, given the ability to reduce experimental error in the pulse sequence by pulse-transient compensation and a complete theoretical understanding of the sequence.

  1. Drilling High Precision Holes in Ti6Al4V Using Rotary Ultrasonic Machining and Uncertainties Underlying Cutting Force, Tool Wear, and Production Inaccuracies.

    PubMed

    Chowdhury, M A K; Sharif Ullah, A M M; Anwar, Saqib

    2017-09-12

    Ti6Al4V alloys are difficult-to-cut materials that have extensive applications in the automotive and aerospace industry. A great deal of effort has been made to develop and improve the machining operations of Ti6Al4V alloys. This paper presents an experimental study that systematically analyzes the effects of the machining conditions (ultrasonic power, feed rate, spindle speed, and tool diameter) on the performance parameters (cutting force, tool wear, overcut error, and cylindricity error) while drilling high precision holes in workpieces made of Ti6Al4V alloys using rotary ultrasonic machining (RUM). Numerical results were obtained by conducting experiments following a design of experiments procedure. The effects of the machining conditions on each performance parameter have been determined by constructing a set of possibility distributions (i.e., trapezoidal fuzzy numbers) from the experimental data. A possibility distribution is a probability-distribution-neutral representation of uncertainty, and is effective in quantifying the uncertainty underlying physical quantities when only a limited number of data points is available, as is the case here. Lastly, the optimal machining conditions have been identified using these possibility distributions.
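
    One common way to build a trapezoidal fuzzy number from a small sample (a generic construction assumed here for illustration, not necessarily the authors' exact rule) is to take the data extremes as the support and the interquartile range as the core:

      # Construct a trapezoidal possibility distribution (a, b, c, d) from few points.
      import numpy as np

      def trapezoid_from_data(samples):
          s = np.asarray(samples, dtype=float)
          a, d = s.min(), s.max()              # support: possibility > 0
          b, c = np.percentile(s, [25, 75])    # core: possibility == 1
          return a, b, c, d

      def possibility(x, a, b, c, d):
          """Membership of x in the trapezoidal fuzzy number (a, b, c, d)."""
          if x < a or x > d:
              return 0.0
          if b <= x <= c:
              return 1.0
          return (x - a) / (b - a) if x < b else (d - x) / (d - c)

      cutting_force = [112.0, 118.5, 121.0, 126.4, 131.2]  # hypothetical repeats, N
      trap = trapezoid_from_data(cutting_force)
      print(trap, possibility(120.0, *trap))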

  2. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.

  3. Possibility of measuring Adler angles in charged current single pion neutrino-nucleus interactions

    NASA Astrophysics Data System (ADS)

    Sánchez, F.

    2016-05-01

    Uncertainties in modeling neutrino-nucleus interactions are a major contribution to systematic errors in long-baseline neutrino oscillation experiments. Accurate modeling of neutrino interactions requires additional experimental observables such as the Adler angles which carry information about the polarization of the Δ resonance and the interference with nonresonant single pion production. The Adler angles were measured with limited statistics in bubble chamber neutrino experiments as well as in electron-proton scattering experiments. We discuss the viability of measuring these angles in neutrino interactions with nuclei.

  4. Experimental determination of the navigation error of the 4-D navigation, guidance, and control systems on the NASA B-737 airplane

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1978-01-01

    Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.

  5. Testing the Recognition and Perception of Errors in Context

    ERIC Educational Resources Information Center

    Brandenburg, Laura C.

    2015-01-01

    This study tests the recognition of errors in context and whether the presence of errors affects the reader's perception of the writer's ethos. In an experimental, posttest only design, participants were randomly assigned a memo to read in an online survey: one version with errors and one version without. Of the six intentional errors in version…

  6. The effectiveness of risk management program on pediatric nurses' medication error.

    PubMed

    Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat

    2013-09-01

    Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months; nurses at the control hospital followed the routine hospital schedule. A pre- and post-test was performed to measure the frequency of medication error events. SPSS software, the t-test, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower (P < 0.001) and the error-reporting rate was higher (P < 0.007) compared to before the intervention and also in comparison to the nurses of the control hospital. Based on the results of this study and taking into account the high-risk nature of the medical environment, applying quality-control programs such as risk management can effectively prevent the occurrence of undesirable hospital events. Nursing managers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.

  7. In vivo dose verification method in catheter based high dose rate brachytherapy.

    PubMed

    Jaselskė, Evelina; Adlienė, Diana; Rudžianskas, Viktoras; Urbonavičius, Benas Gabrielis; Inčiūra, Arturas

    2017-12-01

    In vivo dosimetry is a powerful tool for dose verification in radiotherapy. Its application in high dose rate (HDR) brachytherapy is usually limited to the estimation of gross errors, due to the inability of the dosimetry system/method to record non-uniform dose distributions in steep dose gradient fields close to the radioactive source. In vivo dose verification in interstitial catheter based HDR brachytherapy is crucial, since the treatment is performed by inserting the radioactive source at certain positions within catheters that are pre-implanted into the tumour. We propose an in vivo dose verification method for this type of brachytherapy treatment which is based on the comparison between experimentally measured and theoretical dose values calculated at well-defined locations corresponding to dosemeter positions in the catheter. Dose measurements were performed using TLD 100-H rods (6 mm long, 1 mm diameter) inserted in certain sequences into an additionally pre-implanted dosimetry catheter. Dosemeter positioning in the catheter was adjusted using reconstructed CT scans of the patient with pre-implanted catheters. Doses to three head and neck and one breast cancer patient were measured during several randomly selected treatment fractions. It was found that the average experimental dose error varied from 4.02% to 12.93% during independent in vivo dosimetry control measurements for the selected head and neck cancer patients, and from 7.17% to 8.63% for the breast cancer patient. The average experimental dose error was below the AAPM recommended margin of 20% and did not exceed the measurement uncertainty of 17.87% estimated for this type of dosemeter. A tendency toward a slightly increasing average dose error was observed in each following treatment fraction of the same patient. This was linked to changes in the theoretically estimated dosemeter positions due to possible organ movement between treatment fractions, since catheter reconstruction was performed for the first treatment fraction only. These findings indicate potential for a further average dose error reduction of at least 2-3% in catheter based brachytherapy if catheter locations are adjusted before each following treatment fraction, although this requires more detailed investigation.

  8. Zeta potential of microfluidic substrates: 1. Theory, experimental techniques, and effects on separations.

    PubMed

    Kirby, Brian J; Hasselbrink, Ernest F

    2004-01-01

    This paper summarizes theory, experimental techniques, and the reported data pertaining to the zeta potential of silica and silicon with attention to use as microfluidic substrate materials, particularly for microchip chemical separations. Dependence on cation concentration, buffer and cation type, pH, cation valency, and temperature are discussed. The Debye-Hückel limit, which is often correctly treated as a good approximation for describing the ion concentration in the double layer, can lead to serious errors if it is extended to predict the dependence of zeta potential on the counterion concentration. For indifferent univalent electrolytes (e.g., sodium and potassium), two simple scalings for the dependence of zeta potential on counterion concentration can be derived in the high- and low-zeta limits of the nonlinear Poisson-Boltzmann equation solution in the double layer. It is shown that for most situations relevant to microchip separations, the high-zeta limit is most applicable, leading to the conclusion that the zeta potential on silica substrates is approximately proportional to the logarithm of the molar counterion concentration. The zeta vs. pH dependence measurements from several experiments are compared by normalizing the zeta based on concentration.
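
    The two scalings follow from the Gouy-Chapman (Grahame) relation between surface charge density sigma and zeta (standard double-layer theory, sketched here as background in that theory's usual notation):

      \sigma = \sqrt{8\,\varepsilon\varepsilon_0 k_B T\, n_0}\;
               \sinh\!\left(\frac{ze\zeta}{2k_BT}\right)
      \;\;\Rightarrow\;\;
      \zeta \;\propto\;
      \begin{cases}
        n_0^{-1/2}, & ze\zeta \ll k_BT \quad (\text{low-}\zeta\text{ limit}),\\[4pt]
        \text{const} - \dfrac{k_BT}{ze}\,\ln n_0, & ze\zeta \gg k_BT \quad (\text{high-}\zeta\text{ limit}),
      \end{cases}

    at fixed sigma, where n_0 is the bulk counterion number density; the high-zeta branch yields the logarithmic concentration dependence quoted above.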

  9. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  10. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    NASA Astrophysics Data System (ADS)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  11. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs

    NASA Astrophysics Data System (ADS)

    Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.

    1982-12-01

    Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.

  12. Short-range optical air data measurements for aircraft control using rotational Raman backscatter.

    PubMed

    Fraczek, Michael; Behrendt, Andreas; Schmitt, Nikolaus

    2013-07-15

    A first laboratory prototype of a novel concept for a short-range optical air data system for aircraft control and safety was built. The measurement methodology was introduced in [Appl. Opt. 51, 148 (2012)] and is based on techniques known from lidar, detecting elastic and Raman backscatter from air. A wide range of flight-critical parameters, such as air temperature, molecular number density, and pressure, can be measured, and data on atmospheric particles and humidity can be collected. In this paper, the experimental measurement performance achieved with the first laboratory prototype, using 532 nm laser radiation with a pulse energy of 118 mJ, is presented. Systematic measurement errors and statistical measurement uncertainties are quantified separately. The typical systematic temperature, density, and pressure measurement errors obtained from the mean of 1000 averaged signal pulses are small, amounting to < 0.22 K, < 0.36%, and < 0.31%, respectively, for measurements at air pressures varying from 200 hPa to 950 hPa at a constant air temperature of 298.95 K. The systematic measurement errors at air temperatures varying from 238 K to 308 K at a constant air pressure of 946 hPa are even smaller: < 0.05 K, < 0.07%, and < 0.06%, respectively. A focus is put on the system performance at different virtual flight altitudes as a function of the laser pulse energy. The virtual flight altitudes are precisely generated with a custom-made atmospheric simulation chamber system. In this context, the minimum laser pulse energies and pulse numbers required by the measurement system to meet the measurement error demands for temperature and pressure specified in aviation standards are determined experimentally. The aviation error margins limit the allowable temperature errors to 1.5 K at all measurement altitudes and the pressure errors to 0.1% at 0 m and 0.5% at 13000 m. With regard to 100-pulse-averaged temperature measurements, the pulse energy using 532 nm laser radiation has to be larger than 11 mJ (35 mJ) for 1-σ (3-σ) uncertainties at all measurement altitudes. For 100-pulse-averaged pressure measurements, the laser pulse energy has to be larger than 95 mJ (355 mJ), respectively. Based on these experimental results, the laser pulse energy requirements are extrapolated to the ultraviolet wavelength region as well, resulting in a significantly lower pulse energy demand of 1.5-3 mJ (4-10 mJ) and 12-27 mJ (45-110 mJ) for 1-σ (3-σ) 100-pulse-averaged temperature and pressure measurements, respectively.

  13. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera.

    PubMed

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G; Nagarkar, Vivek V

    2011-06-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity are required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of pinhole gamma camera with a continuous scintillator, a conventional "straight-cut" (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.

  14. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera

    PubMed Central

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.

    2011-01-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity are required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of pinhole gamma camera with a continuous scintillator, a conventional “straight-cut” (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors. PMID:21731108

  15. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera

    NASA Astrophysics Data System (ADS)

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.

    2011-06-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity are required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of pinhole gamma camera with a continuous scintillator, a conventional “straight-cut” (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.

  16. Influence of erroneous patient records on population pharmacokinetic modeling and individual bayesian estimation.

    PubMed

    van der Meer, Aize Franciscus; Touw, Daniël J; Marcus, Marco A E; Neef, Cornelis; Proost, Johannes H

    2012-10-01

    Observational data sets can be used for population pharmacokinetic (PK) modeling. However, these data sets are generally less precisely recorded than experimental data sets. This article aims to investigate the influence of erroneous records on population PK modeling and individual maximum a posteriori Bayesian (MAPB) estimation. A total of 1123 patient records of neonates who were administered vancomycin were used for population PK modeling by iterative 2-stage Bayesian (ITSB) analysis. Cut-off values for weighted residuals were tested for exclusion of records from the analysis. A simulation study was performed to assess the influence of erroneous records on population modeling and individual MAPB estimation; the cut-off values for weighted residuals were also tested in the simulation study. Registration errors had limited influence on the outcomes of population PK modeling but can have detrimental effects on individual MAPB estimation. A population PK model created from a data set with many registration errors has little influence on subsequent MAPB estimates for precisely recorded data. A weighted residual value of 2 for concentration measurements has good discriminative power for the identification of erroneous records. ITSB analysis and its individual estimates are hardly affected by most registration errors, and large registration errors can be detected by the weighted residuals of the concentrations.
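
    A sketch of the screening rule (the weighted-residual definition below is the common one; the paper's ITSB implementation may differ in detail, and the numbers are hypothetical):

      # Flag concentration records whose weighted residual exceeds the cut-off of 2.
      import numpy as np

      def weighted_residuals(observed, predicted, sd):
          """WRES = (observed - predicted) / expected SD of the measurement."""
          return (np.asarray(observed) - np.asarray(predicted)) / np.asarray(sd)

      obs  = np.array([12.1, 8.4, 30.2, 6.9])   # measured vancomycin conc., mg/L
      pred = np.array([11.5, 9.0, 14.8, 7.2])   # individual MAPB predictions
      sd   = np.array([ 1.6, 1.3,  2.0, 1.1])   # assumed residual error model

      wres = weighted_residuals(obs, pred, sd)
      flagged = np.abs(wres) > 2.0
      print("WRES:", np.round(wres, 2), "-> exclude records:", np.where(flagged)[0])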

  17. Development of a decentralized multi-axis synchronous control approach for real-time networks.

    PubMed

    Xu, Xiong; Gu, Guo-Ying; Xiong, Zhenhua; Sheng, Xinjun; Zhu, Xiangyang

    2017-05-01

    The message scheduling and the network-induced delays of real-time networks, together with the different inertias and disturbances of different axes, make the synchronous control of real-time network-based systems quite challenging. To address this challenge, a decentralized multi-axis synchronous control approach is developed in this paper. Due to the limitations of message scheduling and network bandwidth, the position synchronization error is first defined in the proposed control approach over a subset of preceding-axis pairs. Then, a motion message estimator is designed to reduce the effect of network delays. It is proven that the position and synchronization errors asymptotically converge to zero under the proposed controller with delay compensation. Finally, simulation and experimental results show that the developed control approach achieves good position synchronization performance for multi-axis motion over a real-time network.

  18. A hybrid demodulation method of fiber-optic Fabry-Perot pressure sensor

    NASA Astrophysics Data System (ADS)

    Yu, Le; Lang, Jianjun; Pan, Yong; Wu, Di; Zhang, Min

    2013-12-01

    Fiber-optic Fabry-Perot pressure sensors have been widely applied to measure pressure in oilfields. For multi-well installations, demodulating the downhole pressure values of all wells with a single demodulation system takes a long time (dozens of seconds), while equipping every well with its own system is costly; both factors heavily limit the application of the sensor in oilfields. In the present paper, a new hybrid demodulation method, combining the windowed nonequispaced discrete Fourier transform (nDFT) method with a segment-search minimum mean square error estimation (MMSE) method, was developed, by which the demodulation time can be reduced to 200 ms, i.e., measuring 10 channels/wells takes less than 2 s. Moreover, experimental results showed that the demodulated cavity length of the fiber-optic Fabry-Perot sensor has a maximum error of 0.5 nm, and consequently the pressure measurement accuracy can reach 0.4% F.S.

  19. High efficiency x-ray nanofocusing by the blazed stacking of binary zone plates

    NASA Astrophysics Data System (ADS)

    Mohacsi, I.; Karvinen, P.; Vartiainen, I.; Diaz, A.; Somogyi, A.; Kewish, C. M.; Mercere, P.; David, C.

    2013-09-01

    The focusing efficiency of binary Fresnel zone plate lenses is fundamentally limited, and higher efficiency requires a multi-step lens profile. To overcome the manufacturing problems of high-resolution, high-efficiency multistep zone plates, we investigate the concept of stacking two different binary zone plates in each other's optical near-field. We use a coarse zone plate with π phase shift and a double-density fine zone plate with π/2 phase shift to produce an effective 4-step profile. Using a compact experimental setup with piezo actuators for alignment, we demonstrated 47.1% focusing efficiency at 6.5 keV using a pair of zone plates with 500 μm diameter and 200 nm smallest zone width. Furthermore, we present a spatially resolved characterization method using multiple diffraction orders to identify manufacturing errors, alignment errors, and pattern distortions and their effect on diffraction efficiency.

  20. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

    The wrapped phase of the phase-shifting approach can be unwrapped by using Gray code, but both the wrapped phase error and the Gray code decoding error can result in period jump errors, which lead to gross measurement error. Therefore, this paper presents a reliable absolute analog code retrieval approach. The combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and at low frequencies, the same unequal-period combination patterns are used to obtain the low-frequency absolute analog code. Next, the difference between the two absolute analog codes is employed to eliminate period jump errors, and a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified both theoretically and experimentally; the results demonstrate that the proposed approach performs reliable analog code unwrapping.

  1. Some Simultaneous Inference Procedures for A Priori Contrasts.

    ERIC Educational Resources Information Center

    Convey, John J.

    The testing of a priori contrasts, post hoc contrasts, and experimental error rates are discussed. Methods for controlling the experimental error rate for a set of a priori contrasts tested simultaneously have been developed by Dunnett, Dunn, Sidak, and Krishnaiah. Each of these methods is discussed and contrasted as to applicability, power, and…

  2. Psychometrics and the neuroscience of individual differences: Internal consistency limits between-subjects effects.

    PubMed

    Hajcak, Greg; Meyer, Alexandria; Kotov, Roman

    2017-08-01

    In the clinical neuroscience literature, between-subjects differences in neural activity are presumed to reflect reliable measures, even though the psychometric properties of neural measures are almost never reported. The current article focuses on the critical importance of assessing and reporting internal consistency reliability: the homogeneity of the "items" that comprise a neural "score." We demonstrate how variability in the internal consistency of neural measures limits between-subjects (i.e., individual differences) effects. To this end, we utilize error-related brain activity (i.e., the error-related negativity, or ERN) in both healthy and generalized anxiety disorder (GAD) participants to demonstrate options for psychometric analyses of neural measures; we examine between-groups differences in internal consistency, between-groups effect sizes, and between-groups discriminability (i.e., ROC analyses), all as a function of the number of items (i.e., number of trials). Overall, internal consistency should be used to inform experimental design and the choice of neural measures in individual differences research. The internal consistency of neural measures is necessary for interpreting results and guiding progress in clinical neuroscience, and should be routinely reported in all individual differences studies.
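
    The trial-count dependence exploited above follows the Spearman-Brown relation; a sketch (synthetic ERN-like data with assumed trait and noise magnitudes, split-half reliability with the standard correction):

      # Split-half internal consistency of a trial-averaged neural score as a
      # function of trial count, with the Spearman-Brown correction.
      import numpy as np

      rng = np.random.default_rng(3)
      n_subj, max_trials = 60, 64
      true_ern = rng.normal(-5.0, 2.0, size=(n_subj, 1))        # stable trait, microvolts
      trials = true_ern + 6.0 * rng.standard_normal((n_subj, max_trials))  # trial noise

      for n in (4, 8, 16, 32, 64):
          odd = trials[:, :n:2].mean(axis=1)
          even = trials[:, 1:n:2].mean(axis=1)
          r = np.corrcoef(odd, even)[0, 1]
          sb = 2 * r / (1 + r)      # Spearman-Brown corrected full-length reliability
          print(f"{n:2d} trials: split-half r = {r:.2f}, corrected = {sb:.2f}")

    Since an observed between-subjects correlation is attenuated by the square root of the measures' reliabilities, low internal consistency directly caps the effect sizes a study can detect, which is the article's central point.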

  3. Long-range wind monitoring in real time with optimized coherent lidar

    NASA Astrophysics Data System (ADS)

    Dolfi-Bouteyre, Agnes; Canat, Guillaume; Lombard, Laurent; Valla, Matthieu; Durécu, Anne; Besson, Claudine

    2017-03-01

    Two important enabling technologies for pulsed coherent detection wind lidar are the laser and real-time signal processing. In particular, fiber lasers are limited in peak power by nonlinear effects, such as stimulated Brillouin scattering (SBS). We report on various technologies that have been developed to mitigate SBS and increase peak power in 1.5-μm fiber lasers, such as special large-mode-area fiber designs and strain management. Range-resolved wind profiles up to a record range of 16 km within 0.1-s averaging time have been obtained thanks to these high-peak-power fiber lasers. At long range, the lidar signal becomes much weaker than the noise, and special care is required to extract the Doppler peak from the spectral noise. To optimize real-time processing for weak carrier-to-noise ratio signals, we have studied various Doppler mean frequency estimators (MFEs) and the influence of data accumulation on outlier occurrence. Five real-time MFEs (maximum, centroid, matched filter, maximum likelihood, and polynomial fit) have been compared in terms of error and processing time using experimental lidar data. MFE errors and data accumulation limits are established using a spectral method.
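
    Two of the compared estimators are easy to state concretely; the sketch below (synthetic heterodyne signal with assumed sample rate, Doppler frequency, and noise level) implements the maximum and centroid mean-frequency estimators on an accumulated periodogram:

      # Maximum and centroid Doppler mean-frequency estimators; illustrative only.
      import numpy as np

      rng = np.random.default_rng(4)
      fs, n, n_acc = 250e6, 256, 200        # sample rate, FFT size, accumulated spectra
      f_dop = 40e6                          # true Doppler frequency (assumed)
      t = np.arange(n) / fs

      spec = np.zeros(n // 2)
      for _ in range(n_acc):                # accumulation suppresses outliers
          sig = 0.2 * np.cos(2 * np.pi * f_dop * t + rng.uniform(0, 2 * np.pi))
          sig += rng.standard_normal(n)     # noise dominates any single shot
          spec += np.abs(np.fft.rfft(sig)[: n // 2]) ** 2
      freqs = np.fft.rfftfreq(n, 1 / fs)[: n // 2]

      f_max = freqs[np.argmax(spec)]                  # "maximum" estimator
      noise = np.median(spec)                         # crude noise-floor estimate
      w = np.clip(spec - noise, 0, None)
      f_centroid = np.sum(freqs * w) / np.sum(w)      # "centroid" estimator
      print(f"max: {f_max/1e6:.2f} MHz, centroid: {f_centroid/1e6:.2f} MHz")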

  4. Making electronic prescribing alerts more effective: scenario-based experimental study in junior doctors

    PubMed Central

    Shah, Priya; Wyatt, Jeremy C; Makubate, Boikanyo; Cross, Frank W

    2011-01-01

    Objective Expert authorities recommend clinical decision support systems to reduce prescribing error rates, yet large numbers of insignificant on-screen alerts presented in modal dialog boxes persistently interrupt clinicians, limiting the effectiveness of these systems. This study compared the impact of modal and non-modal electronic (e-) prescribing alerts on prescribing error rates, to help inform the design of clinical decision support systems. Design A randomized study of 24 junior doctors each performing 30 simulated prescribing tasks in random order with a prototype e-prescribing system. Using a within-participant design, doctors were randomized to be shown one of three types of e-prescribing alert (modal, non-modal, no alert) during each prescribing task. Measurements The main outcome measure was prescribing error rate. Structured interviews were performed to elicit participants' preferences for the prescribing alerts and their views on clinical decision support systems. Results Participants exposed to modal alerts were 11.6 times less likely to make a prescribing error than those not shown an alert (OR 11.56, 95% CI 6.00 to 22.26). Those shown a non-modal alert were 3.2 times less likely to make a prescribing error (OR 3.18, 95% CI 1.91 to 5.30) than those not shown an alert. The error rate with non-modal alerts was 3.6 times higher than with modal alerts (95% CI 1.88 to 7.04). Conclusions Both kinds of e-prescribing alerts significantly reduced prescribing error rates, but modal alerts were over three times more effective than non-modal alerts. This study provides new evidence about the relative effects of modal and non-modal alerts on prescribing outcomes. PMID:21836158

  5. A theoretical perspective on the accuracy of rotational resonance (R2)-based distance measurements in solid-state NMR

    NASA Astrophysics Data System (ADS)

    Pandey, Manoj Kumar; Ramachandran, Ramesh

    2010-03-01

    The application of solid-state NMR methodology for bio-molecular structure determination requires the measurement of constraints in the form of 13C-13C and 13C-15N distances, torsion angles and, in some cases, correlations of the anisotropic interactions. Since the availability of structurally important constraints in the solid state is limited by the lack of sufficient spectral resolution, the accuracy of the measured constraints becomes vital in studies relating the three-dimensional structure of proteins to their biological functions. Consequently, the theoretical methods employed to quantify the experimental data become important. To accentuate this aspect, we re-examine the analytical two-spin models currently employed in the estimation of 13C-13C distances based on the rotational resonance (R2) phenomenon. Although the error bars for the estimated distances tend to be in the range 0.5-1.0 Å, R2 experiments are routinely employed in a variety of systems ranging from simple peptides to more complex amyloidogenic proteins. In this article we address this aspect by highlighting the systematic errors introduced by analytical models employing phenomenological damping terms to describe multi-spin effects. Specifically, the spin dynamics in R2 experiments is described using Floquet theory employing two different operator formalisms. The systematic errors introduced by the phenomenological damping terms and their limitations are elucidated in two analytical models and analysed by comparing the results with rigorous numerical simulations.

  6. Monitoring Building Deformation with InSAR: Experiments and Validation.

    PubMed

    Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng

    2016-12-20

    Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings: the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples to compare InSAR and leveling approaches for building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and leveling values, and the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that millimeter-level accuracy can be achieved by means of the InSAR technique when measuring building deformation. We discuss the differences in accuracy between the OLS regression and measurement-of-error analyses, and compare them with the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated.
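
    To make the validation arithmetic concrete, the minimal sketch below compares InSAR-derived deformation values against colocated leveling measurements with OLS regression and an RMSE index. Variable names and data layout are assumptions for illustration; the paper's PS-InSAR processing chain is not reproduced here.

```python
import numpy as np

def compare_insar_leveling(insar_mm, leveling_mm):
    """OLS regression and RMSE between InSAR deformation estimates
    and colocated leveling measurements (both in millimetres)."""
    slope, intercept = np.polyfit(leveling_mm, insar_mm, 1)  # OLS fit
    r = np.corrcoef(leveling_mm, insar_mm)[0, 1]             # correlation
    rmse = np.sqrt(np.mean((insar_mm - leveling_mm) ** 2))   # error index
    return slope, intercept, r, rmse
```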

  7. A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.

    PubMed

    Kim, Joo H; Roberts, Dustyn

    2015-09-01

    Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.

  8. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects the vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB spectra from glass or water are typically utilized, resulting in errors between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
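
    The core of Kramers-Kronig phase retrieval can be sketched with a Hilbert transform, as below. This is a textbook-style reduction, not the authors' detrending/scaling pipeline: the NRB estimate `i_nrb`, the sign convention, and the absence of windowing are all simplifying assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def kk_retrieve(i_cars, i_nrb):
    """Estimate the Raman (imaginary) spectrum from a measured CARS
    intensity via the Kramers-Kronig relation: the phase of the
    complex response is the Hilbert transform of half the log of the
    NRB-normalized intensity."""
    ratio = i_cars / i_nrb                      # NRB-normalized intensity
    phase = np.imag(hilbert(0.5 * np.log(ratio)))  # KK phase estimate
    chi = np.sqrt(ratio) * np.exp(1j * phase)   # complex response estimate
    return chi.imag                             # Raman-like component
```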

  9. Mimicking aphasic semantic errors in normal speech production: evidence from a novel experimental paradigm.

    PubMed

    Hodgson, Catherine; Lambon Ralph, Matthew A

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study utilised a novel method: tempo picture naming. Experiment 1 showed that, compared to standard deadline naming tasks, participants made more errors on the tempo picture naming tasks. Further, RTs were longer and more errors were produced for living items than non-living items, a pattern seen in both semantic dementia and semantically-impaired stroke aphasic patients. Experiment 2 showed that providing the initial phoneme as a cue enhanced performance, whereas providing an incorrect phonemic cue further reduced performance. These results support the contention that the tempo picture naming paradigm reduces the time allowed for controlled semantic processing, causing increased error rates. This experimental procedure would, therefore, appear to mimic the performance of aphasic patients with multi-modal semantic impairment that results from poor semantic control rather than from the degradation of semantic representations observed in semantic dementia [Jefferies, E. A., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain, 129, 2132-2147]. Further implications for theories of semantic cognition and models of speech processing are discussed.

  10. Identification and compensation of the temperature influences in a miniature three-axial accelerometer based on the least squares method

    NASA Astrophysics Data System (ADS)

    Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae

    2017-06-01

    The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on the on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Because this error is a strongly nonlinear function of both the environmental temperature and the acceleration exciting the sensor, it cannot be corrected off-line, and its compensation requires an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied to its axes of sensitivity and for different values of operating temperature. A final analysis of the error level after compensation highlights the best variant for the matrix in the error model. The paper presents the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis by using the least squares method, and the validation of the obtained models with experimental values. For all three detection channels, the absolute maximum acceleration error due to environmental temperature variation was reduced by almost two orders of magnitude.
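
    A least-squares identification of a temperature-dependent error model can be sketched as follows. The polynomial structure in temperature T and applied acceleration a is an assumed stand-in for the paper's error-model matrices, and the degrees are illustrative only.

```python
import numpy as np

def fit_error_model(T, a, e, deg_T=2, deg_a=1):
    """Least-squares fit of an error model
    e(T, a) = sum_ij c_ij * T**i * a**j
    from calibration arrays T (temperature), a (applied acceleration),
    and e (measured acceleration error), all of equal length."""
    X = np.column_stack([T**i * a**j
                         for i in range(deg_T + 1)
                         for j in range(deg_a + 1)])
    c, *_ = np.linalg.lstsq(X, e, rcond=None)
    return c

def compensate(a_raw, T, c, deg_T=2, deg_a=1):
    """On-line correction: subtract the modeled error at the current
    temperature from the raw reading."""
    basis = np.array([T**i * a_raw**j
                      for i in range(deg_T + 1)
                      for j in range(deg_a + 1)])
    return a_raw - basis @ c
```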

  11. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    PubMed

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and source of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, summarizing the advantages and drawbacks of each approach.

  12. Experimental Investigation of Jet Impingement Heat Transfer Using Thermochromic Liquid Crystals

    NASA Technical Reports Server (NTRS)

    Dempsey, Brian Paul

    1997-01-01

    Jet impingement cooling of a hypersonic airfoil leading edge is experimentally investigated using thermochromic liquid crystals (TLCs) to measure surface temperature. The experiment uses computer data acquisition with digital imaging of the TLCs to determine heat transfer coefficients during a transient experiment. The data reduction relies on analysis of a coupled transient conduction-convection heat transfer problem that characterizes the experiment. The recovery temperature of the jet is accounted for by running two experiments with different heating rates, thereby generating a second equation that is used to solve for the recovery temperature. The resulting solution requires a complicated numerical iteration that is handled by a computer. Because the computational data reduction method is complex, special attention is paid to error assessment. The error analysis considers random and systematic errors generated by the instrumentation along with errors generated by the approximate nature of the numerical methods. Results of the error analysis show that the experimentally determined heat transfer coefficients are accurate to within 15%. The error analysis also shows that the recovery temperature data may be in error by more than 50%. The results show that the recovery temperature data are only reliable when the recovery temperature of the jet is greater than 5 °C, i.e., the jet velocity is in excess of 100 m/s. Parameters that were investigated include nozzle width, distance from the nozzle exit to the airfoil surface, and jet velocity. Heat transfer data are presented in graphical and tabular forms. An engineering analysis of hypersonic airfoil leading edge cooling is performed using the results from these experiments. Several suggestions for the improvement of the experimental technique are discussed.
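
    The two-experiment solution mentioned above can be sketched for the idealized case of a semi-infinite substrate with constant h: two runs with different heating rates give two equations in the two unknowns (h, recovery temperature). The material properties and starting guesses below are hypothetical, and the thesis's full numerical iteration handles effects this sketch ignores.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erfc

def surface_temp(h, t_rec, t_init, t, k=0.19, alpha=1.1e-7):
    """Surface temperature of a semi-infinite solid suddenly exposed to
    convection (classic transient conduction solution)."""
    beta = h * np.sqrt(alpha * t) / k
    return t_init + (t_rec - t_init) * (1.0 - np.exp(beta**2) * erfc(beta))

def solve_h_and_recovery(run1, run2):
    """Each run is (t_init, t_surface_measured, elapsed_time); two runs
    with different heating rates yield h and the jet recovery temperature."""
    def residuals(p):
        h, t_rec = p
        return [surface_temp(h, t_rec, run1[0], run1[2]) - run1[1],
                surface_temp(h, t_rec, run2[0], run2[2]) - run2[1]]
    return fsolve(residuals, x0=[100.0, 40.0])  # guesses: W/m^2K, deg C
```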

  13. A study of the local pressure field in turbulent shear flow and its relation to aerodynamic noise generation

    NASA Technical Reports Server (NTRS)

    Jones, B. G.; Planchon, H. P., Jr.

    1973-01-01

    Work during the period of this report has been in three areas: (1) pressure transducer error analysis, (2) fluctuating velocity and pressure measurements in the NASA Lewis 6-inch diameter quiet jet facility, and (3) measurement analysis. A theory was developed and experimentally verified to quantify the pressure transducer velocity interference error. The theory and supporting experimental evidence show that the errors are a function of the velocity field's turbulent structure. It is shown that near the mixing layer center the errors are negligible. Turbulent velocity and pressure measurements were made in the NASA Lewis quiet jet facility. Some preliminary results are included.

  14. Implementation of an experimental fault-tolerant memory system

    NASA Technical Reports Server (NTRS)

    Carter, W. C.; Mccarthy, C. E.

    1976-01-01

    The experimental fault-tolerant memory system described in this paper has been designed to enable the modular addition of spares, to validate the theoretical fault-secure and self-testing properties of the translator/corrector, to provide a basis for experiments using the new testing and correction processes for recovery, and to determine the practicality of such systems. The hardware design and implementation are described, together with methods of fault insertion. The hardware/software interface, including a restricted single error correction/double error detection (SEC/DED) code, is specified. Procedures are carefully described which (1) test for specified physical faults, (2) ensure that single error corrections are not miscorrections due to triple faults, and (3) enable recovery from double errors.
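
    For illustration, a minimal extended-Hamming SEC/DED code on 4-bit words is sketched below. The paper's restricted code for the actual memory word width would differ, so treat this as a generic demonstration of the correction/detection logic only.

```python
def hamming84_encode(d):
    """Encode 4 data bits into an 8-bit extended Hamming codeword
    (SEC/DED: single error correction, double error detection)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    cw = [p1, p2, d1, p3, d2, d3, d4]   # Hamming(7,4), positions 1..7
    return cw + [sum(cw) % 2]           # append overall parity bit

def hamming84_decode(cw):
    """Return (data, status); status is 'ok', 'corrected', or
    'double-error' (uncorrectable, signalled to the system)."""
    c = list(cw[:7])
    s = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            s ^= pos                    # valid codewords XOR to zero
    overall = sum(cw) % 2               # extended parity check
    if s == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:                  # odd parity: a single error
        if s:
            c[s - 1] ^= 1               # flip the bit the syndrome names
        status = 'corrected'
    else:                               # syndrome set, parity even: 2 errors
        return None, 'double-error'
    return [c[2], c[4], c[5], c[6]], status

# round trip with an injected single-bit fault
cw = hamming84_encode([1, 0, 1, 1])
cw[4] ^= 1
print(hamming84_decode(cw))             # -> ([1, 0, 1, 1], 'corrected')
```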

  15. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
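
    The control-limit and capability calculations referenced above can be sketched with standard SPC formulas, as below; the individuals/moving-range chart and the Cp/Cpk definitions are textbook constructions, while the specification limits themselves would come from TG-142.

```python
import numpy as np

def individuals_chart_limits(x):
    """Control limits for an individuals (X/mR) chart; the constant
    1.128 is d2 for a moving range of span 2, so 3/d2 ~ 2.66."""
    mr = np.abs(np.diff(x))             # moving ranges
    center = x.mean()
    sigma_hat = mr.mean() / 1.128       # estimated process sigma
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

def capability(x, lsl, usl):
    """Process capability Cp and acceptability Cpk against spec limits."""
    mu, s = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mu, mu - lsl) / (3 * s)
    return cp, cpk
```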

  16. Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits.

    PubMed

    Gámez Serna, Citlalli; Ruichek, Yassine

    2017-06-14

    A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits without considering factors like road geometry. In this paper, we consider road curvature along with speed limits to automatically adjust the vehicle's speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, 'curve analysis extraction' and 'speed limits database creation' are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits and consequently to increase safety and comfort for the passenger.
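
    A minimal version of the curve-analysis step might look like the following: curvature of three consecutive GPS points from the circumscribed circle, then a curve speed capped by a comfort lateral acceleration and the legal limit. The lateral-acceleration bound and the local metric frame are assumptions of this sketch; the paper's off-line algorithm is more elaborate.

```python
import numpy as np

def curvature(p1, p2, p3):
    """Curvature (1/m) of the circle through three consecutive GPS
    points given as (x, y) in a local metric frame; kappa = 4A/(abc)."""
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p3 - p1)
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                - (p2[1] - p1[1]) * (p3[0] - p1[0]))  # 2 * triangle area
    if area2 == 0.0:
        return 0.0                      # collinear points: straight segment
    return 2.0 * area2 / (a * b * c)

def curve_speed(kappa, a_lat_max=2.0, v_limit=13.9):
    """Ideal speed (m/s): respect the legal limit and cap lateral
    acceleration at a_lat_max on curves."""
    if kappa <= 0.0:
        return v_limit
    return min(v_limit, np.sqrt(a_lat_max / kappa))
```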

  17. Minimizing Experimental Error in Thinning Research

    Treesearch

    C. B. Briscoe

    1964-01-01

    Many diverse approaches have been made to prescribing and evaluating thinnings on an objective basis. None of the techniques proposed has been widely accepted. Indeed, none has been proven superior to the others, nor even widely applicable. There are at least two possible reasons for this: none of the techniques suggested is of any general utility and/or experimental error...

  18. The cost of misremembering: Inferring the loss function in visual working memory.

    PubMed

    Sims, Chris R

    2015-03-04

    Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.

  19. Deterministic ion beam material adding technology for high-precision optical surfaces.

    PubMed

    Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin

    2013-02-20

    Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited capability to correct mid-to-high spatial frequency surface errors and the low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct pit defects on the surface and greatly improve the machining efficiency of the figuring process. Verification experiments were carried out on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect was figured using IBA. Through two iterations within only 47.5 min, this highly steep pit was effectively corrected, and the surface error was improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment was carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.

  20. A diffusion-limited reaction model for self-propagating Al/Pt multilayers with quench limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kittell, David E.; Yarrington, Cole D.; Hobbs, M. L.

    A diffusion-limited reaction model was calibrated for Al/Pt multilayers ignited on oxidized silicon, sapphire, and tungsten substrates, as well as for some Al/Pt multilayers ignited as free-standing foils. The model was implemented in a finite element analysis code and used to match experimental burn front velocity data collected from several years of testing at Sandia National Laboratories. Moreover, both the simulations and experiments reveal well-defined quench limits in the total Al + Pt layer (i.e., bilayer) thickness. At these limits, the heat generated from atomic diffusion is insufficient to support a self-propagating wave front on top of the substrates. Quench limits for reactive multilayers are seldom reported and are found to depend on the thermal properties of the individual layers. Here, the diffusion-limited reaction model is generalized to allow for temperature- and composition-dependent material properties, phase change, and anisotropic thermal conductivity. Utilizing this increase in model fidelity, excellent overall agreement is shown between the simulations and experimental results with a single calibrated parameter set. However, the burn front velocities of Al/Pt multilayers ignited on tungsten substrates are over-predicted. Finally, possible sources of error are discussed and a higher activation energy (from 41.9 kJ/mol.at. to 47.5 kJ/mol.at.) is shown to bring the simulations into agreement with the velocity data observed on tungsten substrates. This higher activation energy suggests an inhibited diffusion mechanism present at lower heating rates.

  1. A diffusion-limited reaction model for self-propagating Al/Pt multilayers with quench limits

    DOE PAGES

    Kittell, David E.; Yarrington, Cole D.; Hobbs, M. L.; ...

    2018-04-14

    A diffusion-limited reaction model was calibrated for Al/Pt multilayers ignited on oxidized silicon, sapphire, and tungsten substrates, as well as for some Al/Pt multilayers ignited as free-standing foils. The model was implemented in a finite element analysis code and used to match experimental burn front velocity data collected from several years of testing at Sandia National Laboratories. Moreover, both the simulations and experiments reveal well-defined quench limits in the total Al + Pt layer (i.e., bilayer) thickness. At these limits, the heat generated from atomic diffusion is insufficient to support a self-propagating wave front on top of the substrates. Quench limits for reactive multilayers are seldom reported and are found to depend on the thermal properties of the individual layers. Here, the diffusion-limited reaction model is generalized to allow for temperature- and composition-dependent material properties, phase change, and anisotropic thermal conductivity. Utilizing this increase in model fidelity, excellent overall agreement is shown between the simulations and experimental results with a single calibrated parameter set. However, the burn front velocities of Al/Pt multilayers ignited on tungsten substrates are over-predicted. Finally, possible sources of error are discussed and a higher activation energy (from 41.9 kJ/mol.at. to 47.5 kJ/mol.at.) is shown to bring the simulations into agreement with the velocity data observed on tungsten substrates. This higher activation energy suggests an inhibited diffusion mechanism present at lower heating rates.
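
    To give a feel for the reported change in activation energy, a simple Arrhenius ratio shows how much a thermally activated rate slows when Ea rises from 41.9 to 47.5 kJ/mol. The temperature below is illustrative only, and equal prefactors are assumed; the multilayer model's heat balance is not reproduced.

```python
import numpy as np

R_GAS = 8.314e-3  # kJ/(mol K)

def arrhenius_ratio(ea_low, ea_high, temp_k):
    """Ratio of Arrhenius rates exp(-Ea/RT) for two activation energies
    at the same temperature (prefactors assumed equal)."""
    return np.exp((ea_high - ea_low) / (R_GAS * temp_k))

# e.g., at 800 K the 47.5 kJ/mol barrier is ~2.3x slower than 41.9 kJ/mol
print(arrhenius_ratio(41.9, 47.5, 800.0))
```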

  2. Exploring Reactions to Pilot Reliability Certification and Changing Attitudes on the Reduction of Errors

    ERIC Educational Resources Information Center

    Boedigheimer, Dan

    2010-01-01

    Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…

  3. Achieving diffraction-limited nanometer-scale X-ray point focus with two crossed multilayer Laue lenses: alignment challenges

    DOE PAGES

    Yan, Hanfei; Huang, Xiaojing; Bouet, Nathalie; ...

    2017-10-16

    In this article, we discuss misalignment-induced aberrations in a pair of crossed multilayer Laue lenses used for achieving a nanometer-scale x-ray point focus. We thoroughly investigate the impacts of the two most important contributions, the orthogonality and the separation distance between the two lenses. We find that misalignment in the orthogonality results in astigmatism at 45° and other inclination angles when coupled with a separation distance error. Theoretical explanation and experimental verification are provided. We show that to achieve a diffraction-limited point focus, accurate alignment of the azimuthal angle is required to ensure orthogonality between the two lenses, and the required accuracy is scaled with the ratio of the focus size to the aperture size.

  4. Structure and Processing in Tunisian Arabic: Speech Error Data

    ERIC Educational Resources Information Center

    Hamrouni, Nadia

    2010-01-01

    This dissertation presents experimental research on speech errors in Tunisian Arabic. The nonconcatenative morphology of Arabic shows interesting interactions of phrasal and lexical constraints with morphological structure during language production. The central empirical questions revolve around properties of "exchange errors". These…

  5. Strong decays of DJ(3000) and DsJ(3040)

    NASA Astrophysics Data System (ADS)

    Li, Si-Chen; Wang, Tianhong; Jiang, Yue; Tan, Xiao-Ze; Li, Qiang; Wang, Guo-Li; Chang, Chao-Hsi

    2018-03-01

    In this paper, we systematically calculate the two-body strong decays of the newly observed DJ(3000) and DsJ(3040) with 2P(1+) and 2P(1+') assignments in an instantaneous approximation of the Bethe-Salpeter equation method. Our results show that both resonances can be explained as the 2P(1+') state with broad width via 3P1 and 1P1 mixing in the D and Ds families. For DJ(3000), the total width is 229.6 MeV in our calculation, close to the upper limit of the experimental data, and the dominant decay channels are D2*π, D*π, and D*(2600)π. For DsJ(3040), the total width is 157.4 MeV in our calculation, close to the lower limit of the experimental data, and the dominant channels are D*K and D*K*. These results are consistent with the channels observed in experiments. Given the very little information that has been obtained from experiments and the large error bars on the total decay widths, we recommend the detection of the dominant channels found in our calculation.

  6. Locality of Area Coverage on Digital Acoustic Communication in Air using Differential Phase Shift Keying

    NASA Astrophysics Data System (ADS)

    Mizutani, Keiichi; Ebihara, Tadashi; Wakatsuki, Naoto; Mizutani, Koichi

    2009-07-01

    We experimentally evaluate the locality of digital acoustic communication in air. Digital acoustic communication in air is suitable for a small cell system, because acoustic waves have a short propagation distance in air. In this study, the optimal cell size is experimentally evaluated. Each base station (BS) transmits different commands. In our experiment, differential phase shift keying (DPSK), specifically binary DPSK (DBPSK), is adopted as the modulation and demodulation scheme. The evaluated system consists of a personal computer (PC), a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), a loud speaker (SP), a microphone (MIC), and transceiver software. All experiments are performed in an anechoic room. The cell size of the transmitter can be limited under low signal-to-noise ratio (SNR) conditions. If another transmitter is operating, the cell size is limited by the effect of interference from that transmitter. The cell size-to-distance ratio of transmitter A to transmitter B is 37.5% if the cell edge bit-error-rate (BER) is taken as 10⁻³.
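
    A DBPSK modem reduces to a few lines, as sketched below for a simulated AWGN channel; the acoustic front end (speaker/microphone, carrier, equalization) of the actual system is omitted, and the SNR value is arbitrary.

```python
import numpy as np

def dbpsk_mod(bits):
    """Differential encoding: the phase flips for a 1, holds for a 0.
    The first symbol is an unmodulated reference."""
    sym = [1.0 + 0j]
    for b in bits:
        sym.append(sym[-1] * (-1.0 if b else 1.0))
    return np.array(sym)

def dbpsk_demod(rx):
    """Decide on the phase difference of consecutive symbols; no
    absolute carrier-phase reference is required."""
    return (np.real(rx[1:] * np.conj(rx[:-1])) < 0).astype(int)

# Bit-error-rate check over a simulated AWGN channel
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 100_000)
tx = dbpsk_mod(bits)
snr_db = 8.0
sigma = np.sqrt(0.5 / 10.0**(snr_db / 10.0))   # per-dimension noise std
rx = tx + sigma * (rng.standard_normal(tx.size)
                   + 1j * rng.standard_normal(tx.size))
print("BER:", np.mean(dbpsk_demod(rx) != bits))
```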

  7. Scoring-and-unfolding trimmed tree assembler: concepts, constructs and comparisons.

    PubMed

    Narzisi, Giuseppe; Mishra, Bud

    2011-01-15

    Mired by its connection to a well-known NP-complete combinatorial optimization problem, namely the Shortest Common Superstring Problem (SCSP), the whole-genome sequence assembly (WGSA) problem has historically been assumed to be amenable only to greedy and heuristic methods. By placing efficiency as their first priority, these methods opted to rely only on local searches, and are thus inherently approximate, ambiguous or error prone, especially for genomes with complex structures. Furthermore, since the choice of the best heuristics depends critically on the properties of (e.g. errors in) the input data and the available long range information, these approaches hindered designing an error-free WGSA pipeline. We dispense with the idea of limiting the solutions to just the approximated ones, and instead favor an approach that could potentially lead to an exhaustive (exponential-time) search of all possible layouts. Its computational complexity thus must be tamed through a constrained search (Branch-and-Bound) and quick identification and pruning of implausible overlays. For this purpose, such a method necessarily relies on a set of score functions (oracles) that can combine different structural properties (e.g. transitivity, coverage, physical maps, etc.). We give a detailed description of this novel assembly framework, referred to as Scoring-and-Unfolding Trimmed Tree Assembler (SUTTA), and present experimental results on several bacterial genomes using next-generation sequencing technology data. We also report experimental evidence that the assembly quality strongly depends on the choice of the minimum overlap parameter k. SUTTA's binaries are freely available to non-profit institutions for research and educational purposes at http://www.bioinformatics.nyu.edu.
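
    The constrained-search idea can be conveyed with a generic depth-first branch-and-bound skeleton; the `children`, `score`, and `bound` callables stand in for SUTTA's layout expansion and scoring oracles and are assumptions of this sketch, not the tool's API.

```python
def branch_and_bound(root, children, score, bound):
    """Depth-first search over partial layouts; a branch is pruned when
    its optimistic bound cannot beat the best complete layout found."""
    best, best_score = None, float("-inf")
    stack = [root]
    while stack:
        node = stack.pop()
        kids = children(node)
        if not kids:                       # leaf: a complete layout
            s = score(node)
            if s > best_score:
                best, best_score = node, s
            continue
        for child in kids:
            if bound(child) > best_score:  # oracle-based pruning
                stack.append(child)
    return best, best_score
```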

  8. Soil respiration patterns in root gaps 27 years after small scale experimental disturbance in Pinus contorta forests

    NASA Astrophysics Data System (ADS)

    Baker, S.; Berryman, E.; Hawbaker, T. J.; Ewers, B. E.

    2015-12-01

    While much attention has been focused on large scale forest disturbances such as fire, harvesting, drought and insect attacks, small scale forest disturbances that create gaps in forest canopies and below-ground root and mycorrhizal networks may accumulate to impact regional scale carbon budgets. In a lodgepole pine (Pinus contorta) forest near Fox Park, WY, clusters of 15 and 30 trees were removed in 1988 to assess the effect of tree gap disturbance on fine root density and nitrogen transformation. Twenty-seven years later the gaps remain, with limited regeneration present only in the center of the 30-tree plots, beyond the influence of roots from adjacent intact trees. Soil respiration was measured in the summer of 2015 to assess the influence of these disturbances on carbon cycling in Pinus contorta forests. Positions at the centers of experimental disturbances were found to have the lowest respiration rates (mean 2.45 μmol C/m2/s, standard error 0.17 μmol C/m2/s), control plots in the undisturbed forest were highest (mean 4.15 μmol C/m2/s, standard error 0.63 μmol C/m2/s), and positions near the margin of the disturbance were intermediate (mean 3.7 μmol C/m2/s, standard error 0.34 μmol C/m2/s). Fine root densities, soil nitrogen, and microclimate changes were also measured and played an important role in respiration rates of disturbed plots. This demonstrates that a long-term effect on carbon cycling occurs when gaps are created in the canopy and root network of lodgepole forests.

  9. Implementation of adiabatic geometric gates with superconducting phase qubits.

    PubMed

    Peng, Z H; Chu, H F; Wang, Z D; Zheng, D N

    2009-01-28

    We present an adiabatic geometric quantum computation strategy based on the non-degenerate energy eigenstates in (but not limited to) superconducting phase qubit systems. The fidelity of the designed quantum gate was evaluated in the presence of simulated thermal fluctuations in a superconducting phase qubits circuit and was found to be quite robust against random errors. In addition, it was elucidated that the Berry phase in the designed adiabatic evolution may be detected directly via the quantum state tomography developed for superconducting qubits. We also analyze the effects of control parameter fluctuations on the experimental detection of the Berry phase.

  10. An evaluation method for nanoscale wrinkle

    NASA Astrophysics Data System (ADS)

    Liu, Y. P.; Wang, C. G.; Zhang, L. M.; Tan, H. F.

    2016-06-01

    In this paper, a spectrum-based wrinkling analysis method via two-dimensional Fourier transformation is proposed, aiming to solve the difficulty of nanoscale wrinkle evaluation. It evaluates wrinkle characteristics, including wrinkling wavelength and direction, using only a single wrinkling image. Based on this method, the evaluation results for nanoscale wrinkle characteristics agree with published experimental results to within an error of 6%. The method is also verified to be appropriate for macro-scale wrinkle evaluation, without scale limitations. The spectrum-based wrinkling analysis is an effective method for nanoscale evaluation, which helps to reveal the mechanism of nanoscale wrinkling.
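
    The spectrum-based evaluation can be sketched with a 2-D FFT: the dominant off-centre peak gives the wrinkle wavelength and direction. Windowing, peak interpolation, and noise handling are omitted, and `pixel_size` (the physical length a pixel spans) is an assumed input.

```python
import numpy as np

def wrinkle_from_image(img, pixel_size):
    """Dominant wrinkle wavelength and direction from one image via the
    2-D Fourier spectrum; the DC term is removed by mean subtraction."""
    spec = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(spec)**2
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=pixel_size))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1], d=pixel_size))
    iy, ix = np.unravel_index(np.argmax(power), power.shape)
    freq = np.hypot(fx[ix], fy[iy])        # spatial frequency of the peak
    wavelength = np.inf if freq == 0 else 1.0 / freq
    direction = np.degrees(np.arctan2(fy[iy], fx[ix]))  # normal to crests
    return wavelength, direction
```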

  11. Orbital-angular-momentum-multiplexed free-space optical communication link using transmitter lenses.

    PubMed

    Li, Long; Xie, Guodong; Ren, Yongxiong; Ahmed, Nisar; Huang, Hao; Zhao, Zhe; Liao, Peicheng; Lavery, Martin P J; Yan, Yan; Bao, ChangJing; Wang, Zhe; Willner, Asher J; Ashrafi, Nima; Ashrafi, Solyman; Tur, Moshe; Willner, Alan E

    2016-03-10

    In this paper, we explore the potential benefits and limitations of using transmitter lenses in an orbital-angular-momentum (OAM)-multiplexed free-space optical (FSO) communication link. Both simulation and experimental results indicate that within certain transmission distances, using lenses at the transmitter to focus OAM beams could reduce power loss in OAM-based FSO links and that this improvement might be more significant for higher-order OAM beams. Moreover, the use of transmitter lenses could enhance system tolerance to angular error between transmitter and receiver, but they might degrade tolerance to lateral displacement.

  12. Noise from Propellers with Symmetrical Sections at Zero Blade Angle

    NASA Technical Reports Server (NTRS)

    Deming, A F

    1937-01-01

    A theory has been deduced for the "rotation noise" from a propeller with blades of symmetrical section about the chord line and set at zero blade angle. Owing to the limitation of the theory, the equations give without appreciable error only the sound pressure for cases where the wavelengths are large compared with the blade lengths. With the aid of experimental data obtained from a two-blade arrangement, an empirical relation was introduced that permitted calculation of higher harmonics. The generality of the final relation given is indicated by the fundamental and second harmonic of a four-blade arrangement.

  13. Computer Simulations to Study Diffraction Effects of Stacking Faults in Beta-SiC: II. Experimental Verification

    NASA Technical Reports Server (NTRS)

    Pujar, Vijay V.; Cawley, James D.; Levine, S. (Technical Monitor)

    2000-01-01

    Earlier results from computer simulation studies suggest a correlation between the spatial distribution of stacking errors in the Beta-SiC structure and features observed in X-ray diffraction patterns of the material. Reported here are experimental results obtained from two types of nominally Beta-SiC specimens, which yield distinct XRD data. These samples were analyzed using high resolution transmission electron microscopy (HRTEM) and the stacking error distribution was directly determined. The HRTEM results compare well with those deduced by matching the XRD data with simulated spectra, confirming the hypothesis that the XRD data are indicative not only of the presence and density of stacking errors, but also that they can yield information regarding their distribution. In addition, the stacking error population in both specimens is related to their synthesis conditions, and this relation appears similar to the one developed by others to explain the formation of the corresponding polytypes.

  14. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: (1) sensor sub-system errors, (2) terrain influences, and (3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at the scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals falling below error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
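
    As a highly simplified flavour of the sensor-subsystem modelling, first-order propagation of ranging and scan-angle noise into per-pulse vertical error (flat terrain, nadir-referenced geometry) is sketched below; the noise magnitudes are placeholders, and the full model couples GPS, IMU, scanner, ranger, and terrain terms.

```python
import numpy as np

def vertical_sigma(r, theta, sigma_r=0.02, sigma_theta=np.radians(0.01)):
    """For z = r*cos(theta) (range r, scan angle theta from nadir),
    sigma_z = sqrt((cos(t)*s_r)^2 + (r*sin(t)*s_th)^2); errors grow
    with altitude and scan angle, consistent with the trends above."""
    return np.hypot(np.cos(theta) * sigma_r,
                    r * np.sin(theta) * sigma_theta)

# e.g., 1200 m altitude, nadir vs a 15 degree scan angle
print(vertical_sigma(1200.0, 0.0), vertical_sigma(1200.0, np.radians(15.0)))
```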

  15. Progressive Care Nurses Improving Patient Safety by Limiting Interruptions During Medication Administration.

    PubMed

    Flynn, Fran; Evanish, Julie Q; Fernald, Josephine M; Hutchinson, Dawn E; Lefaiver, Cheryl

    2016-08-01

    Because of the high frequency of interruptions during medication administration, the effectiveness of strategies to limit interruptions during medication administration has been evaluated in numerous quality improvement initiatives in an effort to reduce medication administration errors. To evaluate the effectiveness of evidence-based strategies to limit interruptions during scheduled, peak medication administration times in 3 progressive cardiac care units (PCCUs). A secondary aim of the project was to evaluate the impact of limiting interruptions on medication errors. The percentages of interruptions and medication errors before and after implementation of evidence-based strategies to limit interruptions were measured by using direct observations of nurses on 2 PCCUs. Nurses in a third PCCU served as a comparison group. Interruptions (P < .001) and medication errors (P = .02) decreased significantly in 1 PCCU after implementation of evidence-based strategies to limit interruptions. Avoidable interruptions decreased 83% in PCCU1 and 53% in PCCU2 after implementation of the evidence-based strategies. Implementation of evidence-based strategies to limit interruptions in PCCUs decreases avoidable interruptions and promotes patient safety. ©2016 American Association of Critical-Care Nurses.

  16. Correcting For Seed-Particle Lag In LV Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.

    1994-01-01

    Two experiments conducted to evaluate effects of sizes of seed particles on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods used to evaluate errors. First experiment focused on measurement of decelerating stagnation streamline of low-speed flow around circular cylinder with two-dimensional afterbody. Second performed in transonic flow and involved measurement of decelerating stagnation streamline of hemisphere with cylindrical afterbody. Concluded, mean-quantity LV measurements subject to large errors directly attributable to sizes of particles. Predictions of particle-response theory showed good agreement with experimental results, indicating velocity-error-correction technique used in study viable for increasing accuracy of laser velocimetry measurements. Technique simple and useful in any research facility in which flow velocities measured.
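
    The particle-response theory invoked here is, at first order, Stokes-drag relaxation toward the local fluid velocity. The sketch below integrates that equation along a decelerating streamline; the diameter, density, and viscosity values are illustrative, and unsteady drag terms are neglected.

```python
import numpy as np

def particle_velocity(u_fluid, t, d_p=1.0e-6, rho_p=1000.0, mu=1.8e-5):
    """Integrate dv/dt = (u - v)/tau with tau = rho_p*d_p**2/(18*mu)
    (Stokes response time). u_fluid and t are arrays along the path."""
    tau = rho_p * d_p**2 / (18.0 * mu)
    v = np.empty_like(u_fluid)
    v[0] = u_fluid[0]
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        # exact update of the linear relaxation over one step,
        # assuming u is roughly constant within the step
        v[k] = u_fluid[k] + (v[k - 1] - u_fluid[k]) * np.exp(-dt / tau)
    return v
```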

  17. Measurement of Fracture Aperture Fields Using Transmitted Light: An Evaluation of Measurement Errors and their Influence on Simulations of Flow and Transport through a Single Fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.

    Understanding of single and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.

  18. Charmed and light pseudoscalar meson decay constants from four-flavor lattice QCD with physical light quarks

    DOE PAGES

    Bazavov, A.; Bernard, C.; Komijani, J.; ...

    2014-10-30

    We compute the leptonic decay constants f_{D^+}, f_{D_s}, and f_{K^+}, and the quark-mass ratios m_c/m_s and m_s/m_l in unquenched lattice QCD using the experimentally determined value of f_{π^+} for normalization. We use the MILC Highly Improved Staggered Quark (HISQ) ensembles with four dynamical quark flavors -- up, down, strange, and charm -- and with both physical and unphysical values of the light sea-quark masses. The use of physical pions removes the need for a chiral extrapolation, thereby eliminating a significant source of uncertainty in previous calculations. Four different lattice spacings ranging from a ≈ 0.06 fm to 0.15 fm are included in the analysis to control the extrapolation to the continuum limit. Our primary results are f_{D^+} = 212.6(0.4)$$(^{+1.0}_{-1.2})$$ MeV, f_{D_s} = 249.0(0.3)$$(^{+1.1}_{-1.5})$$ MeV, and f_{D_s}/f_{D^+} = 1.1712(10)$$(^{+29}_{-32})$$, where the errors are statistical and total systematic, respectively. The errors on our results for the charm decay constants and their ratio are approximately two to four times smaller than those of the most precise previous lattice calculations. We also obtain f_{K^+}/f_{π^+} = 1.1956(10)$$(^{+26}_{-18})$$, updating our previous result, and determine the quark-mass ratios m_s/m_l = 27.35(5)$$(^{+10}_{-7})$$ and m_c/m_s = 11.747(19)$$(^{+59}_{-43})$$. When combined with experimental measurements of the decay rates, our results lead to precise determinations of the CKM matrix elements |V_us| = 0.22487(51)(29)(20)(5), |V_cd| = 0.217(1)(5)(1) and |V_cs| = 1.010(5)(18)(6), where the errors are from this calculation of the decay constants, the uncertainty in the experimental decay rates, structure-dependent electromagnetic corrections, and, in the case of |V_us|, the uncertainty in |V_ud|, respectively.

  19. Estimated landmark calibration of biomechanical models for inverse kinematics.

    PubMed

    Trinler, Ursula; Baker, Richard

    2018-01-01

    Inverse kinematics is emerging as the optimal method in movement analysis for fitting a multi-segment biomechanical model to experimental marker positions. A key part of this process is calibrating the model to the dimensions of the individual being analysed, which requires scaling of the model, pose estimation and localisation of tracking markers within the relevant segment coordinate systems. The aim of this study is to propose a generic technique for this process and test a specific application to the OpenSim model Gait2392. Kinematic data from 10 healthy adult participants were captured in a static position and during normal walking. Results showed small average static and dynamic fitting errors between virtual and experimental markers of 0.8 cm and 0.9 cm, respectively. The highest fitting errors were found on the epicondyle (static), the feet (static, dynamic) and the thigh (dynamic). These result from inconsistencies between the model geometry and degrees of freedom and the anatomy and movement pattern of the individual participants. A particular limitation lies in estimating anatomical landmarks from the bone meshes supplied with Gait2392, which do not conform to the bone morphology of the participants studied. Soft tissue artefact will also affect fitting the model to walking trials. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  20. Bed turbulent kinetic energy boundary conditions for trapping efficiency and spatial distribution of sediments in basins.

    PubMed

    Isenmann, Gilles; Dufresne, Matthieu; Vazquez, José; Mose, Robert

    2017-10-01

    The purpose of this study is to develop and validate a numerical tool for evaluating the performance of a settling basin regarding the trapping of suspended matter. The Euler-Lagrange approach was chosen to model the flow and sediment transport. The numerical model developed relies on the open source library OpenFOAM®, enhanced with new particle/wall interaction conditions to limit sediment deposition in zones with favourable hydrodynamic conditions (shear stress, turbulent kinetic energy). In particular, a new relation is proposed for calculating the turbulent kinetic energy threshold as a function of the properties of each particle (diameter and density). The numerical model is compared with three experimental datasets taken from the literature, collected for scale models of basins. The comparison of the numerical and experimental results shows the model's capacity to predict the trapping of particles in a settling basin with an absolute error in the region of 5% when sediment deposition occurs over the entire bed. In the case of sediment deposition localised in preferential zones, the distribution is reproduced well by the model and trapping efficiency is evaluated with an absolute error in the region of 10% (excluding cases of particles with very low density).

  1. Recovery comparisons--hot nitrogen Vs steam regeneration of toxic dichloromethane from activated carbon beds in oil sands process.

    PubMed

    Ramalingam, Shivaji G; Pré, Pascaline; Giraudet, Sylvain; Le Coq, Laurence; Le Cloirec, Pierre; Baudouin, Olivier; Déchelotte, Stéphane

    2012-02-29

    The regeneration of dichloromethane from an activated carbon bed was carried out with both hot nitrogen and steam to evaluate the regeneration performance and the operating cost of the regeneration step. A Factorial Experimental Design (FED) tool was used to optimize the nitrogen temperature and superficial velocity so as to achieve maximum regeneration at an optimized operating cost. All experimental results for the adsorption step and for the hot nitrogen and steam regeneration steps were validated with the simulation model PROSIM. The average error between simulation and experiment for the mass of dichloromethane adsorbed was 2.6%. The average errors between simulations and experiments for the mass of dichloromethane recovered by nitrogen regeneration and steam regeneration were 3% and 12%, respectively. The experiments showed that both hot nitrogen and steam regeneration recovered 84% of the dichloromethane. The choice between hot nitrogen and steam regeneration, however, depends on the regeneration time, the operating costs, and the purity of the dichloromethane recovered. A thorough investigation of the advantages and limitations of both hot nitrogen and steam regeneration of dichloromethane is presented. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Computerized techniques pave the way for drug-drug interaction prediction and interpretation

    PubMed Central

    Safdari, Reza; Ferdousi, Reza; Aziziheris, Kamal; Niakan-Kalhori, Sharareh R.; Omidi, Yadollah

    2016-01-01

    Introduction: The health care industry, and ultimately patients, are penalized by medical errors that are inevitable but highly preventable. The vast majority of medical errors are related to adverse drug reactions, and drug-drug interactions (DDIs) are the main cause of adverse drug reactions (ADRs). DDIs and ADRs have mainly been reported through haphazard case studies. Experimental in vivo and in vitro research also reveals DDI pairs. Laboratory and experimental research is valuable but expensive, and in some cases researchers may face practical limitations. Methods: In the current investigation, the latest published works were studied to analyze the trend and pattern of DDI modelling and the impact of machine learning methods. Applications of computerized techniques were also investigated for the prediction and interpretation of DDIs. Results: Computerized data-mining in the pharmaceutical sciences and related databases provides new key transformative paradigms that can revolutionize the treatment of diseases and hence medical care. Given that various aspects of drug discovery and pharmacotherapy are closely related to clinical and molecular/biological information, scientifically sound databases (e.g., of DDIs and ADRs) can be important for the success of pharmacotherapy modalities. Conclusion: A better understanding of DDIs not only provides a robust means for designing more effective medicines but also guarantees patient safety. PMID:27525223

  3. Design and Implementation of an Intrinsically Safe Liquid-Level Sensor Using Coaxial Cable

    PubMed Central

    Jin, Baoquan; Liu, Xin; Bai, Qing; Wang, Dong; Wang, Yu

    2015-01-01

    Real-time detection of liquid level in complex environments has always been a knotty issue. In this paper, an intrinsically safe liquid-level sensor system for flammable and explosive environments is designed and implemented. Poly vinyl chloride (PVC) coaxial cable is chosen as the sensing element and the measuring mechanism is analyzed. Then, the capacitance-to-voltage conversion circuit is designed and the expected output signal is achieved through parameter optimization. Furthermore, the experimental platform of the liquid-level sensor system is constructed, covering the entire process of measuring, converting, filtering, processing, visualizing and communicating. Additionally, the system is designed for intrinsic safety by limiting the energy of the circuit to avoid or suppress thermal effects and sparks. Finally, a piecewise linearization approach is adopted to improve the measuring accuracy by matching appropriate calibration points. The test results demonstrate that over the measurement range of 1.0 m, the maximum nonlinearity error is 0.8% full-scale span (FSS), the maximum repeatability error is 0.5% FSS, and the maximum hysteresis error is reduced from 0.7% FSS to 0.5% FSS by applying software compensation algorithms. PMID:26029949

  4. Design and implementation of an intrinsically safe liquid-level sensor using coaxial cable.

    PubMed

    Jin, Baoquan; Liu, Xin; Bai, Qing; Wang, Dong; Wang, Yu

    2015-05-28

    Real-time detection of liquid level in complex environments has always been a knotty issue. In this paper, an intrinsically safe liquid-level sensor system for flammable and explosive environments is designed and implemented. Poly vinyl chloride (PVC) coaxial cable is chosen as the sensing element and the measuring mechanism is analyzed. Then, the capacitance-to-voltage conversion circuit is designed and the expected output signal is achieved through parameter optimization. Furthermore, the experimental platform of the liquid-level sensor system is constructed, covering the entire process of measuring, converting, filtering, processing, visualizing and communicating. Additionally, the system is designed for intrinsic safety by limiting the energy of the circuit to avoid or suppress thermal effects and sparks. Finally, a piecewise linearization approach is adopted to improve the measuring accuracy by matching appropriate calibration points. The test results demonstrate that over the measurement range of 1.0 m, the maximum nonlinearity error is 0.8% full-scale span (FSS), the maximum repeatability error is 0.5% FSS, and the maximum hysteresis error is reduced from 0.7% FSS to 0.5% FSS by applying software compensation algorithms.
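
    The piecewise-linearization step lends itself to a one-line interpolation between calibration points, as sketched below; the voltage/level pairs are hypothetical bench values, not the authors' calibration data.

```python
import numpy as np

# Hypothetical calibration pairs: converter output voltage at known levels
v_cal = np.array([0.52, 1.08, 1.61, 2.17, 2.70, 3.25])  # volts (assumed)
h_cal = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # metres

def level_from_voltage(v):
    """Piecewise-linear map from measured voltage to liquid level;
    np.interp clamps readings outside the calibrated range."""
    return np.interp(v, v_cal, h_cal)

print(level_from_voltage(1.9))   # -> a level between 0.4 m and 0.6 m
```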

  5. Tailored Codes for Small Quantum Memories

    NASA Astrophysics Data System (ADS)

    Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.

    2017-12-01

    We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison with the observed error model in a recent seven-qubit trapped ion experiment.
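
    To illustrate why tailoring to noise bias matters, the sketch below Monte Carlo-estimates the logical failure rate of the simplest X-distance-3 code (a 3-qubit bit-flip repetition code, not the cyclic code studied in the paper) under independent bit and phase flips at different rates; the code corrects the dominant bit flips but is blind to phase flips:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def logical_failure_rate(p_x, p_z, trials=100_000):
        """3-qubit bit-flip repetition code: majority vote corrects a single X
        error, but any odd number of Z errors acts as an undetected logical Z."""
        x_flips = rng.random((trials, 3)) < p_x
        z_flips = rng.random((trials, 3)) < p_z
        x_fail = x_flips.sum(axis=1) >= 2        # majority vote fails
        z_fail = (z_flips.sum(axis=1) % 2) == 1  # undetected logical phase flip
        return np.mean(x_fail | z_fail)

    # Biased noise: bit flips 100x more likely than phase flips.
    print(logical_failure_rate(p_x=1e-2, p_z=1e-4))  # ~6e-4, vs ~1e-2 unencoded
    ```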

  6. Evaluation and Analysis of F-16XL Wind Tunnel Data From Static and Dynamic Tests

    NASA Technical Reports Server (NTRS)

    Kim, Sungwan; Murphy, Patrick C.; Klein, Vladislav

    2004-01-01

    A series of wind tunnel tests were conducted in the NASA Langley Research Center as part of an ongoing effort to develop and test mathematical models for aircraft rigid-body aerodynamics in nonlinear unsteady flight regimes. Analysis of measurement accuracy, especially for nonlinear dynamic systems that may exhibit complicated behaviors, is an essential component of this ongoing effort. In this report, tools for harmonic analysis of dynamic data and assessing measurement accuracy are presented. A linear aerodynamic model is assumed that is appropriate for conventional forced-oscillation experiments, although more general models can be used with these tools. Application of the tools to experimental data is demonstrated and results indicate the levels of uncertainty in output measurements that can arise from experimental setup, calibration procedures, mechanical limitations, and input errors.

  7. Diode laser spectroscopy: precise spectral line shape measurements

    NASA Astrophysics Data System (ADS)

    Nadezhdinskii, A. I.

    1996-07-01

    When one speaks about modern trends in tunable diode laser spectroscopy (TDLS), one should mention that precise line shape measurements have become one of the most promising applications of diode lasers in high resolution molecular spectroscopy. Accuracy limitations of TDL spectrometers are considered in this paper, proving the ability to measure a spectral line profile with precision better than 1%. A four-parameter Voigt profile is used to fit the experimental spectrum, and the possibility of line shift measurements with an accuracy of 2 × 10⁻⁵ cm⁻¹ is shown. Test experiments demonstrate that the error in line intensity ratios is less than 0.3% for the proposed approach. Differences between "soft" and "hard" models of line shape have been observed experimentally for the first time. Some observed resonance effects are considered with respect to collision adiabaticity.
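
    A four-parameter Voigt fit of the kind described can be sketched with SciPy's voigt_profile (parameters: line area, center, Gaussian width, Lorentzian width); synthetic data stand in for the measured spectrum, and all numeric values are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import voigt_profile

    def voigt(nu, area, nu0, sigma, gamma):
        """Four-parameter Voigt: line area, center, Gaussian/Lorentzian widths."""
        return area * voigt_profile(nu - nu0, sigma, gamma)

    # Synthetic stand-in for a recorded TDL spectrum (units: cm^-1).
    nu = np.linspace(-0.5, 0.5, 400)
    rng = np.random.default_rng(1)
    data = voigt(nu, 1.0, 0.002, 0.05, 0.03) + rng.normal(0, 0.002, nu.size)

    popt, pcov = curve_fit(voigt, nu, data, p0=[1.0, 0.0, 0.04, 0.04])
    print("line center:", popt[1], "+/-", np.sqrt(pcov[1, 1]))  # shift estimate
    ```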

  8. Testing large aspheric surfaces with complementary annular subaperture interferometric method

    NASA Astrophysics Data System (ADS)

    Hou, Xi; Wu, Fan; Lei, Baiping; Fan, Bin; Chen, Qiang

    2008-07-01

    Annular subaperture interferometric method has provided an alternative solution to testing rotationally symmetric aspheric surfaces with low cost and flexibility. However, some new challenges, particularly in the motion and algorithm components, appear when applied to large aspheric surfaces with large departure in the practical engineering. Based on our previously reported annular subaperture reconstruction algorithm with Zernike annular polynomials and matrix method, and the experimental results for an approximate 130-mm diameter and f/2 parabolic mirror, an experimental investigation by testing an approximate 302-mm diameter and f/1.7 parabolic mirror with the complementary annular subaperture interferometric method is presented. We have focused on full-aperture reconstruction accuracy, and discuss some error effects and limitations of testing larger aspheric surfaces with the annular subaperture method. Some considerations about testing a sector segment with complementary sector subapertures are provided.

  9. Scalable randomized benchmarking of non-Clifford gates

    NASA Astrophysics Data System (ADS)

    Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay

    Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.
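
    Whatever the gate set, the average error extracted from a randomized benchmarking experiment ultimately comes from fitting an exponential survival decay over sequence length. A generic single-qubit sketch on synthetic data (the group-specific details of the protocol above are not modeled):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, p, B):
        """Randomized-benchmarking decay model: survival(m) = A * p**m + B."""
        return A * p**m + B

    lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
    rng = np.random.default_rng(0)
    survival = rb_decay(lengths, 0.5, 0.98, 0.5) + rng.normal(0, 0.005, lengths.size)

    (A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.95, 0.5])
    print("decay parameter p:", p)
    print("average error per gate (single qubit):", (1 - p) / 2)
    ```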

  10. Modeling and evaluating the performance of Brillouin distributed optical fiber sensors.

    PubMed

    Soto, Marcelo A; Thévenaz, Luc

    2013-12-16

    A thorough analysis of the key factors impacting the performance of Brillouin distributed optical fiber sensors is presented. An analytical expression is derived to estimate the error on the determination of the Brillouin peak gain frequency, based for the first time on real experimental conditions. This expression is experimentally validated, and describes how this frequency uncertainty depends on measurement parameters, such as Brillouin gain linewidth, frequency scanning step and signal-to-noise ratio. Based on the model leading to this expression and considering the limitations imposed by nonlinear effects and pump depletion, a figure-of-merit is proposed to fairly compare the performance of Brillouin distributed sensing systems. This figure-of-merit offers the research community and potential users the possibility of evaluating, with an objective metric, the real performance gain resulting from any proposed configuration.
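
    A generic numerical illustration (not the paper's analytical expression) of how the peak-gain frequency is estimated from a scanned spectrum and how noise translates into frequency uncertainty; the scan step, linewidth and noise level below are assumed values:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def peak_frequency(freqs, gain):
        """Locate the Brillouin gain peak by a local parabolic fit around the
        maximum of the scanned spectrum (assumes the peak is not at a scan edge)."""
        i = np.argmax(gain)
        x = freqs[i - 2:i + 3] - freqs[i]          # center abscissa for conditioning
        c = np.polyfit(x, gain[i - 2:i + 3], 2)
        return freqs[i] - c[1] / (2 * c[0])        # vertex of the parabola

    nu = np.arange(10.60e9, 10.95e9, 2e6)          # 2 MHz frequency scanning step
    linewidth = 30e6                               # assumed Brillouin gain linewidth
    gain = 1.0 / (1 + ((nu - 10.80e9) / (linewidth / 2)) ** 2)  # Lorentzian profile

    trials = [peak_frequency(nu, gain + rng.normal(0, 0.05, nu.size))
              for _ in range(500)]                 # amplitude SNR ~ 20
    print("peak-frequency std (MHz):", np.std(trials) / 1e6)
    ```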

  11. Sizing aerosolized fractal nanoparticle aggregates through Bayesian analysis of wide-angle light scattering (WALS) data

    NASA Astrophysics Data System (ADS)

    Huber, Franz J. T.; Will, Stefan; Daun, Kyle J.

    2016-11-01

    Inferring the size distribution of aerosolized fractal aggregates from the angular distribution of elastically scattered light is a mathematically ill-posed problem. This paper presents a procedure for analyzing Wide-Angle Light Scattering (WALS) data using Bayesian inference. The outcome is probability densities for the recovered size distribution and aggregate morphology parameters. This technique is applied to both synthetic data and experimental data collected on soot-laden aerosols, using a measurement equation derived from Rayleigh-Debye-Gans fractal aggregate (RDG-FA) theory. In the case of experimental data, the recovered aggregate size distribution parameters are generally consistent with TEM-derived values, but the accuracy is impaired by the well-known limited accuracy of RDG-FA theory. Finally, we show how this bias could potentially be avoided using the approximation error technique.

  12. OVERFLOW Validation for Predicting Plume Impingement of Underexpanded Axisymmetric Jets onto Angled Flat Plates

    NASA Technical Reports Server (NTRS)

    Lee, Henry C.; Klopfer, Goetz

    2011-01-01

    This report documents how OVERFLOW, a computational fluid dynamics code, predicts plume impingement of underexpanded axisymmetric jets onto both perpendicular and inclined flat plates. The effects of the plume impinging on a range of plate inclinations varying from 90deg to 30deg are investigated and compared to the experimental results in References 1 and 2. The flow fields are extremely complex due to the interaction between the shock waves from the free jet and those deflected by the plate. Additionally, complex mixing effects create very intricate structures in the flow. The experimental data are very limited, so these validation studies will focus only on cold plume impingement on flat and inclined plates. This validation study will help quantify the error in the OVERFLOW simulation when applied to stage separation scenarios.

  13. Statistical Models for Averaging of the Pump–Probe Traces: Example of Denoising in Terahertz Time-Domain Spectroscopy

    NASA Astrophysics Data System (ADS)

    Skorobogatiy, Maksim; Sadasivan, Jayesh; Guerboukha, Hichem

    2018-05-01

    In this paper, we first discuss the main types of noise in a typical pump-probe system, and then focus specifically on terahertz time domain spectroscopy (THz-TDS) setups. We then introduce four statistical models for the noisy pulses obtained in such systems, and detail rigorous mathematical algorithms to de-noise such traces, find the proper averages and characterise various types of experimental noise. Finally, we perform a comparative analysis of the performance, advantages and limitations of the algorithms by testing them on the experimental data collected using a particular THz-TDS system available in our laboratories. We conclude that using advanced statistical models for trace averaging results in the fitting errors that are significantly smaller than those obtained when only a simple statistical average is used.

  14. Accounting for optical errors in microtensiometry.

    PubMed

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications for all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane to measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup and increases experimental accuracy. In a broad sense, this work outlines the importance of optical errors in all DSA techniques. More specifically, these results have important implications for all microscale and microfluidic measurements of interface curvature. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
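
    The quoted variance model is simple to apply directly. A sketch that simulates realistic error bars for a toy scattering profile; the k and const. values are hypothetical, since they are setup-specific fitting parameters:

    ```python
    import numpy as np

    def saxs_variance(I_q, q, k, const):
        """Variance model from the abstract: sigma^2(q) = [I(q) + const] / (k*q)."""
        return (I_q + const) / (k * q)

    q = np.linspace(0.01, 0.5, 200)                 # momentum transfer (1/Angstrom)
    I_q = 1e3 * np.exp(-(30 * q) ** 2 / 3)          # toy Guinier-like profile
    sigma = np.sqrt(saxs_variance(I_q, q, k=5e4, const=50.0))

    rng = np.random.default_rng(3)
    noisy_profile = I_q + rng.normal(0, sigma)      # simulated noisy SAXS curve
    print(sigma[:3])                                # largest errors at smallest q
    ```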

  16. A composite experimental dynamic substructuring method based on partitioned algorithms and localized Lagrange multipliers

    NASA Astrophysics Data System (ADS)

    Abbiati, Giuseppe; La Salandra, Vincenzo; Bursi, Oreste S.; Caracoglia, Luca

    2018-02-01

    Successful online hybrid (numerical/physical) dynamic substructuring simulations have shown their potential in enabling realistic dynamic analysis of almost any type of non-linear structural system (e.g., an as-built/isolated viaduct, a petrochemical piping system subjected to non-stationary seismic loading, etc.). Moreover, owing to faster and more accurate testing equipment, a number of different offline experimental substructuring methods, operating both in time (e.g. the impulse-based substructuring) and frequency domains (i.e. the Lagrange multiplier frequency-based substructuring), have been employed in mechanical engineering to examine dynamic substructure coupling. Numerous studies have dealt with the above-mentioned methods and with consequent uncertainty propagation issues, either associated with experimental errors or modelling assumptions. Nonetheless, a limited number of publications have systematically cross-examined the performance of the various Experimental Dynamic Substructuring (EDS) methods and the possibility of their exploitation in a complementary way to expedite a hybrid experiment/numerical simulation. From this perspective, this paper performs a comparative uncertainty propagation analysis of three EDS algorithms for coupling physical and numerical subdomains with a dual assembly approach based on localized Lagrange multipliers. The main results and comparisons are based on a series of Monte Carlo simulations carried out on five-DoF linear/non-linear chain-like systems that include typical aleatoric uncertainties emerging from measurement errors and excitation loads. In addition, we propose a new Composite-EDS (C-EDS) method to fuse both online and offline algorithms into a unique simulator. Capitalizing on the results of a more complex case study composed of a coupled isolated tank-piping system, we provide a feasible way to employ the C-EDS method when nonlinearities and multi-point constraints are present in the emulated system.

  17. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
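
    A simplified stand-in for the idea of error-controlled snapshot selection (the actual method also bounds memory via a single-pass incremental SVD, which is not reproduced here): a snapshot is kept only when its projection error onto the span of previously kept snapshots exceeds a tolerance.

    ```python
    import numpy as np

    def select_snapshots(snapshots, tol):
        """Greedy, error-controlled selection: keep snapshot j only when its
        projection error onto the current basis exceeds tol."""
        basis = np.empty((snapshots.shape[0], 0))
        kept = []
        for j in range(snapshots.shape[1]):
            s = snapshots[:, j]
            err = np.linalg.norm(s - basis @ (basis.T @ s))
            if err > tol:
                kept.append(j)
                basis, _ = np.linalg.qr(np.column_stack([basis, s]))
        return kept, basis

    # 50 snapshots of a traveling sine wave: the trajectory spans only 2 modes.
    x = np.linspace(0.0, 1.0, 200)
    snaps = np.column_stack([np.sin(2 * np.pi * (x - 0.01 * t)) for t in range(50)])
    kept, basis = select_snapshots(snaps, tol=1e-8)
    print("snapshots kept:", kept)   # only 2 of 50 are retained
    ```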

  18. The use of experimental data in an MTR-type nuclear reactor safety analysis

    NASA Astrophysics Data System (ADS)

    Day, Simon E.

    Reactivity initiated accidents (RIAs) are a category of events required for research reactor safety analysis. A subset of this is unprotected RIAs in which mechanical systems or human intervention are not credited in the response of the system. Light-water cooled and moderated MTR-type (i.e., aluminum-clad uranium plate fuel) reactors are self-limiting up to some reactivity insertion limit beyond which fuel damage occurs. This characteristic was studied in the Borax and Spert reactor tests of the 1950s and 1960s in the USA. This thesis considers the use of this experimental data in generic MTR-type reactor safety analysis. The approach presented herein is based on fundamental phenomenological understanding and uses correlations in the reactor test data with suitable account taken for differences in important system parameters. Specifically, a semi-empirical approach is used to quantify the relationship between the power, energy and temperature rise response of the system as well as parametric dependencies on void coefficient and the degree of subcooling. Secondary effects including the dependence on coolant flow are also examined. A rigorous curve fitting approach and error assessment is used to quantify the trends in the experimental data. In addition to the initial power burst stage of an unprotected transient, the longer term stability of the system is considered with a stylized treatment of characteristic power/temperature oscillations (chugging). A bridge from the HEU-based experimental data to the LEU fuel cycle is assessed and outlined based on existing simulation results presented in the literature. A cell-model based parametric study is included. The results are used to construct a practical safety analysis methodology for determining reactivity insertion safety limits for a light-water moderated and cooled MTR-type core.

  19. QUANTIFYING ALTERNATIVE SPLICING FROM PAIRED-END RNA-SEQUENCING DATA.

    PubMed

    Rossell, David; Stephan-Otto Attolini, Camille; Kroiss, Manuel; Stöcker, Almond

    2014-03-01

    RNA-sequencing has revolutionized biomedical research and, in particular, our ability to study gene alternative splicing. The problem has important implications for human health, as alternative splicing may be involved in malfunctions at the cellular level and multiple diseases. However, the high-dimensional nature of the data and the existence of experimental biases pose serious data analysis challenges. We find that the standard data summaries used to study alternative splicing are severely limited, as they ignore a substantial amount of valuable information. Current data analysis methods are based on such summaries and are hence sub-optimal. Further, they have limited flexibility in accounting for technical biases. We propose novel data summaries and a Bayesian modeling framework that overcome these limitations and determine biases in a non-parametric, highly flexible manner. These summaries adapt naturally to the rapid improvements in sequencing technology. We provide efficient point estimates and uncertainty assessments. The approach allows studying alternative splicing patterns for individual samples and can also be the basis for downstream analyses. We found a severalfold improvement in estimation mean square error compared to popular approaches in simulations, and substantially higher consistency between replicates in experimental data. Our findings indicate the need for adjusting the routine summarization and analysis of alternative splicing RNA-seq studies. We provide a software implementation in the R package casper.

  20. Error Model and Compensation of Bell-Shaped Vibratory Gyro

    PubMed Central

    Su, Zhong; Liu, Ning; Li, Qing

    2015-01-01

    A bell-shaped vibratory angular velocity gyro (BVG), inspired by the traditional Chinese bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation of the BVG. A dynamic equation is firstly established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator character, analyze the influence of the main error sources and set up an error model for the BVG. The error sources are classified from the error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation and noise filtering. The experimentally obtained bias instability improves from 20.5°/h to 4.7°/h, the random walk from 2.8°/√h to 0.7°/√h, and the nonlinearity from 0.2% to 0.03%. Based on the error compensation, it is shown that there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement. PMID:26393593

  1. Scaled CMOS Technology Reliability Users Guide

    NASA Technical Reports Server (NTRS)

    White, Mark

    2010-01-01

    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is presented, revealing a power relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.

  2. The flash memory battle: How low can we go?

    NASA Astrophysics Data System (ADS)

    van Setten, Eelco; Wismans, Onno; Grim, Kees; Finders, Jo; Dusa, Mircea; Birkner, Robert; Richter, Rigo; Scherübl, Thomas

    2008-03-01

    With the introduction of the TWINSCAN XT:1900Gi, the limit of water-based hyper-NA immersion lithography has been reached in terms of resolution. With a numerical aperture of 1.35, a single-expose resolution of 36.5nm half pitch has been demonstrated. However, the practical resolution limit in production will be closer to 40nm half pitch, without having to go to double-patterning-like strategies. In the relentless Flash memory market the performance of the exposure tool is stretched to the limit for a competitive advantage and a cost-effective product. In this paper we will present the results of an experimental study of the resolution limit of the NAND-Flash Memory Gate layer for a production-worthy process on the TWINSCAN XT:1900Gi. The entire gate layer will be qualified in terms of full-wafer CD uniformity, aberration sensitivities for the different wordlines and feature-center placement errors for the 38, 39, 40 and 43nm half pitch design rules. In this study we will also compare the performance of a binary intensity mask to a 6% attenuated phase shift mask and look at strategies to maximize depth of focus, and to desensitize the gate layer to lens aberrations and placement errors. The mask is one of the dominant contributors to the CD uniformity budget of the flash gate layer. Therefore the wafer measurements are compared to aerial image measurements of the mask using AIMS™ 45-193i to separate the mask contribution from the scanner contribution to the final imaging performance.

  3. The Storage Ring Proton EDM Experiment

    NASA Astrophysics Data System (ADS)

    Semertzidis, Yannis; Storage Ring Proton EDM Collaboration

    2014-09-01

    The storage ring pEDM experiment utilizes an all-electric storage ring to store ~10¹¹ longitudinally polarized protons simultaneously in clockwise and counter-clockwise directions for 10³ seconds. The radial E-field acts on the proton EDM for the duration of the storage time to precess its spin in the vertical plane. The ring lattice is optimized to reduce intra-beam scattering, increase the statistical sensitivity and reduce the systematic errors of the method. The main systematic error is a net radial B-field integrated around the ring causing an EDM-like vertical spin precession. The counter-rotating beams sense this integrated field and are vertically shifted by an amount that depends on the strength of the vertical focusing in the ring, thus creating a radial B-field. Modulating the vertical focusing at 10 kHz makes possible the detection of this radial B-field by a SQUID magnetometer (SQUID-based BPM). For a total number of n SQUID-based BPMs distributed around the ring, the effectiveness of the method is limited to the N = n/2 harmonic of the background radial B-field due to the Nyquist sampling theorem limit. This limitation establishes the requirement to reduce the maximum radial B-field to 0.1-1 nT everywhere around the ring by layers of mu-metal and an aluminum vacuum tube. The method's sensitivity is 10⁻²⁹ e·cm, more than three orders of magnitude better than the present neutron EDM experimental limit, making it sensitive to SUSY-like new physics mass scales up to 300 TeV.

  4. Empirical Synthesis of the Effect of Standard Error of Measurement on Decisions Made within Brief Experimental Analyses of Reading Fluency

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Taylor, Crystal N.; Warmbold-Brann, Kristy L.; Preast, June L.; Hosp, John L.; Ford, Jeremy W.

    2017-01-01

    Intervention researchers often use curriculum-based measurement of reading fluency (CBM-R) with a brief experimental analysis (BEA) to identify an effective intervention for individual students. The current study synthesized data from 22 studies that used CBM-R data within a BEA by computing the standard error of measure (SEM) for the median data…

  5. Can Infinitival "to" Omissions and Provisions Be Primed? An Experimental Investigation into the Role of Constructional Competition in Infinitival "to" Omission Errors

    ERIC Educational Resources Information Center

    Kirjavainen, Minna; Lieven, Elena V. M.; Theakston, Anna L.

    2017-01-01

    An experimental study was conducted on children aged 2;6-3;0 and 3;6-4;0 investigating the priming effect of two WANT-constructions to establish whether constructional competition contributes to English-speaking children's infinitival to omission errors (e.g., *"I want ___ jump now"). In two between-participant groups, children either…

  6. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the mean velocity to acoustic-path velocity relation. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error into the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
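
    The proportional bias from a path-angle error follows from the AVM geometry: the line velocity scales as 1/cos(theta), where theta is the angle between the acoustic path and the flow. A minimal sketch of that sensitivity (the 45-degree path is an assumed, typical geometry):

    ```python
    import numpy as np

    def angle_error_bias(theta_deg, dtheta_deg):
        """Fractional velocity bias from an acoustic-path angle error:
        v ~ 1/cos(theta), so |dv/v| = tan(theta) * dtheta (dtheta in radians)."""
        return np.tan(np.radians(theta_deg)) * np.radians(dtheta_deg)

    # A 1-degree error on a typical 45-degree path gives ~1.75%, consistent
    # with the ~2% figure quoted above; a path-length error is simply
    # proportional (1 m in 100 m -> 1%).
    print(angle_error_bias(45, 1))  # ~0.0175
    ```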

  7. Cardiorespiratory fitness and cognitive functioning following short-term interventions in chronic stroke survivors with cognitive impairment: a pilot study.

    PubMed

    Blanchet, Sophie; Richards, Carol L; Leblond, Jean; Olivier, Charles; Maltais, Désirée B

    2016-06-01

    This study, a quasi-experimental, one-group pretest-post-test design, evaluated the effects on cognitive functioning and cardiorespiratory fitness of 8-week interventions (aerobic exercise alone and aerobic exercise and cognitive training combined) in patients with chronic stroke and cognitive impairment living in the community (participants: n=14, 61.93±9.90 years old, 51.50±38.22 months after stroke, n=7 per intervention group). Cognitive functions and cardiorespiratory fitness were evaluated before and after intervention, and at a 3-month follow-up visit (episodic memory: revised-Hopkins Verbal Learning Test; working memory: Brown-Peterson paradigm; attention omission and commission errors: Continuous Performance Test; cardiorespiratory fitness: peak oxygen uptake during a symptom-limited, graded exercise test performed on a semirecumbent ergometer). Friedman's two-way analysis of variance by ranks evaluated differences in score distributions related to time (for the two groups combined). Post-hoc testing was adjusted for multiple comparisons. Compared with before the intervention, there was a significant reduction in attention errors immediately following the intervention (omission errors: 14.6±21.5 vs. 8±13.9, P=0.01; commission errors: 16.4±6.3 vs. 10.9±7.2, P=0.04), and in part at follow-up (omission errors on follow-up: 3.4±4.3, P=0.03; commission errors on follow-up: 13.2±7.6, P=0.42). These results suggest that attention may improve in chronic stroke survivors with cognitive impairment following short-term training that includes an aerobic component, without a change in cardiorespiratory fitness. Randomized-controlled studies are required to confirm these findings.

  8. Testing of a novel pin array guide for accurate three-dimensional glenoid component positioning.

    PubMed

    Lewis, Gregory S; Stevens, Nicole M; Armstrong, April D

    2015-12-01

    A substantial challenge in total shoulder replacement is accurate positioning and alignment of the glenoid component. This challenge arises from limited intraoperative exposure and complex arthritic-driven deformity. We describe a novel pin array guide and method for patient-specific guiding of the glenoid central drill hole. We also experimentally tested the hypothesis that this method would reduce errors in version and inclination compared with 2 traditional methods. Polymer models of glenoids were created from computed tomography scans from 9 arthritic patients. Each 3-dimensional (3D) printed scapula was shrouded to simulate the operative situation. Three different methods for central drill alignment were tested, all with the target orientation of 5° retroversion and 0° inclination: no assistance, assistance by preoperative 3D imaging, and assistance by the pin array guide. Version and inclination errors of the drill line were compared. Version errors using the pin array guide (3° ± 2°) were significantly lower than version errors associated with no assistance (9° ± 7°) and preoperative 3D imaging (8° ± 6°). Inclination errors were also significantly lower using the pin array guide compared with no assistance. The new pin array guide substantially reduced errors in orientation of the central drill line. The guide method is patient specific but does not require rapid prototyping and instead uses adjustments to an array of pins based on automated software calculations. This method may ultimately provide a cost-effective solution enabling surgeons to obtain accurate orientation of the glenoid. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  9. Robust dynamical decoupling for quantum computing and quantum memory.

    PubMed

    Souza, Alexandre M; Alvarez, Gonzalo A; Suter, Dieter

    2011-06-17

    Dynamical decoupling (DD) is a popular technique for protecting qubits from the environment. However, unless special care is taken, experimental errors in the control pulses used in this technique can destroy the quantum information instead of preserving it. Here, we investigate techniques for making DD sequences robust against different types of experimental errors while retaining good decoupling efficiency in a fluctuating environment. We present experimental data from solid-state nuclear spin qubits and introduce a new DD sequence that is suitable for quantum computing and quantum memory.

  10. Error Mitigation for Short-Depth Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.

    2017-11-01

    Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
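
    A minimal sketch of the first scheme's extrapolation step: expectation values measured at several artificially amplified noise levels are fit by a polynomial in the noise-scaling factor and evaluated at zero. The measured values below are hypothetical; in a real experiment, noise is scaled by, e.g., stretching the control pulses.

    ```python
    import numpy as np

    def zero_noise_extrapolation(scales, values):
        """Richardson-style extrapolation: fit a polynomial in the noise-scaling
        factor c and evaluate at c = 0 to cancel leading noise contributions."""
        coeffs = np.polyfit(scales, values, deg=len(scales) - 1)
        return np.polyval(coeffs, 0.0)

    # Hypothetical expectation values measured at stretched noise levels c = 1, 2, 3.
    scales = np.array([1.0, 2.0, 3.0])
    values = np.array([0.842, 0.713, 0.601])
    print("mitigated estimate:", zero_noise_extrapolation(scales, values))
    ```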

  11. Monitoring Building Deformation with InSAR: Experiments and Validation

    PubMed Central

    Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng

    2016-01-01

    Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of the InSAR in experiments using two landmark buildings; the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples to compare InSAR and leveling approaches for building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. These extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement of error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and the leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that a millimeter level of accuracy can be achieved by means of InSAR technique when measuring building deformation. We discuss the differences in accuracy between OLS regression and measurement of error analyses, and compare the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring buildings deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated. PMID:27999403

  12. Structural power flow measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falter, K.J.; Keltie, R.F.

    Previous investigations of structural power flow through beam-like structures resulted in some unexplained anomalies in the calculated data. In order to develop structural power flow measurement as a viable technique for machine tool design, the causes of these anomalies needed to be found. Once found, techniques for eliminating the errors could be developed. Error sources were found in the experimental apparatus itself as well as in the instrumentation. Although flexural waves are the carriers of power in the experimental apparatus, at some frequencies longitudinal waves were excited which were picked up by the accelerometers and altered power measurements. Errors were found in the phase and gain response of the sensors and amplifiers used for measurement. A transfer function correction technique was employed to compensate for these instrumentation errors.

  13. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (error/particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533

  14. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (error/particle/cm²), while the MTTF is approximately 110.7 h.

  15. Theoretical and experimental studies of turbo product code with time diversity in free space optical communication.

    PubMed

    Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong

    2010-12-20

    In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence degrades the system performance seriously. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on an experimental study of the channel fading effects, we propose to use turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. However, channel coding alone cannot cope with the burst errors caused by channel fading, so interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths, and then the optimum interleaving depth for TPC is determined. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.
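
    The interleaving idea is generic: codewords are written row-wise into a matrix and transmitted column-wise, so a fading burst that corrupts consecutive channel symbols is spread thinly across many codewords, each within the decoder's correcting capability. A minimal block-interleaver sketch (the depth and codeword length are illustrative, not the paper's parameters):

    ```python
    import numpy as np

    def interleave(symbols, depth):
        """Block interleaver: write `depth` codewords row-wise, transmit
        column-wise, so a channel-fading burst is spread across codewords."""
        return np.asarray(symbols).reshape(depth, -1).T.ravel()

    def deinterleave(symbols, depth):
        return np.asarray(symbols).reshape(-1, depth).T.ravel()

    data = np.arange(24)                    # 4 codewords of length 6
    tx = interleave(data, depth=4)
    assert np.array_equal(deinterleave(tx, depth=4), data)  # round trip is exact
    ```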

  16. Maximum likelihood bolometric tomography for the determination of the uncertainties in the radiation emission on JET TOKAMAK

    NASA Astrophysics Data System (ADS)

    Craciunescu, Teddy; Peluso, Emmanuele; Murari, Andrea; Gelfusa, Michela; JET Contributors

    2018-05-01

    The total emission of radiation is a crucial quantity for calculating the power balances and understanding the physics of any Tokamak. Bolometric systems are the main tool to measure this important physical quantity through quite sophisticated tomographic inversion methods. On the Joint European Torus, the coverage of the bolometric diagnostic, due to the availability of basically only two projection angles, is quite limited, rendering the inversion a very ill-posed mathematical problem. A new approach, based on the maximum likelihood, has therefore been developed and implemented to alleviate one of the major weaknesses of traditional tomographic techniques: the difficulty of routinely determining the confidence intervals in the results. The method has been validated by numerical simulations with phantoms to assess the quality of the results and to optimise the configuration of the parameters for the main types of emissivity encountered experimentally. The typical levels of statistical errors, which may significantly influence the quality of the reconstructions, have been identified. The systematic tests with phantoms indicate that the errors in the reconstructions are quite limited and their effect on the total radiated power remains well below 10%. A comparison with other approaches to the inversion and to the regularization has also been performed.

  17. Deformed Shape Calculation of a Full-Scale Wing Using Fiber Optic Strain Data from a Ground Loads Test

    NASA Technical Reports Server (NTRS)

    Jutte, Christine V.; Ko, William L.; Stephens, Craig A.; Bakalyar, John A.; Richards, W. Lance

    2011-01-01

    A ground loads test of a full-scale wing (175-ft span) was conducted using a fiber optic strain-sensing system to obtain distributed surface strain data. These data were input into previously developed deformed shape equations to calculate the wing's bending and twist deformation. A photogrammetry system measured actual shape deformation. The wing deflections reached 100 percent of the positive design limit load (equivalent to 3 g) and 97 percent of the negative design limit load (equivalent to -1 g). The calculated wing bending results were in excellent agreement with the actual bending; tip deflections were within +/- 2.7 in. (out of 155-in. max deflection) for 91 percent of the load steps. Experimental testing revealed valuable opportunities for improving the deformed shape equations' robustness to real world (not perfect) strain data, which previous analytical testing did not detect. These improvements, which include filtering methods developed in this work, minimize errors due to numerical anomalies discovered in the remaining 9 percent of the load steps. As a result, all load steps attained +/- 2.7 in. accuracy. Wing twist results were very sensitive to errors in bending and require further development. A sensitivity analysis and recommendations for fiber implementation practices, along with effective filtering methods, are included.

  18. Adaptive Harmonic Balance Method for Unsteady, Nonlinear, One-Dimensional Periodic Flows

    DTIC Science & Technology

    2002-09-01

    [Fragmentary record text] The thesis includes an experimental analysis of splitting-induced error, noting that splitting error is most prominent for high-frequency unsteady flows; recoverable front-matter fragments (May 1999) cite Toro, Eleuterio F., Riemann Solvers and Numerical Methods for Fluid Dynamics, chapter 15, New York, and list a figure of experimental pressure data on an inlet guide vane upstream of a transonic rotating …

  19. Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits †

    PubMed Central

    Gámez Serna, Citlalli; Ruichek, Yassine

    2017-01-01

    A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits without considering factors like road geometry. In this paper, we consider road curvature together with speed limits to automatically adjust the vehicle's speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, 'curve analysis extraction' and 'speed limits database creation' are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits and consequently to increase safety and comfort for the passenger. PMID:28613251
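
    A simplified stand-in for the curve-speed logic: a standard way to derive a safe curve speed is to bound lateral acceleration, v = sqrt(a_lat / kappa), and cap the result at the legal limit. The comfort threshold a_lat_max below is an assumed value, not a parameter from the paper:

    ```python
    import numpy as np

    def ideal_speed(curvature, speed_limit, a_lat_max=2.0):
        """Curve speed from a lateral-acceleration bound, v = sqrt(a_lat / kappa),
        capped by the legal limit; a_lat_max = 2 m/s^2 is an assumed comfort value."""
        v_curve = np.sqrt(a_lat_max / np.maximum(curvature, 1e-9))
        return np.minimum(v_curve, speed_limit)

    # Sharp curve (radius 50 m -> kappa = 0.02 1/m) inside a 50 km/h zone:
    print(ideal_speed(curvature=0.02, speed_limit=13.9))  # 10 m/s: the curve binds
    ```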

  20. Efficient boundary hunting via vector quantization

    NASA Astrophysics Data System (ADS)

    Diamantini, Claudia; Panti, Maurizio

    2001-03-01

    A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The last work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In the paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees the accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm defines an efficient method to reach such an approximation. In the paper comparisons to Support Vector Machines are considered.

  1. A High Temperature Capacitive Pressure Sensor Based on Alumina Ceramic for in Situ Measurement at 600 °C

    PubMed Central

    Tan, Qiulin; Li, Chen; Xiong, Jijun; Jia, Pinggang; Zhang, Wendong; Liu, Jun; Xue, Chenyang; Hong, Yingping; Ren, Zhong; Luo, Tao

    2014-01-01

    In response to the growing demand for in situ measurement of pressure in high-temperature environments, a high temperature capacitive pressure sensor is presented in this paper. A high-temperature ceramic material, alumina, is used for the fabrication of the sensor, and the prototype sensor consists of an inductance, a variable capacitance, and a sealed cavity integrated in the alumina ceramic substrate using thick-film integration technology. The experimental results show that the proposed sensor is stable at 850 °C for more than 20 min. The characterization in high-temperature and pressure environments successfully demonstrated sensing capabilities for pressures from 1 to 5 bar up to 600 °C, limited by the sensor test setup. At 600 °C, the sensor achieves a linear characteristic response, and the repeatability error, hysteresis error and zero-point drift of the sensor are 8.3%, 5.05% and 1%, respectively. PMID:24487624

  2. Analysis of Factors Influencing Measurement Accuracy of Al Alloy Tensile Test Results

    NASA Astrophysics Data System (ADS)

    Podgornik, Bojan; Žužek, Borut; Sedlaček, Marko; Kevorkijan, Varužan; Hostej, Boris

    2016-02-01

    In order to properly use materials in design, a complete understanding of and information on their mechanical properties, such as yield and ultimate tensile strength, must be obtained. Furthermore, as the design of automotive parts is constantly pushed toward higher limits, excessive measuring uncertainty can lead to unexpected premature failure of the component, thus requiring reliable determination of material properties with low uncertainty. The aim of the present work was to evaluate the effect of different metrology factors, including the number of tested samples, specimen machining and surface quality, specimen input diameter, type of testing and human error, on the tensile test results and measurement uncertainty when performed on a 2xxx series Al alloy. Results show that the most significant contribution to measurement uncertainty comes from the number of samples tested, which can even exceed 1%. Furthermore, moving from experimental laboratory conditions to a very intense industrial environment further amplifies measurement uncertainty, where even with automated systems human error cannot be neglected.

  3. Jumping to the wrong conclusions? An investigation of the mechanisms of reasoning errors in delusions

    PubMed Central

    Jolley, Suzanne; Thompson, Claire; Hurley, James; Medin, Evelina; Butler, Lucy; Bebbington, Paul; Dunn, Graham; Freeman, Daniel; Fowler, David; Kuipers, Elizabeth; Garety, Philippa

    2014-01-01

    Understanding how people with delusions arrive at false conclusions is central to the refinement of cognitive behavioural interventions. Making hasty decisions based on limited data (‘jumping to conclusions’, JTC) is one potential causal mechanism, but reasoning errors may also result from other processes. In this study, we investigated the correlates of reasoning errors under differing task conditions in 204 participants with schizophrenia spectrum psychosis who completed three probabilistic reasoning tasks. Psychotic symptoms, affect, and IQ were also evaluated. We found that hasty decision makers were more likely to draw false conclusions, but only 37% of their reasoning errors were consistent with the limited data they had gathered. The remainder directly contradicted all the presented evidence. Reasoning errors showed task-dependent associations with IQ, affect, and psychotic symptoms. We conclude that limited data-gathering contributes to false conclusions but is not the only mechanism involved. Delusions may also be maintained by a tendency to disregard evidence. Low IQ and emotional biases may contribute to reasoning errors in more complex situations. Cognitive strategies to reduce reasoning errors should therefore extend beyond encouragement to gather more data, and incorporate interventions focused directly on these difficulties. PMID:24958065

  4. Lexical and phonological variability in preschool children with speech sound disorder.

    PubMed

    Macrae, Toby; Tyler, Ann A; Lewis, Kerry E

    2014-02-01

    The authors of this study examined relationships between measures of word and speech error variability and between these and other speech and language measures in preschool children with speech sound disorder (SSD). In this correlational study, 18 preschool children with SSD, age-appropriate receptive vocabulary, and normal oral motor functioning and hearing were assessed across 2 sessions. Experimental measures included word and speech error variability, receptive vocabulary, nonword repetition (NWR), and expressive language. Pearson product–moment correlation coefficients were calculated among the experimental measures. The correlation between word and speech error variability was slight and nonsignificant. The correlation between word variability and receptive vocabulary was moderate and negative, although nonsignificant. High word variability was associated with small receptive vocabularies. The correlations between speech error variability and NWR and between speech error variability and the mean length of children's utterances were moderate and negative, although both were nonsignificant. High speech error variability was associated with poor NWR and language scores. High word variability may reflect unstable lexical representations, whereas high speech error variability may reflect indistinct phonological representations. Preschool children with SSD who show abnormally high levels of different types of speech variability may require slightly different approaches to intervention.

  5. Influence of ECG measurement accuracy on ECG diagnostic statements.

    PubMed

    Zywietz, C; Celikag, D; Joseph, G

    1996-01-01

    Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, neither error limits for ECG measurements have been specified, nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually result in an increase of false positive or false negative statements, since they cause a shift of the working point on the receiver operating characteristic curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and, therefore, usually increases the overlap between discriminative parameters. This results in a flattening of the receiver operating characteristic curve and an increase of false positive and false negative classifications. The method developed has been applied to ECG conduction defect diagnoses by using the proposed International Electrotechnical Commission interval measurement tolerance limits. These limits appear too large, because more than 30% false positive atrial conduction defect statements and 10-18% false intraventricular conduction defect statements could be expected due to tolerated measurement errors. To assure the long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits obtained on a defined test set.

  6. Extended solvent-contact model approach to blind SAMPL5 prediction challenge for the distribution coefficients of drug-like molecules

    NASA Astrophysics Data System (ADS)

    Chung, Kee-Choo; Park, Hwangseo

    2016-11-01

    The performance of the extended solvent-contact model has been assessed in the SAMPL5 blind prediction challenge for the distribution coefficient (LogD) of drug-like molecules with respect to the cyclohexane/water partitioning system. All the atomic parameters defined for 41 atom types in the solvation free energy function were optimized by operating a standard genetic algorithm with respect to water and cyclohexane solvents. In the parameterizations for cyclohexane, the experimental solvation free energy (ΔGsol) data of 15 molecules for 1-octanol were combined with those of 77 molecules for cyclohexane to construct a training set, because ΔGsol values of the former were unavailable for cyclohexane in publicly accessible databases. Using this hybrid training set, we established the LogD prediction model with a correlation coefficient (R), average error (AE), and root mean square error (RMSE) of 0.55, 1.53, and 3.03, respectively, for the comparison of experimental and computational results for 53 SAMPL5 molecules. The modest accuracy in LogD prediction could be attributed to the incomplete optimization of atomic solvation parameters for cyclohexane. With respect to the 31 SAMPL5 molecules containing only atom types for which experimental reference ΔGsol data were available for both water and cyclohexane, the accuracy in LogD prediction increased remarkably, with R, AE, and RMSE values of 0.82, 0.89, and 1.60, respectively. This significant enhancement in performance stemmed from the better optimization of atomic solvation parameters obtained by limiting the elements of the training set to molecules with experimental ΔGsol data for cyclohexane. Owing to its simplicity in model building and the low computational cost of parameterization, the extended solvent-contact model is anticipated to serve as a valuable computational tool for LogD prediction upon the enrichment of experimental ΔGsol data for organic solvents.
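
    For readers reproducing such comparisons, the three reported agreement statistics are straightforward to compute. A minimal sketch with placeholder arrays rather than SAMPL5 data (and assuming AE is the signed mean error; the abstract does not define it):

        import numpy as np

        logd_exp  = np.array([1.2, -0.4, 2.1, 0.3, -1.0])    # hypothetical values
        logd_calc = np.array([0.9, -0.1, 2.6, 0.8, -1.7])    # hypothetical values

        r    = np.corrcoef(logd_exp, logd_calc)[0, 1]         # correlation coefficient R
        ae   = np.mean(logd_calc - logd_exp)                  # average error (signed, assumed)
        rmse = np.sqrt(np.mean((logd_calc - logd_exp) ** 2))  # root mean square error
        print(f"R={r:.2f}  AE={ae:.2f}  RMSE={rmse:.2f}")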

  7. Do errors matter? Errorless and errorful learning in anomic picture naming.

    PubMed

    McKissock, Stephen; Ward, Jamie

    2007-06-01

    Errorless training methods significantly improve learning in memory-impaired patients relative to errorful training procedures. However, the validity of this technique for acquiring linguistic information in aphasia has rarely been studied. This study contrasts three different treatment conditions over an 8 week period for rehabilitating picture naming in anomia: (1) errorless learning in which pictures are shown and the experimenter provides the name, (2) errorful learning with feedback in which the patient is required to generate a name but the correct name is then supplied by the experimenter, and (3) errorful learning in which no feedback is given. These conditions are compared to an untreated set of matched words. Both errorless and errorful learning with feedback conditions led to significant improvement at a 2-week and 12-14-week retest (errorful without feedback and untreated words were similar). The results suggest that it does not matter whether anomic patients are allowed to make errors in picture naming or not (unlike in memory impaired individuals). What does matter is that a correct response is given as feedback. The results also question the widely held assumption that it is beneficial for a patient to attempt to retrieve a word, given that our errorless condition involved no retrieval effort and had the greatest benefits.

  8. A Practical Methodology for Quantifying Random and Systematic Components of Unexplained Variance in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.

    2012-01-01

    This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
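
    The partition described above can be sketched with a one-way random-effects decomposition: replicate set points measured within blocks estimate ordinary random error, while excess variance among block means indicates systematic (covariate-induced) error. This is our schematic reading of the idea, not the MDOE analysis itself; all numbers are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)
        n_blocks, n_reps = 8, 10
        drift = rng.normal(0.0, 0.05, n_blocks)   # slow covariate effect between blocks
        data = drift[:, None] + rng.normal(0.0, 0.02, (n_blocks, n_reps))

        var_within  = data.var(axis=1, ddof=1).mean()   # random-error component
        var_between = data.mean(axis=1).var(ddof=1)     # contains drift + sampling noise
        var_systematic = max(var_between - var_within / n_reps, 0.0)
        print(f"random: {var_within:.5f}  systematic: {var_systematic:.5f}")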

  9. Decomposition of Composite Electric Field in a Three-Phase D-Dot Voltage Transducer Measuring System

    PubMed Central

    Hu, Xueqi; Wang, Jingang; Wei, Gang; Deng, Xudong

    2016-01-01

    In line with the wider application of non-contact voltage transducers in the engineering field, transducers are required to have better performance for different measuring environments. In the present study, the D-dot voltage transducer is further improved based on previous research in order to meet the requirements for long-distance measurement of electric transmission lines. When measuring three-phase electric transmission lines, problems such as synchronous data collection and composite electric field superposition need to be resolved. A decomposition method is proposed with respect to the superimposed electric field generated between neighboring phases. The charge simulation method is utilized to deduce the decomposition equation of the composite electric field, and the validity of the proposed method is verified by simulation software. With the deduced equation as the algorithmic foundation, this paper improves the hardware circuits, establishes a measuring system, and constructs an experimental platform for examination. Under experimental conditions, a 10 kV electric transmission line was tested for steady-state errors, and the measuring results of the transducer and the high-voltage detection head were compared. Ansoft Maxwell simulation software was adopted to obtain the electric field intensity at different positions under the transmission lines; its values were also compared with the measured values of the transducer. Experimental results show that the three-phase transducer is characterized by relatively good synchronization for data measurement, measuring results with high precision, and an error ratio within the prescribed limit. Therefore, the proposed three-phase transducer can be broadly applied and popularized in the engineering field. PMID:27754340

  10. Pressure Distribution on Inner Wall of Parabolic Nozzle in Laser Propulsion with Single Pulse

    NASA Astrophysics Data System (ADS)

    Cui, Cunyan; Hong, Yanji; Wen, Ming; Song, Junling; Fang, Juan

    2011-11-01

    A system based on dynamic pressure sensors was established to study the time-resolved pressure distribution on the inner wall of a parabolic nozzle in laser propulsion. Dynamic and static calibrations of the test system were performed, and the results showed that the frequency response was up to 412 kHz and the linearity error was less than 10%. The experimental model was a parabolic nozzle, and three test points were preset along one generating line. This study showed that the experimental results agreed well with those obtained by numerical calculation in terms of pressure evolution tendency. The peak value of the calculation was higher than that of the experiment at each tested orifice because of the limitations of the numerical models. The results of this study are useful for analyzing the energy deposition in laser propulsion and for refining numerical models.

  11. Experimental evaluation of achromatic phase shifters for mid-infrared starlight suppression.

    PubMed

    Gappinger, Robert O; Diaz, Rosemary T; Ksendzov, Alexander; Lawson, Peter R; Lay, Oliver P; Liewer, Kurt M; Loya, Frank M; Martin, Stefan R; Serabyn, Eugene; Wallace, James K

    2009-02-10

    Phase shifters are a key component of nulling interferometry, one of the potential routes to enabling the measurement of faint exoplanet spectra. Here, three different achromatic phase shifters are evaluated experimentally in the mid-infrared, where such nulling interferometers may someday operate. The methods evaluated include the use of dispersive glasses, a through-focus field inversion, and field reversals on reflection from antisymmetric flat-mirror periscopes. All three approaches yielded deep, broadband, mid-infrared nulls, but the deepest broadband nulls were obtained with the periscope architecture. In the periscope system, average null depths of 4×10⁻⁵ were obtained with a 25% bandwidth, and 2×10⁻⁵ with a 20% bandwidth, at a central wavelength of 9.5 μm. The best short-term nulls at 20% bandwidth were approximately 9×10⁻⁶, in line with error budget predictions and the limits of the current generation of hardware.

  12. Experimental quantum key distribution with finite-key security analysis for noisy channels.

    PubMed

    Bacco, Davide; Canale, Matteo; Laurenti, Nicola; Vallone, Giuseppe; Villoresi, Paolo

    2013-01-01

    In quantum key distribution implementations, each session is typically chosen long enough so that the secret key rate approaches its asymptotic limit. However, this choice may be constrained by the physical scenario, as in the prospective use with satellites, where the passage of one terminal over the other is restricted to a few minutes. Here we demonstrate experimentally the extraction of secure keys leveraging an optimal design of the prepare-and-measure scheme, according to recent finite-key theoretical tight bounds. The experiment is performed in different channel conditions, and assuming two distinct attack models: individual attacks or general quantum attacks. The required number of exchanged qubits is then obtained as a function of the key size and of the ambient quantum bit error rate. The results indicate that viable conditions for effective symmetric, and even one-time-pad, cryptography are achievable.

  13. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
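
    The core estimator can be summarized in a few lines. A toy multilevel Monte Carlo sketch under our own assumptions (a scalar quantity of interest P_l whose bias and fluctuations shrink with level; a real pore-scale code would couple coarse and fine samples through the same random geometry):

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_P(level, n):
            """Toy level-l samples: discretization bias and noise halve per level."""
            scale = 2.0 ** (-level)
            return 1.0 + scale + scale * rng.standard_normal(n)

        n_samples = [4000, 1000, 250, 60]            # many coarse, few fine samples
        estimate = sample_P(0, n_samples[0]).mean()  # E[P_0]
        for l in range(1, len(n_samples)):
            # telescoping corrections E[P_l - P_{l-1}]; coupling is only mimicked here
            diff = sample_P(l, n_samples[l]) - sample_P(l - 1, n_samples[l])
            estimate += diff.mean()
        print(f"MLMC estimate: {estimate:.3f}")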

  14. Low sidelobe level low-cost earth station antennas for the 12 GHz broadcasting satellite service

    NASA Technical Reports Server (NTRS)

    Collin, R. E.; Gabel, L. R.

    1979-01-01

    An experimental investigation of the performance of 1.22 m and 1.83 m diameter paraboloid antennas with an f/D ratio of 0.38 and using a feed developed by Kumar is reported. It is found that sidelobes below 30 dB can be obtained only if the paraboloids are relatively free of surface errors. A theoretical analysis of clam shell distortion shows that this is a limiting factor in achieving low sidelobe levels with many commercially available low cost paraboloids. The use of absorbing pads and small reflecting plates for sidelobe reduction is also considered.

  15. Fiber-based free-space optical coherent receiver with vibration compensation mechanism.

    PubMed

    Zhang, Ruochi; Wang, Jianmin; Zhao, Guang; Lv, Junyi

    2013-07-29

    We propose a novel fiber-based free-space optical (FSO) coherent receiver for inter-satellite communication. The receiver takes advantage of established fiber-optic components and utilizes the fine-pointing subsystem installed in FSO terminals to minimize the influence of satellite platform vibrations. The received beam is coupled to a single-mode fiber, and the coupling efficiency of the system is investigated both analytically and experimentally. A receiving sensitivity of -38 dBm is obtained at the forward error correction limit with a transmission rate of 22.4 Gbit/s. The proposed receiver is shown to be a promising component for inter-satellite optical communication.

  16. Performance evaluation of a burst-mode EDFA in an optical packet and circuit integrated network.

    PubMed

    Shiraiwa, Masaki; Awaji, Yoshinari; Furukawa, Hideaki; Shinada, Satoshi; Puttnam, Benjamin J; Wada, Naoya

    2013-12-30

    We experimentally investigate the performance of a burst-mode EDFA in an optical packet and circuit integrated system. In such networks, packets and light paths can be dynamically assigned to the same fibers, resulting in gain transients in EDFAs throughout the network that can limit network performance. Here, we compare the performance of a 'burst-mode' EDFA (BM-EDFA), employing transient suppression techniques and optical feedback, with conventional EDFAs, EDFAs using automatic gain control, and previous BM-EDFA implementations. We first measure gain transients and other impairments in a simplified set-up before making frame error-rate measurements in a network demonstration.

  17. Interpretation of Higher Order Magnetic effects in the Spectra of Transition Metal Ions in Terms of SO(5) and Sp(10)

    NASA Astrophysics Data System (ADS)

    Hansen, J. E.; Judd, B. R.; Raassen, A. J. J.; Uylings, P. H. M.

    1997-04-01

    Small discrepancies in the fitted energy levels of the 3d^N configurations of transition metal ions are ascribed to effective three-electron magnetic operators y_i. Surprisingly, it has been found that, of the 16 possible operators with rank 1 in both spin and orbital spaces, four operators labeled by the irreducible representation (irrep) (11) of SO(5) are sufficient to obtain results that appear to be limited only by the errors in the experimental energy levels. An interpretation is given involving products of operators labeled by the irreps of SO(5) and the symplectic group Sp(10).

  18. Linewidth-tolerant real-time 40-Gbit/s 16-QAM self-homodyne detection using a pilot carrier and ISI suppression based on electronic digital processing.

    PubMed

    Nakamura, Moriya; Kamio, Yukiyoshi; Miyazaki, Tetsuya

    2010-01-01

    We experimentally demonstrate linewidth-tolerant real-time 40-Gbit/s (10-Gsymbol/s) 16-quadrature amplitude modulation. We achieved bit-error rates of <10⁻⁹ using an external-cavity laser diode with a linewidth of 200 kHz and <10⁻⁷ using a distributed-feedback laser diode with a linewidth of 30 MHz, thanks to the phase-noise canceling capability provided by self-homodyne detection using a pilot carrier. Pre-equalization based on digital signal processing was employed to suppress intersymbol interference caused by the limited frequency bandwidth of the electrical components.

  19. Error diffusion concept for multi-level quantization

    NASA Astrophysics Data System (ADS)

    Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof

    1990-11-01

    The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
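
    A minimal one-dimensional sketch of the adapted procedure (our illustration; the threshold placement, which the paper varies, is fixed at the nearest-level rule here): each sample is quantized to the nearest of Q output levels and the residual is diffused forward.

        import numpy as np

        def error_diffusion_1d(signal, q_levels):
            """Multi-level error diffusion with nearest-level thresholds (assumed)."""
            levels = np.linspace(signal.min(), signal.max(), q_levels)
            out, err = np.empty_like(signal), 0.0
            for i, x in enumerate(signal):
                corrected = x + err
                out[i] = levels[np.argmin(np.abs(levels - corrected))]
                err = corrected - out[i]   # carry quantization error to next sample
            return out

        ramp = np.linspace(0.0, 1.0, 16)
        print(error_diffusion_1d(ramp, q_levels=4))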

  20. Systematic ionospheric electron density tilts (SITs) at mid-latitudes and their associated HF bearing errors

    NASA Astrophysics Data System (ADS)

    Tedd, B. L.; Strangeways, H. J.; Jones, T. B.

    1985-11-01

    Systematic ionospheric tilts (SITs) at midlatitudes and the diurnal variation of bearing error for different transmission paths are examined. An explanation of the diurnal variation of bearing error based on the dependence of ionospheric tilt on solar zenith angle and plasma transport processes is presented. The effect of vertical ion drift and the momentum transfer of neutral winds is investigated. During the daytime, transmissions reflect at low heights and photochemical processes control SITs; at night, transmissions reflect at greater heights and spatial and temporal variations of plasma transport processes influence SITs. An HF ray tracing technique which uses a three-dimensional ionospheric model based on predictions to simulate SIT-induced bearing errors is described; poor correlation with experimental data is observed and the causes for this are studied. A second model based on measured vertical-sounder data is proposed. Model two is applicable for predicting bearing error for a range of transmission paths and correlates well with experimental data.

  1. Experimental investigation of correlation between fading and glint for aircraft targets

    NASA Astrophysics Data System (ADS)

    Wallin, C. M.; Aas, B.

    The correlation between the fading and glint of aircraft targets is investigated experimentally using a conventional amplitude comparison three-channel monopulse radar operating in the Ku-band. A significant correlation is found between the RCS and the variance of the angle error signals; this correlation seems to be independent of the aspect angle. The correlation between the RCS and the angle error signals themselves, however, is found to be very small.

  2. Application of Adaptive Neuro-Fuzzy Inference System for Prediction of Neutron Yield of IR-IECF Facility in High Voltages

    NASA Astrophysics Data System (ADS)

    Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.

    2013-09-01

    This paper presents a soft-computing-based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against the experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%), and root mean square error. The obtained results show that the proposed ANFIS model achieves good agreement with the experimental results. In comparison to the experimental data, the proposed ANFIS model has MRE% values below 1.53% and 2.85% for the training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR of the IR-IECF device.

  3. Extinction measurements with low-power hsrl systems—error limits

    NASA Astrophysics Data System (ADS)

    Eloranta, Ed

    2018-04-01

    HSRL measurements of extinction are more difficult than backscatter measurements. This is particularly true for low-power, eye-safe systems. This paper looks at error sources that currently set an error limit of 10⁻⁵ m⁻¹ for boundary layer extinction measurements made with University of Wisconsin HSRL systems. These eye-safe systems typically use 300 mW transmitters and 40 cm diameter receivers with a 10⁻⁴ radian field of view.

  4. Effect of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths.

    PubMed

    Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S

    2009-11-01

    Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the type of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem-space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. We analyzed data mined from 44 pathology residents using SlideTutor, a medical intelligent tutoring system in dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study, representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states, and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. Frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.

  5. Hematocrit correction does not improve glucose monitor accuracy in the assessment of neonatal hypoglycemia.

    PubMed

    Wang, Li; Sievenpiper, John L; de Souza, Russell J; Thomaz, Michele; Blatz, Susan; Grey, Vijaylaxmi; Fusch, Christoph; Balion, Cynthia

    2013-08-01

    The lack of accuracy of point of care (POC) glucose monitors has limited their use in the diagnosis of neonatal hypoglycemia. Hematocrit plays an important role in explaining discordant results. The objective of this study was to assess the effect of hematocrit on the diagnostic performance of Abbott Precision Xceed Pro (PXP) and Nova StatStrip (StatStrip) monitors in neonates. All blood samples ordered for laboratory glucose measurement were analyzed using the PXP and StatStrip and compared with the laboratory analyzer (ABL 800 Blood Gas analyzer [ABL]). Acceptable error targets were ±15% for glucose monitoring and ±5% for diagnosis. A total of 307 samples from 176 neonates were analyzed. Overall, 90% of StatStrip and 75% of PXP values met the 15% error limit, and 45% of StatStrip and 32% of PXP values met the 5% error limit. At glucose concentrations ≤4 mmol/L, 83% of StatStrip and 79% of PXP values met the 15% error limit, while 37% of StatStrip and 38% of PXP values met the 5% error limit. Hematocrit explained 7.4% of the difference between the PXP and ABL, whereas it accounted for only 0.09% of the difference between the StatStrip and ABL. The ROC analysis showed the screening cut point with the best performance for identifying neonatal hypoglycemia was 3.2 mmol/L for StatStrip and 3.3 mmol/L for PXP. Despite a negligible hematocrit effect for the StatStrip, it did not achieve recommended error limits. The StatStrip and PXP glucose monitors remain suitable only for neonatal hypoglycemia screening, with confirmation required from a laboratory analyzer.

  6. The Frame Constraint on Experimentally Elicited Speech Errors in Japanese

    ERIC Educational Resources Information Center

    Saito, Akie; Inoue, Tomoyoshi

    2017-01-01

    The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which is separately operating from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the…

  7. Characterizing new compositions of [001]C relaxor ferroelectric single crystals using a work-energy model

    NASA Astrophysics Data System (ADS)

    Gallagher, John A.

    2016-04-01

    The desired operating range of ferroelectric materials with compositions near the morphotropic phase boundary is limited by field induced phase transformations. In [001]C cut and poled relaxor ferroelectric single crystals the mechanically driven ferroelectric rhombohedral to ferroelectric orthorhombic phase transformation is hindered by antagonistic electrical loading. Instability around the phase transformation makes the current experimental technique for characterization of the large field behavior very time consuming. Characterization requires specialized equipment and involves an extensive set of measurements under combined electrical, mechanical, and thermal loads. In this work a mechanism-based model is combined with a more limited set of experiments to obtain the same results. The model utilizes a work-energy criterion that calculates the mechanical work required to induce the transformation and the required electrical work that is removed to reverse the transformation. This is done by defining energy barriers to the transformation. The results of the combined experiment and modeling approach are compared to the fully experimental approach and error is discussed. The model shows excellent predictive capability and is used to substantially reduce the total number of experiments required for characterization. This decreases the time and resources required for characterization of new compositions.

  8. Optical Communication with Semiconductor Laser Diode. Interim Progress Report. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Davidson, Frederic; Sun, Xiaoli

    1989-01-01

    Theoretical and experimental performance limits of a free-space direct detection optical communication system were studied using a semiconductor laser diode as the optical transmitter and a silicon avalanche photodiode (APD) as the receiver photodetector. Optical systems using these components are under consideration as replacements for microwave satellite communication links. Optical pulse position modulation (PPM) was chosen as the signal format. An experimental system was constructed that used an aluminum gallium arsenide semiconductor laser diode as the transmitter and a silicon avalanche photodiode photodetector. The system used Q=4 PPM signaling at a source data rate of 25 megabits per second. The PPM signal format requires regeneration of PPM slot clock and word clock waveforms in the receiver. A nearly exact computational procedure was developed to compute receiver bit error rate without using the Gaussian approximation. A transition detector slot clock recovery system using a phase lock loop was developed and implemented. A novel word clock recovery system was also developed. It was found that the results of the nearly exact computational procedure agreed well with actual measurements of receiver performance. The receiver sensitivity achieved was the closest to the quantum limit yet reported for an optical communication system of this type.
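
    As context for the quoted quantum limit, a commonly used benchmark for direct-detection M-ary PPM with ideal photon counting and no background is that a symbol error occurs only when zero signal photons arrive and the decoder guesses a slot. This sketch is our illustration of that textbook bound, not the paper's exact error-rate computation:

        import numpy as np

        def ppm_symbol_error(mean_signal_photons, M=4):
            """Quantum-limited symbol error rate for M-ary PPM (no background)."""
            p_erasure = np.exp(-mean_signal_photons)   # Poisson P(0 photons in slot)
            return p_erasure * (M - 1) / M             # wrong slot after random guess

        for ks in (5, 10, 20):
            print(f"Ks={ks:2d} photons/pulse -> Ps={ppm_symbol_error(ks):.2e}")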

  9. The detection of problem analytes in a single proficiency test challenge in the absence of the Health Care Financing Administration rule violations.

    PubMed

    Cembrowski, G S; Hackney, J R; Carey, N

    1993-04-01

    The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices, having mandated (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron, was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs (probability of error detection versus magnitude of error). Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
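
    The recommended rules lend themselves to direct implementation. A sketch for one analyte's five PT results expressed in SDI units, following the rule set stated above (tie-breaking and edge cases are our assumptions):

        def evaluate_pt(sdi_values, sg_over_si):
            """Flag systematic/random error per the recommended screening rules."""
            flags = []
            n_high = sum(v > 1.0 for v in sdi_values)
            n_low  = sum(v < -1.0 for v in sdi_values)
            if max(n_high, n_low) < 2:        # screening: >=2 beyond the same +/-1 SDI
                return flags
            mean = sum(sdi_values) / len(sdi_values)
            spread = max(sdi_values) - min(sdi_values)
            if 1.0 <= sg_over_si <= 1.5:
                if abs(mean) > 1.0:
                    flags.append("systematic error")
                if any(abs(v) > 3.0 for v in sdi_values) or spread > 4.0:
                    flags.append("random error")
            else:   # higher sg/si: screening violation itself signals an error
                flags.append("systematic or random error")
                if any(abs(v) > 1.5 for v in sdi_values) or spread > 3.0:
                    flags.append("random error")
            return flags

        print(evaluate_pt([1.2, 1.4, 0.8, 1.1, 0.9], sg_over_si=1.2))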

  10. Optimization of multimagnetometer systems on a spacecraft

    NASA Technical Reports Server (NTRS)

    Neubauer, F. M.

    1975-01-01

    The problem of optimizing the position of magnetometers along a boom of given length to yield a minimized total error is investigated. The discussion is limited to at most four magnetometers, which seems to be a practical limit due to weight, power, and financial considerations. The outlined error analysis is applied to some illustrative cases. The optimal magnetometer locations, for which the total error is minimum, are computed for given boom length, instrument errors, and very conservative magnetic field models characteristic for spacecraft with only a restricted or ineffective magnetic cleanliness program. It is shown that the error contribution by the magnetometer inaccuracy is increased as the number of magnetometers is increased, whereas the spacecraft field uncertainty is diminished by an appreciably larger amount.

  11. Investigation of experimental pole-figure errors by simulation of individual spectra

    NASA Astrophysics Data System (ADS)

    Lychagina, T. A.; Nikolaev, D. I.

    2007-09-01

    The errors in measuring the crystallographic texture described by pole figures are studied. A set of diffraction spectra was measured for a sample of the MA2-1 alloy (Mg + 4.5% Al + 1% Zn), the individual spectra from which the pole figures are obtained were simulated, and the pole-figure errors were determined. The results confirm the conclusion, drawn in our previous studies, that the effect of errors in the diffraction peak half-width on the pole-figure errors can be determined.

  12. Computational Investigation of In-Flight Temperature in Shaped Charge Jets and Explosively Formed Penetrators

    NASA Astrophysics Data System (ADS)

    Sable, Peter; Helminiak, Nathaniel; Harstad, Eric; Gullerud, Arne; Hollenshead, Jeromy; Hertel, Eugene; Sandia National Laboratories Collaboration; Marquette University Collaboration

    2017-06-01

    With the increasing use of hydrocodes in modeling and system design, experimental benchmarking of software has never been more important. While this has been a large area of focus since the inception of computational design, comparisons with temperature data are sparse due to experimental limitations. A novel temperature measurement technique, magnetic diffusion analysis, has enabled the acquisition of in-flight temperature measurements of hypervelocity projectiles. Using this, an AC-14 bare shaped charge and an LX-14 EFP, both with copper linings, were simulated using CTH to benchmark temperature against experimental results. Particular attention was given to the slug temperature profiles after separation and to the effect of varying the equation-of-state and strength models. Simulations are in agreement with experiment, attaining better than 2% error between observed and simulated shaped charge temperatures; this varied notably depending on the strength model used. Similar observations were made when simulating the EFP case, with a minimum 4% deviation. Jet structures compare well with radiographic images and are consistent with ALEGRA simulations previously conducted. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  13. Modeling and validation of autoinducer-mediated bacterial gene expression in microfluidic environments

    PubMed Central

    Austin, Caitlin M.; Stoy, William; Su, Peter; Harber, Marie C.; Bardill, J. Patrick; Hammer, Brian K.; Forest, Craig R.

    2014-01-01

    Biosensors exploiting communication within genetically engineered bacteria are becoming increasingly important for monitoring environmental changes. Currently, there are a variety of mathematical models for understanding and predicting how genetically engineered bacteria respond to molecular stimuli in these environments, but as sensors have miniaturized towards microfluidics and are subjected to complex time-varying inputs, the shortcomings of these models have become apparent. Microfluidic effects such as low oxygen concentration, increased biofilm encapsulation, diffusion-limited molecular distribution, and higher population densities strongly alter rate constants for gene expression that are not accounted for in previous models. We report a mathematical model that accurately predicts the biological response of autoinducer N-acyl homoserine lactone-mediated green fluorescent protein expression in reporter bacteria in microfluidic environments by accommodating these rate constants. This generalized mass action model considers a chain of biomolecular events from input autoinducer chemical to fluorescent protein expression through a series of six chemical species. We have validated this model against experimental data from our own apparatus as well as prior published experimental results. Results indicate accurate prediction of dynamics (e.g., 14% peak-time error for a pulse input) and reduced mean-squared error for pulse or step inputs over a range of concentrations (10 μM–30 μM). This model can help advance the design of genetically engineered bacteria sensors and molecular communication devices. PMID:25379076
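
    The model structure described above, a chain of six species linking the autoinducer input to fluorescent protein output, can be sketched as a mass-action ODE system. All rate constants and the input pulse below are hypothetical placeholders, not the fitted values from the paper:

        import numpy as np
        from scipy.integrate import solve_ivp

        k_prod = [0.8, 0.7, 0.9, 0.6, 0.8, 0.5]   # hypothetical production constants
        k_dec  = [0.3, 0.2, 0.4, 0.3, 0.2, 0.1]   # hypothetical decay constants

        def chain(t, y, u):
            """Each species is produced from its upstream neighbor and decays."""
            dy = np.empty(6)
            upstream = u(t)
            for i in range(6):
                dy[i] = k_prod[i] * upstream - k_dec[i] * y[i]
                upstream = y[i]
            return dy

        ahl_pulse = lambda t: 20.0 if 1.0 < t < 3.0 else 0.0   # 20 uM AHL pulse input
        sol = solve_ivp(chain, (0, 30), np.zeros(6), args=(ahl_pulse,), max_step=0.05)
        print("peak GFP proxy:", sol.y[-1].max())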

  14. Inter-satellite links for satellite autonomous integrity monitoring

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pérez, Irma; García-Serrano, Cristina; Catalán Catalán, Carlos; García, Alvaro Mozo; Tavella, Patrizia; Galleani, Lorenzo; Amarillo, Francisco

    2011-01-01

    A new integrity monitoring mechanism to be implemented on board a GNSS, taking advantage of inter-satellite links, is introduced. It is based on accurate range and Doppler measurements affected neither by atmospheric delays nor by local ground degradation (multipath and interference). By a linear combination of the inter-satellite link observables, appropriate observables for both satellite orbit and clock monitoring are obtained, and with the proposed algorithms it is possible to reduce the time-to-alarm and the probability of undetected satellite anomalies. Several test cases have been run to assess the performance of the new orbit and clock monitoring algorithms against a complete scenario (satellite-to-satellite and satellite-to-ground links) and a satellite-only scenario. The results of this experimentation campaign demonstrate that the orbit monitoring algorithm is able to detect orbital feared events while the position error at the worst user location is still within acceptable limits. For instance, an unplanned manoeuvre in the along-track direction is detected (with a probability of false alarm equal to 5 × 10⁻⁹) when the position error at the worst user location is 18 cm. The experimentation also reveals that the clock monitoring algorithm is able to detect phase jumps, frequency jumps, and instability degradation in the clocks, but the latency and performance of detection depend strongly on the noise added by the clock measurement system.

  15. Quantum chemical modeling of zeolite-catalyzed methylation reactions: toward chemical accuracy for barriers.

    PubMed

    Svelle, Stian; Tuma, Christian; Rozanska, Xavier; Kerber, Torsten; Sauer, Joachim

    2009-01-21

    The methylation of ethene, propene, and t-2-butene by methanol over the acidic microporous H-ZSM-5 catalyst has been investigated by a range of computational methods. Density functional theory (DFT) with periodic boundary conditions (PBE functional) fails to describe the experimentally determined decrease of apparent energy barriers with the alkene size due to inadequate description of dispersion forces. Adding a damped dispersion term expressed as a parametrized sum over atom-pair C6 contributions leads to uniformly underestimated barriers due to self-interaction errors. A hybrid MP2:DFT scheme is presented that combines MP2 energy calculations on a series of cluster models of increasing size with periodic DFT calculations, which allows extrapolation to the periodic MP2 limit. Additionally, errors caused by the use of finite basis sets, contributions of higher order correlation effects, zero-point vibrational energy, and thermal contributions to the enthalpy were evaluated and added to the "periodic" MP2 estimate. This multistep approach leads to enthalpy barriers at 623 K of 104, 77, and 48 kJ/mol for ethene, propene, and t-2-butene, respectively, which deviate from the experimentally measured values by 0, +13, and +8 kJ/mol. Hence, enthalpy barriers can be calculated with near chemical accuracy, which constitutes significant progress in the quantum chemical modeling of reactions in heterogeneous catalysis in general and microporous zeolites in particular.
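
    The hybrid scheme admits a compact statement. In our notation (an assumption; the paper's exact composition may include further terms), the periodic MP2 estimate is

        E_{\text{MP2:DFT}} \approx E_{\text{DFT}}^{\text{periodic}}
            + \lim_{S \to \infty} \left[ E_{\text{MP2}}(S) - E_{\text{DFT}}(S) \right],

    where S indexes cluster models of increasing size and the bracketed high-level correction is extrapolated to the infinite-cluster (periodic) limit before the basis set, higher-order correlation, and thermal corrections listed above are added.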

  16. Chemical and Thermodynamic Properties at High Temperatures: A Symposium

    NASA Technical Reports Server (NTRS)

    Walker, Raymond F.

    1961-01-01

    This book contains the program and all available abstracts of the 90 invited and contributed papers to be presented at the IUPAC Symposium on Chemical and Thermodynamic Properties at High Temperatures. The Symposium will be held in conjunction with the XVIIIth IUPAC Congress, Montreal, August 6-12, 1961. It has been organized by the Subcommissions on Condensed States and on Gaseous States of the Commission on High Temperatures and Refractories and by the Subcommission on Experimental Thermodynamics of the Commission on Chemical Thermodynamics, acting in conjunction with the Organizing Committee of the IUPAC Congress. All inquiries concerning participation in the Symposium should be directed to: Secretary, XVIIIth International Congress of Pure and Applied Chemistry, National Research Council, Ottawa, Canada. Owing to the limited time and facilities available for the preparation and printing of the book, it has not been possible to refer the proofs of the abstracts to the authors for checking. Furthermore, it has not been possible to subject the manuscripts to a very thorough editorial examination. Some obvious errors in the manuscripts have been corrected; other errors undoubtedly have been introduced. Figures have been redrawn only when such a step was essential for reproduction purposes. Sincere apologies are offered to authors and readers for any errors which remain; however, in the circumstances neither the IUPAC Commissions who organized the Symposium, nor the U.S. Government Agencies who assisted in the preparation of this book, can accept responsibility for the errors.

  17. Effects of various experimental parameters on errors in triangulation solution of elongated object in space. [barium ion cloud

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1975-01-01

    The effects of various experimental parameters on the displacement errors in the triangulation solution of an elongated object in space due to pointing uncertainties in the lines of sight have been determined. These parameters were the number and location of observation stations, the object's location in latitude and longitude, and the spacing of the input data points on the azimuth-elevation image traces. The displacement errors due to uncertainties in the coordinates of a moving station have been determined as functions of the number and location of the stations. The effects of incorporating the input data from additional cameras at one of the stations were also investigated.

  18. Appropriate Objective Functions for Quantifying Iris Mechanical Properties Using Inverse Finite Element Modeling.

    PubMed

    Pant, Anup D; Dorairaj, Syril K; Amini, Rouzbeh

    2018-07-01

    Quantifying the mechanical properties of the iris is important, as it provides insight into the pathophysiology of glaucoma. Recent ex vivo studies have shown that the mechanical properties of the iris are different in glaucomatous eyes as compared to normal ones. Notwithstanding the importance of the ex vivo studies, such measurements are severely limited for diagnosis and preclude development of treatment strategies. With the advent of detailed imaging modalities, it is possible to determine the in vivo mechanical properties using inverse finite element (FE) modeling. An inverse modeling approach requires an appropriate objective function for reliable estimation of parameters. In the case of the iris, numerous measurements such as iris chord length (CL) and iris concavity (CV) are made routinely in clinical practice. In this study, we have evaluated five different objective functions chosen based on the iris biometrics (in the presence and absence of clinical measurement errors) to determine the appropriate criterion for inverse modeling. Our results showed that in the absence of experimental measurement error, a combination of iris CL and CV can be used as the objective function. However, with the addition of measurement errors, the objective functions that employ a large number of local displacement values provide more reliable outcomes.

  19. Experimental investigation of heat transfer coefficient of mini-channel PCHE (printed circuit heat exchanger)

    NASA Astrophysics Data System (ADS)

    Kwon, Dohoon; Jin, Lingxue; Jung, WooSeok; Jeong, Sangkwon

    2018-06-01

    The heat transfer coefficient of a mini-channel printed circuit heat exchanger (PCHE) with counter-flow configuration is investigated. The PCHE used in the experiments has two layers (10 channels per layer) and a hydraulic diameter of 1.83 mm. Experiments are conducted under various cryogenic heat transfer conditions: single-phase, boiling, and condensation heat transfer. Heat transfer coefficients for each experiment are presented and compared with established correlations. For the single-phase experiments, an empirical modified Dittus-Boelter correlation is proposed, which predicts the experimental results within 5% error over a Reynolds number range from 8500 to 17,000. In the boiling experiments, film boiling occurred dominantly due to the large temperature difference between the hot-side and cold-side fluids; an empirical correlation is proposed which predicts the experimental results within 20% error over a Reynolds number range from 2100 to 2500. For the condensation experiments, an empirical modified Akers correlation is proposed, which predicts the experimental results within 10% error over a Reynolds number range from 3100 to 6200.
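
    For reference, the classical Dittus-Boelter form from which such modifications start is

        \mathrm{Nu} = 0.023\, \mathrm{Re}^{0.8}\, \mathrm{Pr}^{n}, \qquad
        h = \frac{\mathrm{Nu}\, k}{D_h},

    with n = 0.4 for heating and n = 0.3 for cooling; the modified coefficients fitted in this work are not given in the abstract, so the expression above should be read as the unmodified baseline rather than the proposed correlation.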

  20. Metainference: A Bayesian inference method for heterogeneous systems.

    PubMed

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.

  1. Simulation and experimental research of 1MWe solar tower power plant in China

    NASA Astrophysics Data System (ADS)

    Yu, Qiang; Wang, Zhifeng; Xu, Ershu

    2016-05-01

    The establishment of a reliable simulation system for a solar tower power plant can greatly increase the economic and safety performance of the whole system. In this paper, a dynamic model of the 1MWe Solar Tower Power Plant at Badaling in Beijing is developed based on the "STAR-90" simulation platform, including the heliostat field, the central receiver system (water/steam), etc. The dynamic behavior of the whole CSP plant can be simulated. In order to verify the validity of the simulation system, a complete experimental process was synchronously simulated by repeating the same operating steps on the simulation platform, including the locations and number of heliostats, the mass flow of the feed water, etc. Based on the simulation and experimental results, several important parameters are selected for a detailed comparison. The results show that there is good agreement between the simulations and the experimental results and that the error range is acceptable considering the error of the models. Finally, a comprehensive analysis of the error sources is carried out based on the comparative results.

  2. Jumping to the wrong conclusions? An investigation of the mechanisms of reasoning errors in delusions.

    PubMed

    Jolley, Suzanne; Thompson, Claire; Hurley, James; Medin, Evelina; Butler, Lucy; Bebbington, Paul; Dunn, Graham; Freeman, Daniel; Fowler, David; Kuipers, Elizabeth; Garety, Philippa

    2014-10-30

    Understanding how people with delusions arrive at false conclusions is central to the refinement of cognitive behavioural interventions. Making hasty decisions based on limited data ('jumping to conclusions', JTC) is one potential causal mechanism, but reasoning errors may also result from other processes. In this study, we investigated the correlates of reasoning errors under differing task conditions in 204 participants with schizophrenia spectrum psychosis who completed three probabilistic reasoning tasks. Psychotic symptoms, affect, and IQ were also evaluated. We found that hasty decision makers were more likely to draw false conclusions, but only 37% of their reasoning errors were consistent with the limited data they had gathered. The remainder directly contradicted all the presented evidence. Reasoning errors showed task-dependent associations with IQ, affect, and psychotic symptoms. We conclude that limited data-gathering contributes to false conclusions but is not the only mechanism involved. Delusions may also be maintained by a tendency to disregard evidence. Low IQ and emotional biases may contribute to reasoning errors in more complex situations. Cognitive strategies to reduce reasoning errors should therefore extend beyond encouragement to gather more data, and incorporate interventions focused directly on these difficulties. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  3. On the validity of the basis set superposition error and complete basis set limit extrapolations for the binding energy of the formic acid dimer

    NASA Astrophysics Data System (ADS)

    Miliordos, Evangelos; Xantheas, Sotiris S.

    2015-03-01

    We report the variation of the binding energy of the formic acid dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE) correction for this quantity, which was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates of the binding energy of this system, as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all-electron correlation converges faster to the CBS limit, as the BSSE correction is less than half that of the valence-electron/valence-basis-set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates of De = −16.1 ± 0.1 kcal/mol and D0 = −14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of −14.22 ± 0.12 kcal/mol.
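
    For context, the geometry-aware counterpoise correction referred to above is conventionally written (subscripts denote the fragment, superscripts the basis, and the argument the geometry; this is the standard textbook form, not a formula quoted from the paper) as

        \Delta E^{\mathrm{CP}} = E_{AB}^{AB}(G_{AB}) - E_{A}^{AB}(G_{AB}) - E_{B}^{AB}(G_{AB})
            + \sum_{X=A,B} \left[ E_{X}^{X}(G_{AB}) - E_{X}^{X}(G_{X}) \right],

    where the final sum accounts for the monomer deformation upon dimer formation, i.e., the substantial geometry change noted above for the doubly hydrogen-bonded dimer.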

  4. Identifying Keystone Species in the Human Gut Microbiome from Metagenomic Timeseries Using Sparse Linear Regression

    PubMed Central

    Fisher, Charles K.; Mehta, Pankaj

    2014-01-01

    Human-associated microbial communities exert tremendous influence over human health and disease. With modern metagenomic sequencing methods it is now possible to follow the relative abundance of microbes in a community over time. These microbial communities exhibit rich ecological dynamics, and an important goal of microbial ecology is to infer the ecological interactions between species directly from sequence data. Any algorithm for inferring ecological interactions must overcome three major obstacles: 1) a correlation between the abundances of two species does not imply that those species are interacting, 2) the sum constraint on the relative abundances obtained from metagenomic studies makes it difficult to infer the parameters in timeseries models, and 3) errors due to experimental uncertainty, or mis-assignment of sequencing reads into operational taxonomic units, bias inferences of species interactions due to a statistical problem called "errors-in-variables". Here we introduce an approach, Learning Interactions from MIcrobial Time Series (LIMITS), that overcomes these obstacles. LIMITS uses sparse linear regression with bootstrap aggregation to infer a discrete-time Lotka-Volterra model for microbial dynamics. We tested LIMITS on synthetic data and showed that it could reliably infer the topology of the inter-species ecological interactions. We then used LIMITS to characterize the species interactions in the gut microbiomes of two individuals and found that the interaction networks varied significantly between individuals. Furthermore, we found that the interaction networks of the two individuals are dominated by distinct "keystone species", Bacteroides fragilis and Bacteroides stercoris, that have a disproportionate influence on the structure of the gut microbiome even though they are only found in moderate abundance. Based on our results, we hypothesize that the abundances of certain keystone species may be responsible for individuality in the human gut microbiome. PMID:25054627
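
    The inference step at the heart of LIMITS can be sketched as sparse regression on a discrete-time Lotka-Volterra model, x_{t+1} = x_t · exp(r + A x_t), with bootstrap aggregation of the fitted interaction matrix. This is a minimal schematic with synthetic data, and an off-the-shelf Lasso penalty stands in for the paper's forward-stepwise variable selection:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)
        T, S = 200, 5
        A_true = np.diag(np.full(S, -0.1)); A_true[0, 1] = 0.05   # one interaction
        r = np.full(S, 0.1)
        x = np.empty((T, S)); x[0] = 1.0
        for t in range(T - 1):   # simulate the discrete-time Lotka-Volterra model
            x[t + 1] = x[t] * np.exp(r + x[t] @ A_true.T + rng.normal(0, 0.01, S))

        y = np.log(x[1:] / x[:-1])        # per-step log growth rates
        boots = []
        for _ in range(50):               # bootstrap aggregation of sparse fits
            idx = rng.integers(0, T - 1, T - 1)
            fit = Lasso(alpha=1e-3).fit(x[:-1][idx], y[idx])
            boots.append(fit.coef_)
        A_hat = np.mean(boots, axis=0)    # bagged estimate of the interaction matrix
        print(np.round(A_hat, 2))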

  5. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  6. Effects of Error Experience When Learning to Simulate Hypernasality

    ERIC Educational Resources Information Center

    Wong, Andus W.-K.; Tse, Andy C.-Y.; Ma, Estella P.-M.; Whitehill, Tara L.; Masters, Rich S. W.

    2013-01-01

    Purpose: The purpose of this study was to evaluate the effects of error experience on the acquisition of hypernasal speech. Method: Twenty-eight healthy participants were asked to simulate hypernasality in either an "errorless learning" condition (in which the possibility for errors was limited) or an "errorful learning"…

  7. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (coordinate measuring machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. Mathematically, the number of samples can be minimized by eliminating redundant equations from the set configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the measurement uncertainty can be reduced by 50%.

  8. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    PubMed

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve (AUC) for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC(0-t) (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time point models achieved R² > 0.90. Among the various 3-concentration-time point models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC(0-t) prediction of saroglitazar. The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30%, and correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time point limited sampling model predicts the exposure of saroglitazar (ie, AUC(0-t)) within the predefined acceptable bias and imprecision limits. The same model was also used to predict AUC(0-∞). The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
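
    The following sketch shows the general shape of a 3-concentration-time-point limited sampling model of the kind described above: a linear regression of AUC on concentrations at 0.5, 2, and 8 hours, evaluated with the same error metrics. The synthetic data and coefficients are placeholders, not study values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 25                                           # training subjects
C = rng.lognormal(1.0, 0.3, size=(n, 3))         # conc. at 0.5, 2, 8 h
auc = 2.0 + C @ np.array([1.5, 3.0, 8.0]) + rng.normal(0, 0.5, n)

model = LinearRegression().fit(C, auc)
pred = model.predict(C)

r2 = model.score(C, auc)                         # screen: keep only R² > 0.90
mpe = np.mean((pred - auc) / auc) * 100          # mean prediction error, %
mape = np.mean(np.abs(pred - auc) / auc) * 100   # mean absolute prediction error, %
rmse = np.sqrt(np.mean((pred - auc) ** 2))       # root mean square error
```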

  9. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  10. Methods of automatic nucleotide-sequence analysis. Multicomponent spectrophotometric analysis of mixtures of nucleic acid components by a least-squares procedure

    PubMed Central

    Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.

    1965-01-01

    1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the `library' of spectra used to fit the experimental curves, have been computed for a number of `libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
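
    A compact sketch of the least-squares machinery described above, assuming a `library' matrix of extinction spectra; it also computes per-component error coefficients, which, as stated in point 2, depend only on the library. All spectra here are randomly generated placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.abs(rng.normal(1.0, 0.3, size=(40, 4)))  # library: 40 wavelengths x 4 nucleosides
c_true = np.array([0.3, 0.2, 0.4, 0.1])
E = A @ c_true + rng.normal(0, 0.01, 40)        # measured extinctions + noise

c_hat, *_ = np.linalg.lstsq(A, E, rcond=None)   # least-squares concentrations

# Sensitivity of fitted concentrations to extinction errors: rows of
# (A^T A)^{-1} A^T; their RMS gives the error amplification per component.
S = np.linalg.inv(A.T @ A) @ A.T
error_coeff = np.sqrt((S ** 2).sum(axis=1))
```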

  11. Experimental power spectral density analysis for mid- to high-spatial frequency surface error control.

    PubMed

    Hoyo, Javier Del; Choi, Heejoo; Burge, James H; Kim, Geon-Hee; Kim, Dae Wook

    2017-06-20

    The control of surface errors as a function of spatial frequency is critical during the fabrication of modern optical systems. Large-scale surface figure error is controlled by a guided removal process, such as computer-controlled optical surfacing. Smaller-scale surface errors are controlled by polishing process parameters. Surface errors with spatial periods of only a few millimeters may degrade the performance of an optical system, causing background noise from scattered light and reducing imaging contrast for large optical systems. Conventionally, the microsurface roughness is given as the root mean square over a high spatial frequency range, evaluated on a 0.5×0.5  mm local surface map with 500×500 pixels. This surface specification is not adequate to fully describe the characteristics required for advanced optical systems. The process for controlling and minimizing mid- to high-spatial frequency surface errors with periods of up to ∼2-3  mm was investigated for many optical fabrication conditions using the measured surface power spectral density (PSD) of a finished Zerodur optical surface. The surface PSD was then systematically related to various fabrication process parameters, such as the grinding methods, polishing interface materials, and polishing compounds. The retraceable experimental polishing conditions and processes used to produce an optimal optical surface PSD are presented.
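
    For orientation, a rough one-dimensional PSD estimate of the kind used above can be computed from a sampled height profile; this sketch omits windowing, detrending beyond the mean, and instrument calibration, all of which matter in practice.

```python
import numpy as np

def surface_psd(heights, dx):
    """heights: 1-D surface profile [m]; dx: sample spacing [m].
    Returns spatial frequencies [1/m] and PSD [m^3]."""
    n = len(heights)
    H = np.fft.rfft(heights - heights.mean())
    freqs = np.fft.rfftfreq(n, d=dx)
    psd = (np.abs(H) ** 2) * dx / n   # periodogram normalization
    return freqs, psd
```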

  12. Fault-tolerant quantum error detection.

    PubMed

    Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher

    2017-10-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.

  13. Fast scattering simulation tool for multi-energy x-ray imaging

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Tabary, J.; Rebuffel, V.; Létang, J. M.; Freud, N.; Verger, L.

    2015-12-01

    A combination of Monte Carlo (MC) and deterministic approaches was employed as a means of creating a simulation tool capable of providing energy resolved x-ray primary and scatter images within a reasonable time interval. Libraries of Sindbad, a previously developed x-ray simulation software, were used in the development. The scatter simulation capabilities of the tool were validated through simulation with the aid of GATE and through experimentation by using a spectrometric CdTe detector. A simple cylindrical phantom with cavities and an aluminum insert was used. Cross-validation with GATE showed good agreement with a global spatial error of 1.5% and a maximum scatter spectrum error of around 6%. Experimental validation also supported the accuracy of the simulations obtained from the developed software with a global spatial error of 1.8% and a maximum error of around 8.5% in the scatter spectra.

  14. Extending the Measurement Range of an Optical Surface Profiler.

    NASA Astrophysics Data System (ADS)

    Cochran, Eugene Rowland, III

    This dissertation investigates a method for extending the measurement range of an optical surface profiling instrument. The instrument examined in these experiments is a computer-controlled phase-modulated interference microscope. Because of its ability to measure surfaces with a high degree of vertical resolution as well as excellent lateral resolution, this instrument is one of the most favorable candidates for determining the microtopography of optical surfaces. However, the data acquired by the instrument are restricted to a finite lateral and vertical range. To overcome this restriction, the feasibility of a new testing technique is explored. By overlapping a series of collinear profiles the limited field of view of this instrument can be increased and profiles that contain longer surface wavelengths can be examined. This dissertation also presents a method to augment both the vertical and horizontal dynamic range of the surface profiler by combining multiple subapertures and two-wavelength techniques. The theory, algorithms, error sources, and limitations encountered when concatenating a number of profiles are presented. In particular, the effects of accumulated piston and tilt errors on a measurement are explored. Some practical considerations for implementation and integration into an existing system are presented. Experimental findings and results of Monte Carlo simulations are also studied to explain the effects of random noise, lateral position errors, and defocus across the CCD array on measurement results. These results indicate the extent to which the field of view of the profiler may be augmented. A review of current methods of measuring surface topography is included, to provide for a more coherent text, along with a summary of pertinent measurement parameters for surface characterization. This work concludes with recommendations for future work that would make subaperture-testing techniques more reliable for measuring the microsurface structure of a material over an extended region.
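
    As a hedged illustration of the concatenation step discussed above, the sketch below aligns two overlapping collinear profiles by solving for the piston and tilt of the second profile that minimize the overlap mismatch in a least-squares sense; accumulated-error bookkeeping and the two-wavelength extension are omitted.

```python
import numpy as np

def stitch(x1, z1, x2, z2):
    """x*, z*: sorted position and height arrays with overlapping x ranges."""
    lo, hi = max(x1.min(), x2.min()), min(x1.max(), x2.max())
    xo = np.linspace(lo, hi, 50)                       # overlap grid
    d = np.interp(xo, x1, z1) - np.interp(xo, x2, z2)  # mismatch in overlap
    tilt, piston = np.polyfit(xo, d, 1)                # fit mismatch = piston + tilt*x
    z2_adj = z2 + piston + tilt * x2                   # align second profile
    keep = x2 > x1.max()
    return (np.concatenate([x1, x2[keep]]),
            np.concatenate([z1, z2_adj[keep]]))
```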

  15. Experimental analysis and modeling of melt growth processes

    NASA Astrophysics Data System (ADS)

    Müller, Georg

    2002-04-01

    Melt growth processes provide the basic crystalline materials for many applications. The research and development of crystal growth processes is therefore driven by the demands which arise from these specific applications; however, common goals include an increased uniformity of the relevant crystal properties at the micro- and macro-scale, a decrease of deleterious crystal defects, and an increase of crystal dimensions. As melt growth equipment and experimentation become more and more expensive, little room remains for improvements by trial-and-error procedures. A more successful strategy is to optimize the crystal growth process by a combined use of experimental process analysis and computer modeling. This is demonstrated in this paper by several examples from the bulk growth of silicon, gallium arsenide, indium phosphide, and calcium fluoride. These examples also involve the most important melt growth techniques, crystal pulling (Czochralski methods) and vertical gradient freeze (Bridgman-type methods). The power and success of the above optimization strategy, however, are not limited to the given examples but can be generalized and applied to many types of bulk crystal growth.

  16. Density functional study on redox energetics of LaMO{sub 3−δ} (M=Sc–Cu) perovskite-type oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pishahang, Mehdi, E-mail: Mehdi.Pishahang@sintef.no; Erik Mohn, Chris; Stølen, Svein

    2016-01-15

    This study evaluates the redox energetics of LaMO{sub 3−δ} (M=Sc–Cu) perovskite-type oxides via the generalized gradient approximation (GGA) to DFT. Two different approaches to the redox energetics of oxygen-deficient perovskites, in the strongly non-stoichiometric (δ=0.5) and dilute defect (δ→0) limits, are studied. In the first approach the enthalpies of oxidation are calculated using the stoichiometric end-compounds LaMO{sub 3} and LaMO{sub 2.5}. The most common structures for the reduced lanthanides and strontides, similar to the ones experimentally reported for SrMnO{sub 2.5}, SrFeO{sub 2.5}, and LaNiO{sub 2.5}, are considered. The second approach to the oxidation enthalpies, termed (δ→0), follows the trend observed experimentally. This approach represents the experimental conditions of the measured oxygen enthalpies, and is hampered less by artificial features due to spurious self-interaction errors in GGA.

  17. A practical limit to trials needed in one-person randomized controlled experiments.

    PubMed

    Alemi, Roshan; Alemi, Farrokh

    2007-01-01

    Recently in this journal, J. Olsson and colleagues suggested the use of factorial experimental designs to guide a patient's efforts to choose among multiple interventions. These authors argue that factorial design, where every possible combination of the interventions is tried, is superior to sequential trial and error. Factorial design is efficient in identifying the effectiveness of interventions (factor effect). Most patients, however, care only about feeling better and not why their conditions are improving. If the goal of the patient is to get better and not to estimate the factor effect, then no control groups are needed. In this article, we show a modification of the factorial design of experiments proposed by Olsson and colleagues in which a full-factorial design is planned, but experimentation stops when the patient's condition improves. With this modification, the number of trials is radically fewer than the number needed by a factorial design. For example, a patient trying out 4 different interventions with a median probability of success of .50 is expected to need 2 trials before stopping the experimentation, in comparison with 32 in a full-factorial design.
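
    A quick simulation confirms the expected-trial arithmetic above: stopping at the first success, the expected number of trials is 1/p = 2 when p = .50. (With 4 interventions a full factorial has 2⁴ = 16 cells; the figure of 32 quoted above presumably reflects replication.)

```python
import numpy as np

rng = np.random.default_rng(3)
trials = []
for _ in range(100_000):
    n = 0
    while True:
        n += 1
        if rng.random() < 0.5:   # the patient improves on this trial
            break
    trials.append(n)
print(np.mean(trials))           # ≈ 2.0, versus 32 for the full factorial
```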

  18. Kalman filter based control for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry

    2004-12-01

    Classical Adaptive Optics suffers from a limited corrected Field Of View. This drawback has led to the development of MultiConjugated Adaptive Optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop. This is however a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: simple integrator, Optimized Modal Gain Integrator and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequential characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can lead to filtering of static aberrations and vibrations. Simulation results are proposed and analysed thanks to our frequential characterization. Related problems such as model errors, aliasing effect reduction, and experimental implementation and testing of a Kalman filter control loop on a simplified MCAO experimental set-up can then be discussed.
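
    For readers unfamiliar with the approach, a generic scalar Kalman filter predict/update cycle is sketched below; it is not the MCAO controller studied in the paper, and all model parameters are placeholders.

```python
def kalman_step(x, P, z, A=1.0, H=1.0, Q=1e-4, R=1e-2):
    """One predict/update cycle for a scalar state.
    x, P: prior state estimate and variance; z: new measurement."""
    # Predict
    x_pred = A * x
    P_pred = A * P * A + Q
    # Update
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```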

  19. Numerical and experimental study of a high port-density WDM optical packet switch architecture for data centers.

    PubMed

    Di Lucente, S; Luo, J; Centelles, R Pueyo; Rohit, A; Zou, S; Williams, K A; Dorren, H J S; Calabretta, N

    2013-01-14

    Data centers have to sustain the rapid growth of data traffic due to the increasing demand for bandwidth-hungry internet services. The current intra-data-center fat-tree topology causes communication bottlenecks in the server interaction process and power-hungry O-E-O conversions that limit the minimum latency and the power efficiency of these systems. In this paper we numerically and experimentally investigate an optical packet switch architecture with a modular structure and highly distributed control that allows configuration times in the order of nanoseconds. Numerical results indicate that the candidate architecture, scaled to over 4000 ports, provides an overall throughput over 50 Tb/s and a packet loss rate below 10⁻⁶ while assuring sub-microsecond latency. We present experimental results that demonstrate the feasibility of a 16×16 optical packet switch based on parallel 1×4 integrated optical cross-connect modules. Error-free operation can be achieved with 4 dB penalty while the overall energy consumption is 66 pJ/b. Based on these results, we discuss the feasibility of scaling the architecture to a much larger port count.

  20. Determination of the structural properties of the aqueous electrolyte LiCl·6H2O at the supercooled state using the Reverse Monte Carlo (RMC) simulation

    NASA Astrophysics Data System (ADS)

    ZIANE, M.; HABCHI, M.; DEROUICHE, A.; MESLI, S. M.; BENZOUINE, F.; KOTBI, M.

    2017-03-01

    A structural study is presented of an aqueous electrolyte for which experimental results are available: a solution of LiCl·6H2O at the supercooled state (162 K), contrasted with pure water at room temperature, by means of Partial Distribution Functions (PDF) obtained from the neutron scattering technique. The aqueous electrolyte solution of lithium chloride (LiCl) presents interesting properties and has been studied by different methods at different concentrations and thermodynamical states: this system possesses the property of becoming a glass through a metastable supercooled state when the temperature decreases. Based on these partial functions, the Reverse Monte Carlo (RMC) method computes radial correlation functions which allow exploring a number of structural features of the system. The purpose of the RMC is to produce a configuration consistent with the experimental data within the limits of their systematic errors (of unknown distribution).

  1. Reduced Error-Related Activation in Two Anterior Cingulate Circuits Is Related to Impaired Performance in Schizophrenia

    ERIC Educational Resources Information Center

    Polli, Frida E.; Barton, Jason J. S.; Thakkar, Katharine N.; Greve, Douglas N.; Goff, Donald C.; Rauch, Scott L.; Manoach, Dara S.

    2008-01-01

    To perform well on any challenging task, it is necessary to evaluate your performance so that you can learn from errors. Recent theoretical and experimental work suggests that the neural sequelae of error commission in a dorsal anterior cingulate circuit index a type of contingency- or reinforcement-based learning, while activation in a rostral…

  2. Suppression of vapor cell temperature error for spin-exchange-relaxation-free magnetometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng

    2015-08-15

    This paper presents a method to reduce the vapor cell temperature error of the spin-exchange-relaxation-free (SERF) magnetometer. The fluctuation of cell temperature can induce variations of the optical rotation angle, resulting in a scale factor error of the SERF magnetometer. In order to suppress this error, we employ the variation of the probe beam absorption to offset the variation of the optical rotation angle. The theoretical discussion of our method indicates that the scale factor error introduced by the fluctuation of the cell temperature could be suppressed by setting the optical depth close to one. In our experiment, we adjust the probe frequency to obtain various optical depths and then measure the variation of scale factor with respect to the corresponding cell temperature changes. Our experimental results show a good agreement with our theoretical analysis. Under our experimental condition, the error has been reduced significantly compared with those when the probe wavelength is adjusted to maximize the probe signal. The cost of this method is the reduction of the scale factor of the magnetometer. However, according to our analysis, it only has a minor effect on the sensitivity under proper operating parameters.

  3. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
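
    The quoted disparity error translates into down-range error via the standard stereo relation dZ ≈ Z²·dd/(f·B); the sketch below applies it with placeholder camera parameters, since the analyzed system's focal length and baseline are not given here.

```python
def downrange_error(Z, baseline, focal_px, disparity_err_px):
    """Z: range [m]; baseline [m]; focal length [px]; disparity error [px]."""
    return Z ** 2 * disparity_err_px / (focal_px * baseline)

# e.g. 5 m range, 0.3 m baseline, 1000 px focal length, 0.32 px error:
print(downrange_error(5.0, 0.3, 1000.0, 0.32))   # ≈ 0.027 m down-range
```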

  4. Measurement method of rotation angle and clearance in intelligent spherical hinge

    NASA Astrophysics Data System (ADS)

    Hu, Penghao; Lu, Yichang; Chen, Shiyi; Hu, Yi; Zhu, Lianqing

    2018-06-01

    Precision ball hinges are widely applied in parallel mechanisms, robotics, and other areas, but their rotation orientation and angle cannot be obtained during passive motion. The simultaneous clearance error in a precision ball hinge’s motion also cannot be determined. In this paper we propose an intelligent ball hinge (IBH) that can detect the rotation angle and moving clearance, based on our previous research results. The measurement model was optimized to promote measurement accuracy and resolution, and an optimal design for the IBH’s structure was determined. The experimental data showed that the measurement accuracy and resolution of the modified scheme were improved. Within ±10° and ±20°, the average errors of the uniaxial measurements were 0.29° and 0.42°, respectively. The resolution of the measurements was 15″. The source of the measurement errors was analyzed through theory and experimental data, and several key error sources were determined. A point capacitance model for measuring the clearance error is proposed, which is useful not only in compensating for the angle measurement error but also in realizing the motion clearance of an IBH in real time.

  5. Gene Profiling in Experimental Models of Eye Growth: Clues to Myopia Pathogenesis

    PubMed Central

    Stone, Richard A.; Khurana, Tejvir S.

    2010-01-01

    To understand the complex regulatory pathways that underlie the development of refractive errors, expression profiling has evaluated gene expression in ocular tissues of well-characterized experimental models that alter postnatal eye growth and induce refractive errors. Derived from a variety of platforms (e.g. differential display, spotted microarrays or Affymetrix GeneChips), gene expression patterns are now being identified in species that include chicken, mouse and primate. Reconciling available results is hindered by varied experimental designs and analytical/statistical features. Continued application of these methods offers promise to provide the much-needed mechanistic framework to develop therapies to normalize refractive development in children. PMID:20363242

  6. Theoretical and experimental studies of error in square-law detector circuits

    NASA Technical Reports Server (NTRS)

    Stanley, W. D.; Hearn, C. P.; Williams, J. B.

    1984-01-01

    Square-law detector circuits were investigated to determine errors relative to the ideal input/output characteristic function. The nonlinear circuit response is analyzed by a power series expansion containing terms through the fourth degree, from which the significant deviation from square law can be predicted. Both fixed bias current and flexible bias current configurations are considered. The latter case corresponds to the situation where the mean current can change with the application of a signal. Experimental investigations of the circuit arrangements are described. Agreement between the analytical models and the experimental results is established. Factors which contribute to differences under certain conditions are outlined.

  7. An evaluation of programmed treatment-integrity errors during discrete-trial instruction.

    PubMed

    Carroll, Regina A; Kodak, Tiffany; Fisher, Wayne W

    2013-01-01

    This study evaluated the effects of programmed treatment-integrity errors on skill acquisition for children with an autism spectrum disorder (ASD) during discrete-trial instruction (DTI). In Study 1, we identified common treatment-integrity errors that occur during academic instruction in schools. In Study 2, we simultaneously manipulated 3 integrity errors during DTI. In Study 3, we evaluated the effects of each of the 3 integrity errors separately on skill acquisition during DTI. Results showed that participants either demonstrated slower skill acquisition or did not acquire the target skills when instruction included treatment-integrity errors. © Society for the Experimental Analysis of Behavior.

  8. Composite Interval Mapping Based on Lattice Design for Error Control May Increase Power of Quantitative Trait Locus Detection.

    PubMed

    He, Jianbo; Li, Jijie; Huang, Zhongwen; Zhao, Tuanjie; Xing, Guangnan; Gai, Junyi; Guan, Rongzhan

    2015-01-01

    Experimental error control is very important in quantitative trait locus (QTL) mapping. Although numerous statistical methods have been developed for QTL mapping, a QTL detection model based on an appropriate experimental design that emphasizes error control has not been developed. Lattice design is very suitable for experiments with large sample sizes, which is usually required for accurate mapping of quantitative traits. However, the lack of a QTL mapping method based on lattice design dictates that the arithmetic mean or adjusted mean of each line of observations in the lattice design had to be used as a response variable, resulting in low QTL detection power. As an improvement, we developed a QTL mapping method termed composite interval mapping based on lattice design (CIMLD). In the lattice design, experimental errors are decomposed into random errors and block-within-replication errors. Four levels of block-within-replication errors were simulated to show the power of QTL detection under different error controls. The simulation results showed that the arithmetic mean method, which is equivalent to a method under random complete block design (RCBD), was very sensitive to the size of the block variance and with the increase of block variance, the power of QTL detection decreased from 51.3% to 9.4%. In contrast to the RCBD method, the power of CIMLD and the adjusted mean method did not change for different block variances. The CIMLD method showed 1.2- to 7.6-fold higher power of QTL detection than the arithmetic or adjusted mean methods. Our proposed method was applied to real soybean (Glycine max) data as an example and 10 QTLs for biomass were identified that explained 65.87% of the phenotypic variation, while only three and two QTLs were identified by arithmetic and adjusted mean methods, respectively.

  9. Acoustic evidence for phonologically mismatched speech errors.

    PubMed

    Gormley, Andrea

    2015-04-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated, or mismatch, errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. This type of error could arise during the processing of phonological rules, or it could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.

  10. Metrics for Business Process Models

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.

  11. Optical surface pressure measurements: Accuracy and application field evaluation

    NASA Astrophysics Data System (ADS)

    Bukov, A.; Mosharov, V.; Orlov, A.; Pesetsky, V.; Radchenko, V.; Phonov, S.; Matyash, S.; Kuzmin, M.; Sadovskii, N.

    1994-07-01

    Optical pressure measurement (OPM) is a new pressure measurement method being rapidly developed in several aerodynamic research centers: TsAGI (Russia), Boeing, NASA, McDonnell Douglas (all USA), and DLR (Germany). The present level of the OPM method allows its use as a standard experimental method for aerodynamic investigations within certain application fields. The applications of the OPM method are determined mainly by its accuracy. The accuracy of the OPM method is determined by errors of the three following groups: (1) errors of the luminescent pressure sensor (LPS) itself, such as uncompensated temperature influence, photodegradation, temperature and pressure hysteresis, variation of the LPS parameters from point to point on the model surface, etc.; (2) errors of the measurement system, such as noise of the photodetector, nonlinearity and nonuniformity of the photodetector, time and temperature offsets, etc.; and (3) methodological errors, owing to displacement and deformation of the model in an airflow, contamination of the model surface, scattering of the excitation and luminescent light from the model surface and test section walls, etc. The OPM method currently yields a total error in measured pressure of no less than 1 percent. This accuracy is enough to visualize the pressure field and allows determining total and distributed aerodynamic loads and solving some problems of local aerodynamic investigations at transonic and supersonic velocities. OPM is less effective at low subsonic velocities (M less than 0.4) and for precise measurements such as airfoil optimization. Current limitations of the OPM method are discussed using the example of surface pressure measurements and calculations of the integral loads on the wings of a canard-aircraft model. The pressure measurement system and data reduction methods used in these tests are also described.

  12. Polarizabilities and hyperpolarizabilities for the atoms Al, Si, P, S, Cl, and Ar: Coupled cluster calculations.

    PubMed

    Lupinetti, Concetta; Thakkar, Ajit J

    2005-01-22

    Accurate static dipole polarizabilities and hyperpolarizabilities are calculated for the ground states of the Al, Si, P, S, Cl, and Ar atoms. The finite-field computations use energies obtained with various ab initio methods including Møller-Plesset perturbation theory and the coupled cluster approach. Excellent agreement with experiment is found for argon. The experimental α for Al is likely to be in error. Only limited comparisons are possible for the other atoms because hyperpolarizabilities have not been reported previously for most of these atoms. Our recommended values of the mean dipole polarizability (in the order Al-Ar) are α/e²a₀²E_h⁻¹ = 57.74, 37.17, 24.93, 19.37, 14.57, and 11.085 with an error estimate of ±0.5%. The recommended values of the mean second dipole hyperpolarizability (in the order Al-Ar) are γ/e⁴a₀⁴E_h⁻³ = 2.02 × 10⁵, 4.31 × 10⁴, 1.14 × 10⁴, 6.51 × 10³, 2.73 × 10³, and 1.18 × 10³ with an error estimate of ±2%. Our recommended polarizability anisotropy values are Δα/e²a₀²E_h⁻¹ = -25.60, 8.41, -3.63, and 1.71 for Al, Si, S, and Cl, respectively, with an error estimate of ±1%. The recommended hyperpolarizability anisotropies are Δγ/e⁴a₀⁴E_h⁻³ = -3.88 × 10⁵, 4.16 × 10⁴, -7.00 × 10³, and 1.65 × 10³ for Al, Si, S, and Cl, respectively, with an error estimate of ±4%. (c) 2005 American Institute of Physics.

  13. Achievable accuracy of hip screw holding power estimation by insertion torque measurement.

    PubMed

    Erani, Paolo; Baleani, Massimiliano

    2018-02-01

    To ensure stability of proximal femoral fractures, the hip screw must firmly engage into the femoral head. Some studies suggested that screw holding power in trabecular bone could be evaluated, intraoperatively, through measurement of screw insertion torque. However, those studies used synthetic bone, instead of trabecular bone, as the host material, or they did not evaluate the accuracy of predictions. We determined prediction accuracy, also assessing the impact of screw design and host material. We measured, under highly repeatable experimental conditions, disregarding clinical procedure complexities, the insertion torque and pullout strength of four screw designs, in both 120 synthetic and 80 trabecular bone specimens of variable density. For both host materials, we calculated the root-mean-square error and the mean-absolute-percentage error of predictions based on the best-fitting model of torque-pullout data, in both single-screw and merged datasets. Predictions based on screw-specific regression models were the most accurate. Host material impacts prediction accuracy: the replacement of synthetic with trabecular bone decreased both root-mean-square errors, from 0.54-0.76 kN to 0.21-0.40 kN, and mean-absolute-percentage errors, from 14-21% to 10-12%. However, holding power predicted from low insertion torque remained inaccurate, with errors up to 40% for torques below 1 Nm. In poor-quality trabecular bone, tissue inhomogeneities likely affect pullout strength and insertion torque to different extents, limiting the predictive power of the latter. This bias decreases when the screw engages good-quality bone. Under this condition, predictions become more accurate, although this result must be confirmed by close in-vitro simulation of the clinical procedure. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Intertester agreement in refractive error measurements.

    PubMed

    Huang, Jiayan; Maguire, Maureen G; Ciner, Elise; Kulp, Marjean T; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Ying, Gui-Shuang

    2013-10-01

    To determine the intertester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor and the SureSight Vision Screener. Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3 to 5 years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Intertester agreement between lay and nurse screeners was assessed for sphere, cylinder, and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean intertester difference (lay minus nurse) was compared between groups defined based on the child's age, cycloplegic refractive error, and the reading's confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Intereye correlation was accounted for in all analyses. The mean intertester differences (95% limits of agreement) were -0.04 (-1.63, 1.54) diopter (D) sphere, 0.00 (-0.52, 0.51) D cylinder, and -0.04 (-1.65, 1.56) D SE for the Retinomax and 0.05 (-1.48, 1.58) D sphere, 0.01 (-0.58, 0.60) D cylinder, and 0.06 (-1.45, 1.57) D SE for the SureSight. For either instrument, the mean intertester differences in sphere and SE did not differ by the child's age, cycloplegic refractive error, or the reading's confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading's confidence number was below the manufacturer's recommended value. Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar intertester agreement in refractive error measurements independent of the child's age. Significant refractive error and a reading with low confidence number were associated with worse intertester agreement.
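
    The agreement statistics above follow the usual Bland-Altman recipe, sketched here on invented paired measurements: the mean difference plus and minus 1.96 standard deviations gives the 95% limits of agreement.

```python
import numpy as np

def limits_of_agreement(a, b):
    """a, b: paired measurements (e.g. lay vs nurse readings, diopters)."""
    d = a - b
    mean_diff = d.mean()
    sd = d.std(ddof=1)
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

rng = np.random.default_rng(4)
lay = rng.normal(1.0, 1.0, 200)            # invented sphere readings (D)
nurse = lay + rng.normal(0.0, 0.8, 200)    # second tester with noise
print(limits_of_agreement(lay, nurse))
```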

  15. Analyzing the errors of DFT approximations for compressed water systems

    NASA Astrophysics Data System (ADS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-07-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.

  16. Analyzing the errors of DFT approximations for compressed water systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfè, D.; London Centre for Nanotechnology, UCL, London WC1H 0AH; Thomas Young Centre, UCL, London WC1H 0AH

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm{sup 3} where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE{sub h} ≃ 15 meV/monomer for the liquid and the clusters.

  17. Influence of non-ideal performance of lasers on displacement precision in single-grating heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Guochao; Xie, Xuedong; Yan, Shuhua

    2010-10-01

    The principle of a dual-wavelength single-grating nanometer displacement measuring system with long range, high precision, and good stability is presented. For nano-level high-precision displacement measurement, the errors caused by a variety of adverse factors must be taken into account. In this paper, errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the nonlinear error caused by elliptic polarization of the laser, are discussed and analyzed. On the basis of theoretical modeling, the corresponding error formulas are derived as well. Through simulation, the limit value of the linear error caused by wavelength instability is 2 nm, and on the assumption that Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the nonlinear error caused by elliptic polarization are 1.49 nm, 2.99 nm, and 4.49 nm when the non-orthogonal angle is 1°, 2°, and 3°, respectively. The law of the error change is analyzed based on different values of Tx and Ty.

  18. [Efficacy of motivational interviewing for reducing medication errors in chronic patients over 65 years with polypharmacy: Results of a cluster randomized trial].

    PubMed

    Pérula de Torres, Luis Angel; Pulido Ortega, Laura; Pérula de Torres, Carlos; González Lama, Jesús; Olaya Caro, Inmaculada; Ruiz Moral, Roger

    2014-10-21

    To evaluate the effectiveness of an intervention based on motivational interviewing to reduce medication errors in chronic patients over 65 with polypharmacy. Cluster randomized trial that included doctors and nurses of 16 Primary Care centers and chronic patients over 65 years with polypharmacy. The professionals were assigned to the experimental or the control group using stratified randomization. Interventions consisted of training of professionals and revision of patient treatments, with motivational interviewing applied in the experimental group and the usual approach in the control group. The primary endpoint (medication error) was analyzed at the individual level, and was estimated with the absolute risk reduction (ARR), relative risk reduction (RRR), number needed to treat (NNT) and by multiple logistic regression analysis. Thirty-two professionals were randomized (19 doctors and 13 nurses); 27 of them recruited 154 patients consecutively (13 professionals in the experimental group recruited 70 patients and 14 professionals in the control group recruited 84 patients) and completed 6 months of follow-up. The mean age of patients was 76 years (68.8% women). A decrease in the average number of medication errors was observed over the period. The reduction was greater in the experimental than in the control group (F=5.109, P=.035): ARR 29% (95% confidence interval [95% CI] 15.0-43.0%), RRR 0.59 (95% CI 0.31-0.76), and NNT 3.5 (95% CI 2.3-6.8). Motivational interviewing is more efficient than the usual approach for reducing medication errors in patients over 65 with polypharmacy. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
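
    The reported effect sizes follow from standard definitions, reproduced below with the per-group error rates back-calculated from ARR and RRR (treated here as illustrative, since the abstract does not quote them directly).

```python
control_rate = 0.49        # implied: ARR / RRR = 0.29 / 0.59
experimental_rate = 0.20   # implied: control_rate - ARR

arr = control_rate - experimental_rate   # absolute risk reduction = 0.29
rrr = arr / control_rate                 # relative risk reduction ≈ 0.59
nnt = 1 / arr                            # number needed to treat ≈ 3.5
```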

  19. A Quatro-Based 65-nm Flip-Flop Circuit for Soft-Error Resilience

    NASA Astrophysics Data System (ADS)

    Li, Y.-Q.; Wang, H.-B.; Liu, R.; Chen, L.; Nofal, I.; Shi, S.-T.; He, A.-L.; Guo, G.; Baeg, S. H.; Wen, S.-J.; Wong, R.; Chen, M.; Wu, Q.

    2017-06-01

    A flip-flop circuit hardened against soft errors is presented in this paper. This design is an improved version of Quatro, with further enhanced soft-error resilience obtained by integrating the guard-gate technique. The proposed design, as well as a reference Quatro and regular flip-flops, was implemented and manufactured in a 65-nm CMOS bulk technology. Experimental characterization of their alpha and heavy-ion soft-error rates verified the superior hardening performance of the proposed design over the other two circuits.

  20. [Statistical Process Control (SPC) can help prevent treatment errors without increasing costs in radiotherapy].

    PubMed

    Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C

    2010-01-01

    Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and the upper control limits were then calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using subgroups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified, and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set-up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
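
    A minimal X-bar chart in the spirit of the protocol above, assuming subgroups of 3 set-up error measurements and the textbook control-chart constant A2 = 1.023 for n = 3; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
subgroups = rng.normal(0.0, 2.0, size=(30, 3))   # mm set-up errors, 30 subgroups

xbar = subgroups.mean(axis=1)                    # subgroup means
rbar = np.ptp(subgroups, axis=1).mean()          # mean subgroup range
A2 = 1.023                                       # X-bar chart constant for n = 3
center = xbar.mean()
ucl, lcl = center + A2 * rbar, center - A2 * rbar

out_of_control = np.where((xbar > ucl) | (xbar < lcl))[0]
```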

  1. Heisenberg's error-disturbance relations: A joint measurement-based experimental test

    NASA Astrophysics Data System (ADS)

    Zhao, Yuan-Yuan; Kurzyński, Paweł; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can

    2017-04-01

    The original Heisenberg error-disturbance relation was recently shown to be not universally valid and two different approaches to reformulate it were proposed. The first one focuses on how the error and disturbance of two observables A and B depend on a particular quantum state. The second one asks how a joint measurement of A and B affects their eigenstates. Previous experiments focused on the first approach. Here we focus on the second one. First, we propose and implement an extendible method of quantum-walk-based joint measurements of noisy Pauli operators to test the error-disturbance relation for qubits introduced in the work of Busch et al. [Phys. Rev. A 89, 012129 (2014), 10.1103/PhysRevA.89.012129], where the polarization of the single photon, corresponding to a walker's auxiliary degree of freedom that is commonly known as a coin, undergoes a position- and time-dependent evolution. Then we formulate and experimentally test a universally valid state-dependent relation for three mutually unbiased observables. We therefore establish a method of testing error-disturbance relations.

  2. Strain gage measurement errors in the transient heating of structural components

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1993-01-01

    Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 C/sec (1 F/sec) to over 56 C/sec (100 F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating conditions greater than 6 C/sec (10 F/sec), the indicated strain data corrected with the developed technique compared much better to analysis than the same data corrected with the conventional technique.

  3. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. To evaluate the frequency and nature of non-clinical transcription errors using VR dictation software, we performed a retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72 %) reports contained ≥1 error, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', and 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was the most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and per sentence: computed tomography reports contained 0.040 errors per sentence compared to 0.030 for plain film. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences of error with potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.

  4. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    NASA Astrophysics Data System (ADS)

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

    In this paper, an effective bit-loading combined with adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software reconfigurable multiband UWB over fiber system. To compensate the power fading and chromatic dispersion for the high-frequency multiband OFDM UWB signal transmitted over standard single mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128 QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10⁻³ after 50 km SSMF transmission. The experimental results show that, compared to the fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128 QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).

  5. Pilot performance and workload using simulated GPS track angle error displays

    DOT National Transportation Integrated Search

    1995-01-01

    The effect on simulated GPS instrument approach performance and workload resulting from the addition of Track Angle Error (TAE) information to cockpit RNAV receiver displays in explicit analog form was studied experimentally (S display formats, 6 pil...

  6. Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, B. M.; Milligan, M.

    2011-03-01

    In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.
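
    A persistence forecast simply carries the last observation forward, so its error at horizon h is x(t+h) - x(t); a minimal sketch of the error-distribution comparison, with stand-in data rather than the ERCOT series:

        import numpy as np
        from scipy import stats

        def persistence_errors(power, horizon):
            """Errors of a persistence forecast at the given horizon (in samples)."""
            return power[horizon:] - power[:-horizon]

        rng = np.random.default_rng(0)
        power = rng.random(10_000)          # stand-in for a wind plant time series
        err = persistence_errors(power, horizon=6)
        print(stats.kurtosis(err))          # > 0 means heavier tails than a normal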

  7. Fault-tolerant quantum error detection

    PubMed Central

    Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher

    2017-01-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889

  8. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95 % confidence limits, and the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, the predicted series closely follows the original series, indicating a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
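
    A minimal sketch of this kind of ARIMA workflow using statsmodels (the order, series and hold-out split are placeholders, not the paper's fitted model):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        series = pd.Series(7.5 + np.cumsum(rng.normal(0, 0.05, 120)))  # stand-in for monthly pH
        fit = ARIMA(series.iloc[:108], order=(1, 1, 1)).fit()

        pred = fit.get_forecast(steps=12)
        rmse = np.sqrt(np.mean((series.iloc[108:].to_numpy()
                                - pred.predicted_mean.to_numpy()) ** 2))
        print(rmse, fit.bic)              # hold-out RMSE and the BIC
        print(pred.conf_int(alpha=0.05))  # 95% confidence limits on the forecast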

  9. LANDSAT/coastal processes

    NASA Technical Reports Server (NTRS)

    James, W. P. (Principal Investigator); Hill, J. M.; Bright, J. B.

    1977-01-01

    The author has identified the following significant results. Correlations between the satellite radiance values and water color, Secchi disk visibility, turbidity, and attenuation coefficients were generally good. The residual was due to several factors, including systematic errors in the remotely sensed data, small time and space variations in the water quality measurements, and errors caused by the experimental design. Satellite radiance values were closely correlated with the optical properties of the water.

  10. Particle image velocimetry measurements of Mach 3 turbulent boundary layers at low Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Brooks, J. M.; Gupta, A. K.; Smith, M. S.; Marineau, E. C.

    2018-05-01

    Particle image velocimetry (PIV) measurements of Mach 3 turbulent boundary layers (TBL) have been performed under low Reynolds number conditions, Re_τ = 200-1000, typical of direct numerical simulations (DNS). Three reservoir pressures and three measurement locations create an overlap in parameter space at one research facility. This allows us to assess the effects of Reynolds number, particle response and boundary layer thickness separately from facility-specific experimental apparatus or methods. The Morkovin-scaled streamwise fluctuating velocity profiles agree well with published experimental and numerical data and show a small standard deviation among the nine test conditions. The wall-normal fluctuating velocity profiles show larger variations, which appear to be due to particle lag. Prior to the current study, no detailed experimental study characterizing the effect of Stokes number on the attenuation of wall-normal fluctuating velocities had been performed. A linear variation is found between the Stokes number (St) and the relative error in wall-normal fluctuating velocity magnitude (compared to hot wire anemometry data from Klebanoff, Characteristics of Turbulence in a Boundary Layer with Zero Pressure Gradient, Tech. Rep. NACA-TR-1247, National Advisory Committee for Aeronautics, Springfield, Virginia, 1955). The relative error ranges from about 10% for St = 0.26 to over 50% for St = 1.06. Particle lag and spatial resolution are shown to act as low-pass filters on the fluctuating velocity power spectral densities, which limits the measurable energy content. The wall-normal component appears more susceptible to these effects due to its flatter spectrum, which indicates that there is additional energy at higher wave numbers not measured by PIV. The upstream inclination and spatial correlation extent of coherent turbulent structures agree well with published data, including those using krypton tagging velocimetry (KTV) performed at the same facility.

  11. Validation of High-Resolution CFD Method for Slosh Damping Extraction of Baffled Tanks

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2016-01-01

    Determination of slosh damping is a very challenging task as there is no analytical solution. The damping physics involve vorticity dissipation, which requires the full solution of the nonlinear Navier-Stokes equations. As a result, previous investigations and knowledge were mainly derived from extensive experimental studies. A Volume-Of-Fluid (VOF) based CFD program developed at NASA MSFC was applied to extract slosh damping in a baffled tank from first principles. First, experimental data using water in a subscale smooth-wall tank were used as the baseline validation. The CFD simulation was demonstrated to be capable of accurately predicting the natural frequency and the very low damping of the smooth-wall tank at different fill levels. The damping due to a ring baffle at liquid fill levels from the barrel section into the upper dome was then investigated to understand the slosh damping physics in the presence of a ring baffle. Based on this study, the root-mean-square error of our CFD simulation in estimating slosh damping was less than 4.8%, and the maximum error was less than 8.5%. The scalability of subscale baffled-tank tests using water was investigated with the validated CFD tool, and it was found that, unlike the smooth-wall case, slosh damping with a baffle is almost independent of the working fluid, so it is reasonable to apply water test data to the full-scale LOX tank when the damping from the baffle is dominant. For the smooth wall, on the other hand, the damping value must be scaled according to the Reynolds number. Experimental data and CFD were compared with the classical and modified Miles equations for the upper dome, and the limitations of these semi-empirical equations were identified.
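
    Slosh damping is commonly extracted from a free-decay time history via the logarithmic decrement; a minimal sketch of that standard post-processing (not necessarily the exact procedure used in the study):

        import numpy as np
        from scipy.signal import find_peaks

        def damping_ratio(decay_signal):
            """Estimate the damping ratio from successive peaks of a free-decay history."""
            peaks, _ = find_peaks(decay_signal)
            amps = decay_signal[peaks]
            delta = np.mean(np.log(amps[:-1] / amps[1:]))   # average logarithmic decrement
            return delta / np.sqrt(4 * np.pi**2 + delta**2)

        t = np.arange(0, 30, 0.01)
        wave = np.exp(-0.02 * 2 * np.pi * t) * np.cos(2 * np.pi * t)  # zeta = 0.02, f = 1 Hz
        print(damping_ratio(wave))   # recovers approximately 0.02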

  12. SKA weak lensing - III. Added value of multiwavelength synergies for the mitigation of systematics

    NASA Astrophysics Data System (ADS)

    Camera, Stefano; Harrison, Ian; Bonaldi, Anna; Brown, Michael L.

    2017-02-01

    In this third paper of a series on radio weak lensing for cosmology with the Square Kilometre Array, we scrutinize synergies between cosmic shear measurements in the radio and optical/near-infrared (IR) bands for mitigating systematic effects. We focus on three main classes of systematics: (i) experimental systematic errors in the observed shear; (ii) signal contamination by intrinsic alignments; and (iii) systematic effects due to an incorrect modelling of non-linear scales. First, we show that a comprehensive, multiwavelength analysis provides a self-calibration method for experimental systematic effects, implying only a <50 per cent increase in the errors on cosmological parameters. We also illustrate how the cross-correlation between radio and optical/near-IR surveys alone is able to remove residual systematics with variance as large as 10⁻⁵, i.e. the same order of magnitude as the cosmological signal. This also opens the possibility of using such a cross-correlation as a means to detect unknown experimental systematics. Second, we demonstrate that, thanks to polarization information, radio weak lensing surveys will be able to mitigate contamination by intrinsic alignments, in a way similar but fully complementary to available self-calibration methods based on position-shear correlations. Lastly, we illustrate how radio weak lensing experiments, reaching higher redshifts than those accessible to optical surveys, will probe dark energy and the growth of cosmic structures in regimes less contaminated by non-linearities in the matter perturbations. For instance, the higher redshift bins of radio catalogues peak at z ≃ 0.8-1, whereas their optical/near-IR counterparts are limited to z ≲ 0.5-0.7. This translates into a cosmological signal 2-5 times less contaminated by non-linear perturbations.

  13. Modeling shifts in the rate and pattern of subthalamopallidal network activity during deep brain stimulation.

    PubMed

    Hahn, Philip J; McIntyre, Cameron C

    2010-06-01

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) represents an effective treatment for medically refractory Parkinson's disease; however, understanding of its effects on basal ganglia network activity remains limited. We constructed a computational model of the subthalamopallidal network, trained it to fit in vivo recordings from parkinsonian monkeys, and evaluated its response to STN DBS. The network model was created with synaptically connected single-compartment biophysical models of STN and pallidal neurons, and stochastically defined inputs driven by cortical beta rhythms. A least mean square error training algorithm was developed to parameterize network connections and minimize the error against experimental spike and burst rates in the parkinsonian condition. The output of the trained network was then compared to experimental data not used in the training process. We found that reducing the influence of the cortical beta input on the model generated activity that agreed well with recordings from normal monkeys. Further, during STN DBS in the parkinsonian condition the simulations reproduced the reduction in GPi bursting found in existing experimental data. The model also provided the opportunity to greatly expand the analysis of GPi bursting activity, generating three major predictions. First, its reduction was proportional to the volume of STN activated by DBS. Second, GPi bursting decreased in a stimulation-frequency-dependent manner, saturating at values consistent with clinically therapeutic DBS. And third, ablating STN neurons, reported to generate therapeutic outcomes similar to STN DBS, also reduced GPi bursting. Our theoretical analysis of stimulation-induced network activity suggests that regularization of GPi firing is dependent on the volume of STN tissue activated and that a threshold level of burst reduction may be necessary for therapeutic effect.
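
    The training loop described is, at heart, a least-mean-square update; a schematic sketch of generic LMS (not the authors' exact parameterization of synaptic connections):

        import numpy as np

        def lms_fit(features, target, mu=0.01, epochs=50):
            """Iteratively adjust weights to minimize mean squared error."""
            w = np.zeros(features.shape[1])
            for _ in range(epochs):
                for x, y in zip(features, target):
                    err = y - w @ x      # error against the experimental rate
                    w += mu * err * x    # gradient step on the squared error
            return w

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        w_true = np.array([0.5, -1.0, 2.0])
        print(lms_fit(X, X @ w_true + 0.01 * rng.normal(size=200)))  # approaches w_true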

  14. Photometric method for determination of acidity constants through integral spectra analysis

    NASA Astrophysics Data System (ADS)

    Zevatskiy, Yuriy Eduardovich; Ruzanov, Daniil Olegovich; Samoylov, Denis Vladimirovich

    2015-04-01

    An express method for the determination of acidity constants of organic acids, based on the analysis of the integral transmittance vs. pH dependence, is developed. The integral value is registered as the photocurrent of a photometric device simultaneously with potentiometric titration. The proposed method allows pKa to be obtained using only simple and low-cost instrumentation. The optical part of the experimental setup has been simplified by excluding the monochromator. Thus it takes only 10-15 min to obtain one pKa value, with an absolute error of less than 0.15 pH units. The application limitations and reliability of the method have been tested for a series of organic acids of various nature.
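
    For a monoprotic acid, the integral transmittance traces a Henderson-Hasselbalch sigmoid in pH, so pKa can be fitted directly; a minimal sketch (the two-state model and names are our assumptions, not the authors' exact treatment):

        import numpy as np
        from scipy.optimize import curve_fit

        def transmittance(ph, t_acid, t_base, pka):
            """Two-state mixing of acid/base transmittance via Henderson-Hasselbalch."""
            frac_base = 1.0 / (1.0 + 10.0 ** (pka - ph))
            return t_acid + (t_base - t_acid) * frac_base

        ph = np.linspace(2, 9, 40)
        obs = transmittance(ph, 0.20, 0.80, 4.76) \
              + np.random.default_rng(2).normal(0, 0.01, ph.size)
        popt, pcov = curve_fit(transmittance, ph, obs, p0=(0.1, 0.9, 5.0))
        print(popt[2], np.sqrt(pcov[2, 2]))   # fitted pKa and its standard error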

  15. Magnetically confined electron beam system for high resolution electron transmission-beam experiments

    NASA Astrophysics Data System (ADS)

    Lozano, A. I.; Oller, J. C.; Krupa, K.; Ferreira da Silva, F.; Limão-Vieira, P.; Blanco, F.; Muñoz, A.; Colmenares, R.; García, G.

    2018-06-01

    A novel experimental setup has been implemented to provide accurate electron scattering cross sections for molecules at low and intermediate impact energies (1-300 eV) by measuring the attenuation of a magnetically confined linear electron beam by a molecular target. High electron energy resolution is achieved through confinement in a magnetic gas trap where electrons are cooled by successive collisions with N2. Additionally, we developed and present a method to correct systematic errors arising from energy and angular resolution limitations. The accuracy of the entire measurement procedure is validated by comparing the N2 total scattering cross section over the considered energy range with benchmark values available in the literature.
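
    In an attenuation measurement the total cross section follows from the Beer-Lambert law I = I0 exp(-n sigma L); a minimal sketch, with the target number density taken from the ideal-gas law (symbols and example numbers are illustrative only):

        import numpy as np

        K_B = 1.380649e-23  # Boltzmann constant, J/K

        def total_cross_section(i0, i, pressure_pa, temp_k, length_m):
            """Total scattering cross section from beam attenuation (Beer-Lambert)."""
            n = pressure_pa / (K_B * temp_k)        # target number density, m^-3
            return np.log(i0 / i) / (n * length_m)  # sigma in m^2

        # e.g. 30% of the beam transmitted through 0.2 m of gas at 0.5 Pa, 300 K
        print(total_cross_section(1.0, 0.3, 0.5, 300.0, 0.2))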

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Hanfei; Huang, Xiaojing; Bouet, Nathalie

    In this article, we discuss misalignment-induced aberrations in a pair of crossed multilayer Laue lenses used for achieving a nanometer-scale x-ray point focus. We thoroughly investigate the impacts of the two most important contributions, the orthogonality and the separation distance between the two lenses. We find that misalignment in the orthogonality results in astigmatism at 45° and other inclination angles when coupled with a separation distance error. Theoretical explanation and experimental verification are provided. We show that to achieve a diffraction-limited point focus, accurate alignment of the azimuthal angle is required to ensure orthogonality between the two lenses, and the required accuracy scales with the ratio of the focus size to the aperture size.

  17. A variable-step-size robust delta modulator.

    NASA Technical Reports Server (NTRS)

    Song, C. L.; Garodnick, J.; Schilling, D. L.

    1971-01-01

    Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.
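
    A variable-step-size delta modulator transmits one bit per sample and adapts its step from the recent bit history; a minimal sketch in the spirit of the Abate-style comparison system (a simpler doubling/halving rule, not the optimum two-sample law derived in the paper):

        import numpy as np

        def adaptive_delta_modulate(x, step0=0.1, step_min=0.01, step_max=1.0):
            """Encode x as +/-1 bits; grow the step on slope overload, shrink it otherwise."""
            est, step, prev = 0.0, step0, 1
            bits, recon = [], np.empty_like(x)
            for i, sample in enumerate(x):
                bit = 1 if sample >= est else -1
                # consecutive equal bits indicate slope overload -> enlarge the step
                step = min(step * 2, step_max) if bit == prev else max(step / 2, step_min)
                est += bit * step
                bits.append(bit)
                recon[i] = est
                prev = bit
            return np.array(bits), recon

        t = np.linspace(0, 1, 500)
        bits, recon = adaptive_delta_modulate(np.sin(2 * np.pi * 3 * t))
        print(np.mean((np.sin(2 * np.pi * 3 * t) - recon) ** 2))  # mean square error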

  18. Mental workload prediction based on attentional resource allocation and information processing.

    PubMed

    Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin

    2015-01-01

    Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.

  19. Analytical performance study of solar blind non-line-of-sight ultraviolet short-range communication links.

    PubMed

    Xu, Zhengyuan; Ding, Haipeng; Sadler, Brian M; Chen, Gang

    2008-08-15

    Motivated by recent advances in solid-state incoherent ultraviolet sources and solar blind detectors, we study communication link performance over a range of less than 1 km with a bit error rate (BER) below 10⁻³ in a solar blind non-line-of-sight situation. The widely adopted yet complex single-scattering channel model is significantly simplified by means of a closed-form expression for tractable analysis. Path loss is given as a function of transceiver geometry as well as atmospheric scattering and attenuation, and is compared with experimental data for model validation. The BER performance of a shot-noise-limited receiver under this channel model is demonstrated.

  20. Communication: Limitations of the stochastic quasi-steady-state approximation in open biochemical reaction networks

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Straube, Arthur V.; Grima, Ramon

    2011-11-01

    It is commonly believed that, whenever timescale separation holds, the predictions of reduced chemical master equations obtained using the stochastic quasi-steady-state approximation are in very good agreement with the predictions of the full master equations. We use the linear noise approximation to obtain a simple formula for the relative error between the predictions of the two master equations for the Michaelis-Menten reaction with substrate input. The reduced approach is predicted to overestimate the variance of the substrate concentration fluctuations by as much as 30%. The theoretical results are validated by stochastic simulations using experimental parameter values for enzymes involved in proteolysis, gluconeogenesis, and fermentation.

  1. General ultrafast pulse measurement using the cross-correlation single-shot sonogram technique.

    PubMed

    Reid, Derryck T; Garduno-Mejia, Jesus

    2004-03-15

    The cross-correlation single-shot sonogram technique offers exact pulse measurement and real-time pulse monitoring via an intuitive time-frequency trace whose shape and orientation directly indicate the spectral chirp of an ultrashort laser pulse. We demonstrate an algorithm that solves a fundamental limitation of the cross-correlation sonogram method, namely, that the time-gating operation is implemented using a replica of the measured pulse rather than the ideal delta-function-like pulse. Using a modified principal-components generalized projections algorithm, we experimentally show accurate pulse retrieval of an asymmetric double pulse, a case that is prone to systematic error when one is using the original sonogram retrieval algorithm.

  2. Determination of heat capacity of ionic liquid based nanofluids using group method of data handling technique

    NASA Astrophysics Data System (ADS)

    Sadi, Maryam

    2018-01-01

    In this study a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids, with the reduced temperature, acentric factor and molecular weight of the ionic liquids, and the nanoparticle concentration as input parameters. To accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to estimate the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model have been evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicates excellent agreement between model predictions and experimental data. Also, the results estimated by the developed GMDH model exhibit a higher accuracy when compared to the available theoretical correlations.

  3. Experimental characterization of a 400 Gbit/s orbital angular momentum multiplexed free-space optical link over 120 m.

    PubMed

    Ren, Yongxiong; Wang, Zhe; Liao, Peicheng; Li, Long; Xie, Guodong; Huang, Hao; Zhao, Zhe; Yan, Yan; Ahmed, Nisar; Willner, Asher; Lavery, Martin P J; Ashrafi, Nima; Ashrafi, Solyman; Bock, Robert; Tur, Moshe; Djordjevic, Ivan B; Neifeld, Mark A; Willner, Alan E

    2016-02-01

    We experimentally demonstrate and characterize the performance of a 400-Gbit/s orbital angular momentum (OAM) multiplexed free-space optical link over 120 m on the roof of a building. Four OAM beams, each carrying a 100-Gbit/s quadrature-phase-shift-keyed channel, are multiplexed and transmitted. We investigate the influence of channel impairments on the received power, intermodal crosstalk among channels, and system power penalties. Without laser tracking and compensation systems, the measured received power and crosstalk among OAM channels fluctuate by 4.5 dB and 5 dB, respectively, over 180 s. For a beam displacement of 2 mm, which corresponds to a pointing error of less than 16.7 μrad, the link bit error rates are below the forward error correction threshold of 3.8 × 10⁻³ for all channels. Both experimental and simulation results show that power penalties increase rapidly as the displacement increases.

  4. Neurochemical enhancement of conscious error awareness.

    PubMed

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  5. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  6. Estimation and validation of the stability and control derivatives of the nonlinear dynamic model of a fixed-wing drone (Estimation et validation des derivees de stabilite et controle du modele dynamique non-lineaire d'un drone a voilure fixe)

    NASA Astrophysics Data System (ADS)

    Courchesne, Samuel

    Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic features of a flight mechanics model include the mass and inertia properties and the major aerodynamic terms. Obtaining them involves a complex process combining various numerical analysis techniques and experimental procedures. This thesis focuses on estimation techniques applied to the problem of estimating stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate tasks from multidisciplinary fields, such as modeling, parameter estimation, instrumentation, the definition of flight maneuvers, and validation. The system under study is a nonlinear six-degree-of-freedom model with a linear aerodynamic model. Time-domain techniques are used for the identification of the drone. First, the equation error method is used to determine the structure of the aerodynamic model. Thereafter, the output error method and the filter error method are used to estimate the values of the aerodynamic coefficients. Matlab parameter-estimation scripts obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. Considerable effort in this part of the research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions and with low atmospheric disturbance. Nevertheless, the identification results showed that the filter error method is the most effective for estimating the parameters of the drone, due to the presence of process and measurement noise. The aerodynamic coefficients are validated using a numerical analysis based on the vortex method. In addition, a simulation model incorporating the estimated parameters is used to compare the simulated behavior against the measured states. Finally, good agreement between the results is demonstrated despite the limited amount of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.

  7. Can a simple lumped parameter model simulate complex transit time distributions? Benchmarking experiments in a virtual watershed.

    NASA Astrophysics Data System (ADS)

    Wilusz, D. C.; Maxwell, R. M.; Buda, A. R.; Ball, W. P.; Harman, C. J.

    2016-12-01

    The catchment transit-time distribution (TTD) is the time-varying, probabilistic distribution of water travel times through a watershed. The TTD is increasingly recognized as a useful descriptor of a catchment's flow and transport processes. However, TTDs are temporally complex and cannot be observed directly at watershed scale. Estimates of TTDs depend on available environmental tracers (such as stable water isotopes) and an assumed model whose parameters can be inverted from tracer data. All tracers have limitations though, such as (typically) short periods of observation or non-conservative behavior. As a result, models that faithfully simulate tracer observations may nonetheless yield TTD estimates with significant errors at certain times and water ages, conditioned on the tracer data available and the model structure. Recent advances have shown that time-varying catchment TTDs can be parsimoniously modeled by the lumped parameter rank StorAge Selection (rSAS) model, in which an rSAS function relates the distribution of water ages in outflows to the composition of age-ranked water in storage. Like other TTD models, rSAS is calibrated and evaluated against environmental tracer data, and the relative influence of tracer-dependent and model-dependent error on its TTD estimates is poorly understood. The purpose of this study is to benchmark the ability of different rSAS formulations to simulate TTDs in a complex, synthetic watershed where the lumped model can be calibrated and directly compared to a virtually "true" TTD. This experimental design allows for isolation of model-dependent error from tracer-dependent error. The integrated hydrologic model ParFlow with SLIM-FAST particle tracking code is used to simulate the watershed and its true TTD. To add field intelligence, the ParFlow model is populated with over forty years of hydrometric and physiographic data from the WE-38 subwatershed of the USDA's Mahantango Creek experimental catchment in PA, USA. The results are intended to give practical insight into tradeoffs between rSAS model structure and skill, and define a new performance benchmark to which other transit time models can be compared.

  8. A Model of Self-Monitoring Blood Glucose Measurement Error.

    PubMed

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity make it unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error; zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows realistic models of the SMBG error PDF to be derived. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
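
    A minimal sketch of the zone-wise maximum-likelihood fit described, using scipy's skew-normal distribution (the data, zone boundary and parameters are placeholders, not values from the study):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        ref = rng.uniform(40, 400, 2000)                  # reference blood glucose, mg/dl
        meas = ref + stats.skewnorm.rvs(3, scale=8, size=ref.size, random_state=4)

        zone1 = ref < 75                                  # assumed zone boundary
        abs_err = (meas - ref)[zone1]                     # zone 1: constant-SD absolute error
        rel_err = ((meas - ref) / ref)[~zone1]            # zone 2: constant-SD relative error

        a, loc, scale = stats.skewnorm.fit(abs_err)       # maximum-likelihood fit
        gof = stats.kstest(abs_err, 'skewnorm', args=(a, loc, scale))
        print(a, loc, scale, gof.pvalue)                  # parameters and goodness-of-fit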

  9. Fluid dynamic design and experimental study of an aspirated temperature measurement platform used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei; Ding, Renhui

    2016-08-01

    Due to the solar radiation effect, current air temperature sensors inside a thermometer screen or radiation shield may produce measurement errors of 0.8 °C or higher. To improve the observation accuracy, an aspirated temperature measurement platform is designed. A computational fluid dynamics (CFD) method is implemented to analyze and calculate the radiation error of the aspirated temperature measurement platform under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using a genetic algorithm (GA) method. In order to verify the performance of the temperature sensor, the aspirated temperature measurement platform, temperature sensors with a naturally ventilated radiation shield, and a thermometer screen are characterized in the same environment to conduct the intercomparison. The average radiation errors of the sensors in the naturally ventilated radiation shield and the thermometer screen are 0.44 °C and 0.25 °C, respectively. In contrast, the radiation error of the aspirated temperature measurement platform is as low as 0.05 °C. The aspirated temperature sensor thus reduces the radiation error by approximately 88.6% compared to the naturally ventilated radiation shield and by approximately 80% compared to the thermometer screen. The mean absolute error and root mean square error between the correction equation and the experimental results are 0.032 °C and 0.036 °C, respectively, which demonstrates the accuracy of the CFD and GA methods proposed in this research.

  10. Fluid dynamic design and experimental study of an aspirated temperature measurement platform used in climate observation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jie, E-mail: yangjie396768@163.com; School of Atmospheric Physics, Nanjing University of Information Science and Technology, Nanjing 210044; Liu, Qingquan

    Due to the solar radiation effect, current air temperature sensors inside a thermometer screen or radiation shield may produce measurement errors of 0.8 °C or higher. To improve the observation accuracy, an aspirated temperature measurement platform is designed. A computational fluid dynamics (CFD) method is implemented to analyze and calculate the radiation error of the aspirated temperature measurement platform under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using a genetic algorithm (GA) method. In order to verify the performance of the temperature sensor, the aspirated temperature measurement platform, temperature sensors with a naturally ventilated radiation shield, and a thermometer screen are characterized in the same environment to conduct the intercomparison. The average radiation errors of the sensors in the naturally ventilated radiation shield and the thermometer screen are 0.44 °C and 0.25 °C, respectively. In contrast, the radiation error of the aspirated temperature measurement platform is as low as 0.05 °C. The aspirated temperature sensor thus reduces the radiation error by approximately 88.6% compared to the naturally ventilated radiation shield and by approximately 80% compared to the thermometer screen. The mean absolute error and root mean square error between the correction equation and the experimental results are 0.032 °C and 0.036 °C, respectively, which demonstrates the accuracy of the CFD and GA methods proposed in this research.

  11. Nature of the refractive errors in rhesus monkeys (Macaca mulatta) with experimentally induced ametropias.

    PubMed

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L

    2010-08-23

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    PubMed Central

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  13. Autonomous Control Modes and Optimized Path Guidance for Shipboard Landing in High Sea States

    DTIC Science & Technology

    2015-11-16

    ...a degraded visual environment, workload during the landing task begins to approach the limits of a human pilot's capability. It is a similarly... [Figure residue from the source PDF: Figure 2, "Approach Trajectory" (flight path with ±4 ft, ±8 ft and ±12 ft landing-error bands); Figure 5, "Open loop system generation" (heave and yaw axes).]

  14. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    PubMed

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    In this study, we assessed the explicit reporting of medical errors in the electronic record. We looked for cases in which the provider explicitly stated that he or she or another provider had committed an error. The advantage of the technique is that it is not limited to a specific type of error. Our goals were to 1) measure the rate at which medical errors were documented in medical records, and 2) characterize the types of errors that were reported.
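
    A minimal sketch of the kind of keyword screen the study describes (the phrase list here is illustrative, not the study's actual query set):

        import re

        ERROR_PHRASES = [r"\bin error\b", r"\bmistakenly\b", r"\binadvertently\b",
                         r"\berroneously\b", r"\bmy (?:error|mistake)\b"]
        PATTERN = re.compile("|".join(ERROR_PHRASES), re.IGNORECASE)

        def flag_explicit_error(note_text):
            """Return the matched phrase if the note explicitly reports an error."""
            match = PATTERN.search(note_text)
            return match.group(0) if match else None

        print(flag_explicit_error("The left-sided films were mistakenly labeled right."))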

  15. Adaptive control system for pulsed megawatt klystrons

    DOEpatents

    Bolie, Victor W.

    1992-01-01

    The invention provides an arrangement for reducing waveform errors, such as errors in phase or amplitude, in output pulses produced by pulsed power output devices such as klystrons. An error voltage representing the extent of error still present in the trailing edge of the previous output pulse is generated and used to update a stored control voltage, and the stored control voltage is applied to the pulsed power output device to limit the extent of error in the leading edge of the next output pulse.
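
    The scheme amounts to a pulse-to-pulse (iterative learning) correction: the residual sampled on one pulse's trailing edge updates the control applied to the next pulse's leading edge. A schematic sketch of that loop (the gain and interfaces are hypothetical, not taken from the patent):

        def pulse_train_controller(measure_trailing_error, apply_control,
                                   n_pulses, gain=0.5):
            """Pulse-to-pulse correction: the trailing-edge error of each output
            pulse updates the stored control applied to the next pulse."""
            stored_control = 0.0
            for _ in range(n_pulses):
                apply_control(stored_control)    # shape the next leading edge
                err = measure_trailing_error()   # residual error on the trailing edge
                stored_control += gain * err     # drive the residual toward zero

        # toy plant: the trailing-edge error shrinks as control matches the disturbance
        state = {"u": 0.0}
        pulse_train_controller(lambda: 1.0 - state["u"],
                               lambda u: state.update(u=u), n_pulses=20)
        print(state["u"])   # approaches 1.0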

  16. Online and offline experimental techniques for polycyclic aromatic hydrocarbons recovery and measurement.

    PubMed

    Comandini, A; Malewicki, T; Brezinsky, K

    2012-03-01

    The implementation of techniques aimed at improving engine performance and reducing particulate matter (PM) pollutant emissions is strongly limited by the incomplete understanding of the polycyclic aromatic hydrocarbon (PAH) formation chemistry that produces PM emissions in combustion devices. New experimental results examining the formation of multi-ring compounds are required. The present investigation focuses on two techniques for such an experimental examination through the recovery of PAH compounds from a typical combustion-oriented experimental apparatus. The online technique discussed constitutes an optimal, but not always feasible, approach. Nevertheless, a detailed description of a new online sampling system is provided, which can serve as a reference for future applications to different experimental set-ups. In comparison, an offline technique, which is sometimes more experimentally feasible but not necessarily optimal, has been studied in detail for the recovery of a variety of compounds with different properties, including naphthalene, biphenyl, and iodobenzene. The recovery results from both techniques were excellent, with an error in the total carbon balance of around 10% for the online technique and an uncertainty in the measurement of single species of around 7% for the offline technique. Although both techniques proved suitable for the measurement of large PAH compounds, the online technique represents the optimal solution in view of the simplicity of the corresponding experimental procedure. On the other hand, the offline technique remains a valuable solution in those cases where the online technique cannot be implemented.

  17. Combustion Device Failures During Space Shuttle Main Engine Development

    NASA Technical Reports Server (NTRS)

    Goetz, Otto K.; Monk, Jan C.

    2005-01-01

    Major causes: limited initial materials properties; limited structural models, especially for fatigue; limited thermal models; limited aerodynamic models; human errors; limited component testing; high pressure; and complicated control.

  18. The Use of Analog Track Angle Error Display for Improving Simulated GPS Approach Performance

    DOT National Transportation Integrated Search

    1995-08-01

    The effect of adding track angle error (TAE) information to general aviation aircraft cockpit displays used for GPS : nonprecision instrument approaches was studied experimentally. Six pilots flew 120 approaches in a Frasca 242 light : twin aircraft ...

  19. Metainference: A Bayesian inference method for heterogeneous systems

    PubMed Central

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300

  20. Analysis of error-correction constraints in an optical disk.

    PubMed

    Roberts, J D; Ryley, A; Jones, D M; Burke, D

    1996-07-10

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
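
    The final CRC check mentioned is straightforward to illustrate: a burst of flipped bits changes the checksum, so a miscorrection that yields valid-looking but wrong data is still caught. A minimal sketch using CRC-32 (the CD-ROM's actual EDC polynomial differs from zlib's):

        import zlib

        sector = bytes(range(256)) * 8                 # stand-in for a 2048-byte data sector
        crc_stored = zlib.crc32(sector)

        corrupted = bytearray(sector)
        for i in range(100, 130):                      # 30-byte burst error
            corrupted[i] ^= 0xFF

        print(zlib.crc32(bytes(corrupted)) == crc_stored)  # False: burst detected
        print(zlib.crc32(sector) == crc_stored)            # True: clean sector passes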

  1. Analysis of error-correction constraints in an optical disk

    NASA Astrophysics Data System (ADS)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  2. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

    NASA Technical Reports Server (NTRS)

    VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

    1997-01-01

    The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).

  3. Demand forecasting of electricity in Indonesia with limited historical data

    NASA Astrophysics Data System (ADS)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity for electrical agents, providing a picture of future electricity demand. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt’s exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
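
    A minimal sketch of the grey model GM(1,1) in its standard formulation: accumulate the series, fit the whitened equation dx/dt + a·x = b by least squares, then difference the predictions back (the variable names and demand series are ours, not the paper's data):

        import numpy as np

        def gm11_forecast(x0, horizon):
            """Grey model GM(1,1) forecast for a short positive series x0."""
            x1 = np.cumsum(x0)                            # accumulated generating operation
            z = 0.5 * (x1[1:] + x1[:-1])                  # background values
            B = np.column_stack([-z, np.ones(len(z))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(1, len(x0) + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x1_hat = np.concatenate([[x0[0]], x1_hat])
            return np.diff(x1_hat)[len(x0) - 1:]          # forecasts beyond the sample

        demand = np.array([100.0, 108.0, 118.0, 127.0, 138.0])  # stand-in annual demand
        print(gm11_forecast(demand, horizon=3))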

  4. Hand-movement-based in-vehicle driver/front-seat passenger discrimination for centre console controls

    NASA Astrophysics Data System (ADS)

    Herrmann, Enrico; Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Successful user discrimination in a vehicle environment may allow a reduction in the number of switches, significantly reducing costs while increasing user convenience. The personalization of individual controls permits conditional passenger-enable/driver-disable options (and vice versa), which may yield safety improvements. The authors propose a prototype optical sensing system based on hand movement segmentation in near-infrared image sequences, implemented in an Audi A6 Avant. By analyzing the number of movements in special regions, the system recognizes the direction of the forearm and hand motion and decides whether the driver or the front-seat passenger is touching a control. The experimental evaluation is performed independently for uniformly and non-uniformly illuminated video data, as well as for the complete video data set, which includes both subsets. The general test yields error rates of up to 14.41% FPR / 16.82% FNR and 17.61% FPR / 14.77% FNR for the driver and passenger, respectively. Finally, the authors discuss the causes of the most frequently occurring errors as well as the prospects and limitations of optical sensing for user discrimination in passenger compartments.

  5. An Accurate and Fault-Tolerant Target Positioning System for Buildings Using Laser Rangefinders and Low-Cost MEMS-Based MARG Sensors

    PubMed Central

    Zhao, Lin; Guan, Dongxue; Landry, René Jr.; Cheng, Jianhua; Sydorenko, Kostyantyn

    2015-01-01

    Target positioning systems based on MEMS gyros and laser rangefinders (LRs) have extensive prospects due to their advantages of low cost, small size and easy realization. The target positioning accuracy is mainly determined by the LR’s attitude, which is derived from the gyros. However, the attitude error is large due to the inherent noise of isolated MEMS gyros. In this paper, both accelerometer/magnetometer and LR attitude aiding systems are introduced to aid the MEMS gyros. A no-reset Federated Kalman Filter (FKF) is employed, which consists of two local Kalman Filters (KFs) and a Master Filter (MF). The local KFs are designed using Direction Cosine Matrix (DCM)-based dynamic equations and the measurements from the two aiding systems. The KFs estimate the attitude simultaneously to limit the attitude errors resulting from the gyros. Then, the MF fuses the redundant attitude estimates to yield globally optimal estimates. Simulation and experimental results demonstrate that the FKF-based system can improve the target positioning accuracy effectively and provides good fault tolerance. PMID:26512672
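
    The master-filter fusion step of a no-reset FKF is essentially information-weighted averaging of the local estimates; a minimal sketch with two generic local attitude estimates (the state, names and numbers are ours, not the paper's implementation):

        import numpy as np

        def fuse_no_reset(x1, p1, x2, p2):
            """Master-filter fusion of two local KF estimates (information form)."""
            info = np.linalg.inv(p1) + np.linalg.inv(p2)   # combined information matrix
            p_global = np.linalg.inv(info)
            x_global = p_global @ (np.linalg.inv(p1) @ x1 + np.linalg.inv(p2) @ x2)
            return x_global, p_global

        # local attitude-error estimates (e.g. roll/pitch/yaw) and covariances
        x1, p1 = np.array([0.02, -0.01, 0.005]), np.diag([1e-4, 1e-4, 4e-4])
        x2, p2 = np.array([0.018, -0.008, 0.002]), np.diag([2e-4, 2e-4, 1e-4])
        print(fuse_no_reset(x1, p1, x2, p2)[0])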

  6. Experimental and Computational Studies of the Kinetics of the Reaction of Atomic Hydrogen with Methanethiol.

    PubMed

    Kerr, Katherine E; Alecu, Ionut M; Thompson, Kristopher M; Gao, Yide; Marshall, Paul

    2015-07-16

    The overall rate constant for H + CH3SH has been studied over 296-1007 K in an Ar bath gas using the laser flash photolysis method at 193 nm. H atoms were generated from CH3SH and in some cases NH3. They were detected via time-resolved resonance fluorescence. The results are summarized as k = (3.45 ± 0.19) × 10⁻¹¹ cm³ molecule⁻¹ s⁻¹ exp[-(6.92 ± 0.16) kJ mol⁻¹/RT], where the errors in the Arrhenius parameters are the statistical uncertainties at the 2σ level. Overall error limits of ±9% for k are proposed. In the overlapping temperature range there is very good agreement with the resonance fluorescence measurements of Wine et al. Ab initio data and transition state theory yield moderate accord with the total rate constant, but not with the prior mass spectrometry measurements by Amano et al. of the main product channels leading to CH3S + H2 and CH3 + H2S.
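
    With the reported parameters, the Arrhenius expression can be evaluated directly at any temperature in the fitted range; a quick sanity-check sketch (the function name is ours; units follow the abstract):

        import numpy as np

        R = 8.314462618e-3  # gas constant, kJ mol^-1 K^-1

        def k_h_ch3sh(temp_k, a=3.45e-11, ea=6.92):
            """Rate constant from the reported Arrhenius fit, cm^3 molecule^-1 s^-1."""
            return a * np.exp(-ea / (R * temp_k))

        print(k_h_ch3sh(296.0))   # ~2.1e-12 at the low end of the range
        print(k_h_ch3sh(1000.0))  # ~1.5e-11 near the high end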

  7. A Novel Position Compensation Scheme for Cable-Pulley Mechanisms Used in Laparoscopic Surgical Robots

    PubMed Central

    Liang, Yunlei; Du, Zhijiang; Sun, Lining

    2017-01-01

    The tendon-driven mechanism, which uses a cable and pulley to transmit power, is adopted by many surgical robots. However, backlash hysteresis is inherent in cable-pulley mechanisms, and this nonlinear behavior is a great challenge for precise position control during a surgical procedure. Previous studies mainly focused on the transmission characteristics of cable-driven systems and constructed transmission models under particular assumptions to solve the nonlinear problem. However, these approaches are limited because the modeling process is complex and the transmission models lack general applicability. This paper presents a novel position compensation control scheme to reduce the impact of backlash hysteresis on the positioning accuracy of surgical robots' end-effectors. A position compensation scheme using a support vector machine, based on feedforward control, is presented to reduce the position tracking error. To validate the proposed approach, experimental validations are conducted on our cable-pulley system and comparative experiments are carried out. The results show remarkable improvements in reducing the positioning error when the proposed scheme is used. PMID:28974011
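
    A minimal sketch of an SVM-based feedforward compensation of the kind described, using scikit-learn's SVR (the features, hysteresis model and data are placeholders, not the authors' model):

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)
        cmd = rng.uniform(-1.0, 1.0, 500)        # commanded joint position, rad
        direction = np.sign(np.gradient(cmd))    # motion direction drives the hysteresis
        backlash = 0.05 * direction + 0.01 * rng.normal(size=cmd.size)  # toy error

        model = SVR(kernel='rbf', C=10.0, epsilon=0.005)
        model.fit(np.column_stack([cmd, direction]), backlash)

        # feedforward: subtract the predicted transmission error from the command
        predicted = model.predict(np.column_stack([cmd, direction]))
        compensated = cmd - predicted
        print(np.std(backlash), np.std(backlash - predicted))  # residual error shrinks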

  8. Thermal dye double indicator dilution measurement of lung water in man: comparison with gravimetric measurements.

    PubMed Central

    Mihm, F G; Feeley, T W; Jamieson, S W

    1987-01-01

    The thermal dye double indicator dilution technique for estimating lung water was compared with gravimetric analyses in nine human subjects who were organ donors. As observed in animal studies, the thermal dye measurement of extravascular thermal volume (EVTV) consistently overestimated gravimetric extravascular lung water (EVLW), the mean (SEM) difference being 3.43 (0.59) ml/kg. In eight of the nine subjects, EVTV minus 3.43 ml/kg would yield an estimate of EVLW ranging, at the 95% confidence limits, from 3.23 ml/kg below to 3.37 ml/kg above the actual EVLW. Reproducibility, assessed with the standard error of the mean percentage, suggested that a 15% change in EVTV can be reliably detected with repeated measurements. One subject was excluded from analysis because the EVTV measurement grossly underestimated the actual EVLW; this error was associated with regional injury observed on gross examination of the lung. Experimental and clinical evidence suggests that the thermal dye measurement provides a reliable estimate of lung water in diffuse pulmonary oedema states. PMID:3616974

  9. Experimental and theoretical determination of sea-state bias in radar altimetry

    NASA Technical Reports Server (NTRS)

    Stewart, Robert H.

    1991-01-01

    The major unknown error in radar altimetry is due to waves on the sea surface which cause the mean radar-reflecting surface to be displaced from mean sea level. This is the electromagnetic bias. The primary motivation for the project was to understand the causes of the bias so that the error it produces in radar altimetry could be calculated and removed from altimeter measurements made from space by the Topex/Poseidon altimetric satellite. The goals of the project were: (1) observe radar scatter at vertical incidence using a simple radar on a platform for a wide variety of environmental conditions at the same time wind and wave conditions were measured; (2) calculate electromagnetic bias from the radar observations; (3) investigate the limitations of the present theory describing radar scatter at vertical incidence; (4) compare measured electromagnetic bias with bias calculated from theory using measurements of wind and waves made at the time of the radar measurements; and (5) if possible, extend the theory so bias can be calculated for a wider range of environmental conditions.

  10. Using pre-distorted PAM-4 signal and parallel resistance circuit to enhance the passive solar cell based visible light communication

    NASA Astrophysics Data System (ADS)

    Wang, Hao-Yu; Wu, Jhao-Ting; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung; Liao, Xin-Lan; Lin, Kun-Hsien; Wu, Wei-Liang; Chen, Yi-Yuan

    2018-01-01

    Using a solar cell (or photovoltaic cell) for visible light communication (VLC) is attractive. Apart from acting as a VLC receiver (Rx), the solar cell can provide energy harvesting. This can be used in self-powered smart devices, particularly in the emerging 'Internet of Things (IoT)' networks. Here, we propose and demonstrate for the first time using pre-distortion pulse-amplitude-modulation (PAM)-4 signal and parallel resistance circuit to enhance the transmission performance of solar cell Rx based VLC. Pre-distortion is a simple non-adaptive equalization technique that can significantly mitigate the slow charging and discharging of the solar cell. The equivalent circuit model of the solar cell and the operation of using parallel resistance to increase the bandwidth of the solar cell are discussed. By using the proposed schemes, the experimental results show that the data rate of the solar cell Rx based VLC can increase from 20 kbit/s to 1.25 Mbit/s (about 60 times) with the bit error rate (BER) satisfying the 7% forward error correction (FEC) limit.
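
    A minimal sketch of the transmit-side step, assuming a simple transition-emphasis form of pre-distortion; the PAM-4 level set and the boost factor are illustrative, not values from the paper:

```python
import numpy as np

def pam4_predistort(bits, levels=(-3.0, -1.0, 1.0, 3.0), boost=1.6):
    """Map bit pairs to PAM-4 levels and apply a simple non-adaptive
    pre-distortion: each level transition is emphasized to counteract
    the slow charging/discharging of the solar-cell receiver. The level
    set and 'boost' factor are illustrative, not values from the paper.
    """
    pairs = np.asarray(bits).reshape(-1, 2)          # needs an even bit count
    symbols = np.array(levels)[pairs[:, 0] * 2 + pairs[:, 1]]
    out = symbols.copy()
    out[1:] += boost * np.diff(symbols)              # emphasize transitions
    return out

print(pam4_predistort([0, 0, 1, 1, 0, 1, 1, 0]))     # -> [-3.  12.6 -7.4  4.2]
```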

  11. Identification of widespread adenosine nucleotide binding in Mycobacterium tuberculosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ansong, Charles; Ortega, Corrie; Payne, Samuel H.

    The annotation of protein function is almost completely performed by in silico approaches. However, computational prediction of protein function is frequently incomplete and error prone. In Mycobacterium tuberculosis (Mtb), ~25% of all genes have no predicted function and are annotated as hypothetical proteins. This lack of functional information severely limits our understanding of Mtb pathogenicity. Current tools for experimental functional annotation are limited and often do not scale to entire protein families. Here, we report a generally applicable chemical biology platform to functionally annotate bacterial proteins by combining activity-based protein profiling (ABPP) and quantitative LC-MS-based proteomics. As an example of this approach for high-throughput protein functional validation and discovery, we experimentally annotate the families of ATP-binding proteins in Mtb. Our data experimentally validate prior in silico predictions of >250 ATPases and adenosine nucleotide-binding proteins, and reveal 73 hypothetical proteins as novel ATP-binding proteins. We identify adenosine cofactor interactions with many hypothetical proteins containing a diversity of unrelated sequences, providing a new and expanded view of adenosine nucleotide binding in Mtb. Furthermore, many of these hypothetical proteins are both unique to Mycobacteria and essential for infection, suggesting specialized functions in mycobacterial physiology and pathogenicity. Thus, we provide a generally applicable approach for high-throughput protein function discovery and validation, and highlight several ways in which application of activity-based proteomics data can improve the quality of functional annotations to facilitate novel biological insights.

  12. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase-shifting error and the zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase-shifting error and the zeroth order effect when the phase-shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that, compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  13. For how long can we predict the weather? - Insights into atmospheric predictability from global convection-allowing simulations

    NASA Astrophysics Data System (ADS)

    Judt, Falko

    2017-04-01

    A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not very well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day long "nature run" and a simulation that was perturbed with small-amplitude noise, but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and begin a phase of exponential growth after 2-3 days, once they contaminate the baroclinic zones. After 16 days, the globally averaged error saturates, suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, which is in line with earlier estimates. However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex problem. The comparatively slower error growth in the tropics and in the stratosphere indicates that certain weather phenomena could potentially have longer predictability than currently thought.

  14. SU-E-T-144: Effective Analysis of VMAT QA Generated Trajectory Log Files for Medical Accelerator Predictive Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, CM; Baydush, AH; Nguyen, C

    Purpose: To determine the effectiveness of SPC analysis for a model predictive maintenance process that uses accelerator generated parameter and performance data contained in trajectory log files. Methods: Each trajectory file is decoded and a total of 131 axes positions are recorded (collimator jaw position, gantry angle, each MLC, etc.). This raw data is processed and either axis positions are extracted at critical points during the delivery or positional change over time is used to determine axis velocity. The focus of our analysis is the accuracy, reproducibility and fidelity of each axis. A reference positional trace of the gantry and each MLC is used as a motion baseline for cross correlation (CC) analysis. A total of 494 parameters (482 MLC related) were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and parameter/system specifications. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on: TG-142 and published analysis of VMAT delivery accuracy. Results: All errors introduced were detected. Synthetic positional errors of 2mm for collimator jaw and MLC carriage exceeded the chart limits. Gantry speed and each MLC speed are analyzed at two different points in the delivery. Simulated Gantry speed error (0.2 deg/sec) and MLC speed error (0.1 cm/sec) exceeded the speed chart limits. Gantry position error of 0.2 deg was detected by the CC maximum value charts. The MLC position error of 0.1 cm was detected by the CC maximum value location charts for every MLC. Conclusion: SPC I/MR evaluation of trajectory log file parameters may be effective in providing an early warning of performance degradation or component failure for medical accelerator systems.
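
    For reference, standard I/MR chart limits can be computed as below, using the conventional 3σ control-chart constants (d2 = 1.128 and D4 = 3.267 for a span-2 moving range); the paper's hybrid limits additionally fold in parameter/system specifications, which this sketch omits:

```python
import numpy as np

def imr_limits(x):
    """Standard 3-sigma Individual/Moving-Range chart limits.

    Uses the conventional control-chart constants for a moving range of
    span 2: d2 = 1.128 (sigma estimate) and D4 = 3.267 (MR upper limit).
    """
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))
    sigma_hat = mr.mean() / 1.128
    i_lcl, i_ucl = x.mean() - 3 * sigma_hat, x.mean() + 3 * sigma_hat
    mr_ucl = 3.267 * mr.mean()
    return (i_lcl, i_ucl), (0.0, mr_ucl)

# Hypothetical jaw positions (cm) extracted from successive log files.
jaw = [10.01, 10.00, 10.02, 9.99, 10.01, 10.00, 10.03, 10.00]
print(imr_limits(jaw))
```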

  15. Resolution limits of ultrafast ultrasound localization microscopy

    NASA Astrophysics Data System (ADS)

    Desailly, Yann; Pierre, Juliette; Couture, Olivier; Tanter, Mickael

    2015-11-01

    As in other imaging methods based on waves, the resolution of ultrasound imaging is limited by the wavelength. However, the diffraction limit can be overcome by super-localizing single events from isolated sources. In recent years, we developed plane-wave ultrasound allowing frame rates up to 20 000 fps. Ultrafast processes such as rapid movement or disruption of ultrasound contrast agents (UCA) can thus be monitored, providing us with distinct punctual sources that can be localized beyond the diffraction limit. We previously showed experimentally that resolutions beyond λ/10 can be reached in ultrafast ultrasound localization microscopy (uULM) using a 128 transducer matrix in reception. Higher resolutions are theoretically achievable, and the aim of this study is to predict the maximum resolution in uULM with respect to acquisition parameters (frequency, transducer geometry, sampling electronics). The accuracy of uULM is the error on the localization of a bubble, considered a point source in a homogeneous medium. The proposed model consists of two steps: determining the timing accuracy of the microbubble echo in radiofrequency data, then converting this timing accuracy into spatial accuracy. The simplified model predicts a maximum resolution of 40 μm for a 1.75 MHz transducer matrix composed of two rows of 64 elements. Experimental confirmation of the model was performed by flowing microbubbles within a 60 μm microfluidic channel and localizing their blinking under ultrafast imaging (500 Hz frame rate). The experimental resolution, determined as the standard deviation in the positioning of the microbubbles, was predicted within 6 μm (13%) of the theoretical values and followed the analytical relationship with respect to the number of elements and depth. Understanding the underlying physical principles determining the resolution of superlocalization will allow the optimization of the imaging setup for each organ. Ultimately, accuracies better than the size of capillaries are achievable at several centimeters' depth.

  16. Study on the calibration and optimization of double theodolites baseline

    NASA Astrophysics Data System (ADS)

    Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao

    2018-01-01

    Because the baseline of a double-theodolite measurement system serves as the benchmark for the system's scale and affects its accuracy, this paper puts forward a method for calibrating and optimizing the baseline. Double theodolites are used to measure a reference ruler of known length, and the baseline is then derived by inverting the measurement formula. Analyses based on the error propagation law show that the baseline error function is an important index of system accuracy, and that the position and posture of the reference ruler affect the baseline error. An optimization model is established with the baseline error function as the objective function, and the position and posture of the reference ruler are optimized. The simulation results show that the height of the reference ruler has no effect on the baseline error, that the effect of posture is not uniform, and that the baseline error is smallest when the reference ruler is placed at x = 500 mm and y = 1000 mm in the measurement space. The experimental results are consistent with the theoretical analyses. The study of reference ruler placement presented here thus provides a reference for improving the accuracy of double-theodolite measurement systems.

  17. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  18. Conical Probe Calibration and Wind Tunnel Data Analysis of the Channeled Centerbody Inlet Experiment

    NASA Technical Reports Server (NTRS)

    Truong, Samson Siu

    2011-01-01

    For a multi-hole test probe undergoing wind tunnel tests, the resulting data need to be analyzed for any significant trends. These trends include relating the pressure distributions, the geometric orientation, and the local velocity vector to one another. However, experimental runs always involve some sort of error. As a result, a calibration procedure is required to compensate for this error; in this case, the misalignment bias angles resulting from the angularity of the test probe relative to the local velocity vector. Through a series of calibration steps presented here, the angular biases are determined and removed from the data sets. By removing the misalignment, smoother pressure distributions contribute to more accurate experimental results, which in turn can be compared to theoretical and actual in-flight results to identify any similarities. Error analyses will also be performed to verify the accuracy of the calibration error reduction. The resulting calibrated data will be implemented into an in-flight RTF script that will output critical flight parameters during future CCIE experimental test runs. All of these tasks are associated with and contribute to NASA Dryden Flight Research Center's F-15B Research Testbed's Small Business Innovation Research project, the Channeled Centerbody Inlet Experiment.

  19. Illusory conjunctions reflect the time course of the attentional blink.

    PubMed

    Botella, Juan; Privado, Jesús; de Liaño, Beatriz Gil-Gómez; Suero, Manuel

    2011-07-01

    Illusory conjunctions in the time domain are binding errors for features from stimuli presented sequentially but in the same spatial position. A similar experimental paradigm is employed for the attentional blink (AB), an impairment of performance for the second of two targets when it is presented 200-500 msec after the first target. The analysis of errors along the time course of the AB allows the testing of models of illusory conjunctions. In an experiment, observers identified one (control condition) or two (experimental condition) letters in a specified color, so that illusory conjunctions in each response could be linked to specific positions in the series. Two items in the target colors (red and white, embedded in distractors of different colors) were employed in four conditions defined according to whether both targets were in the same or different colors. Besides the U-shaped function for hits, the errors were analyzed by calculating several response parameters reflecting characteristics such as the average position of the responses or the attentional suppression during the blink. These error parameters cluster into two time courses, as would be expected from prevailing models of the AB. Furthermore, the results match the predictions from Botella, Barriopedro, and Suero's (Journal of Experimental Psychology: Human Perception and Performance, 27, 1452-1467, 2001) model for illusory conjunctions.

  20. Two-sample binary phase 2 trials with low type I error and low sample size

    PubMed Central

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A.

    2017-01-01

    Summary We address the design of two-stage clinical trials comparing experimental and control patients. Our end-point is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p0 and the alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules reject the null hypothesis when the number of successes in the Experimental arm, E, sufficiently exceeds C, the number among Controls. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E - C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the true success rate is significantly higher than the specified null. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. PMID:28118686
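
    The type I error of such a combined rule can be computed exactly by enumerating the two binomial counts. The sketch below is a single-stage simplification of the two-stage designs discussed above, with illustrative sample sizes and thresholds:

```python
from math import comb

def type_i_error(nE, nC, p0, m, r):
    """P(reject | success probability p0 in both arms) for the combined
    rule (E >= m) and (E - C > r), with independent binomial counts.
    A single-stage simplification; nE, nC, m, r below are illustrative.
    """
    pE = [comb(nE, k) * p0**k * (1 - p0) ** (nE - k) for k in range(nE + 1)]
    pC = [comb(nC, k) * p0**k * (1 - p0) ** (nC - k) for k in range(nC + 1)]
    return sum(pE[e] * pC[c]
               for e in range(m, nE + 1)
               for c in range(nC + 1)
               if e - c > r)

# 2:1 randomization as in the designs above: 40 experimental, 20 control.
print(f"alpha = {type_i_error(nE=40, nC=20, p0=0.2, m=12, r=5):.4f}")
```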

  1. Magnetometer-enhanced personal locator for tunnels and GPS-denied outdoor environments

    NASA Astrophysics Data System (ADS)

    Kwanmuang, Surat; Ojeda, Lauro; Borenstein, Johann

    2011-06-01

    This paper describes recent advances with our earlier developed Personal Dead-reckoning (PDR) system for GPS-denied environments. The PDR system uses a foot-mounted Inertial Measurement Unit (IMU) that also houses a three-axis magnetometer. In earlier work we developed methods for correcting the drift errors in the accelerometers, thereby allowing very accurate measurements of distance traveled. In addition, we developed a powerful heuristic method for correcting heading errors caused by gyro drift. The heuristics exploit the rectilinear features found in almost all manmade structures and therefore limit this technology to indoor use only. Most recently we integrated a three-axis magnetometer with the IMU, using a Kalman Filter. While it is well known that the ubiquitous magnetic disturbances found in most modern buildings render magnetometers almost completely useless indoors, these sensors are nonetheless very effective in pristine outdoor environments as well as in some tunnels and caves. The present paper describes the integrated magnetometer/IMU system and presents detailed experimental results. Specifically, the paper reports results of an objective test conducted by Firefighters of California's CAL-FIRE. In this particular test, two firefighters in full operational gear and one civilian hiked up a two-mile long mountain trail over rocky, sometimes steeply inclined terrain, each wearing one of our magnetometer-enhanced PDR systems but not using any GPS. During the hour-long hike the average position error was about 20 meters and the maximum error was less than 45 meters, which is about 1.4% of distance traveled for all three PDR systems.
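
    A minimal illustration of gyro/magnetometer heading fusion, using a complementary filter as a stand-in for the Kalman filter described above; the weighting and all signals are illustrative:

```python
import numpy as np

def fuse_heading(gyro_rate, mag_heading, dt, alpha=0.98, theta0=0.0):
    """Complementary-filter heading fusion: integrate the gyro for
    short-term accuracy and nudge toward the magnetometer's drift-free
    reference. alpha weights gyro propagation against correction.
    """
    theta, out = theta0, []
    for w, m in zip(gyro_rate, mag_heading):
        pred = theta + w * dt                       # gyro propagation
        innov = np.angle(np.exp(1j * (m - pred)))   # wrapped to (-pi, pi]
        theta = pred + (1.0 - alpha) * innov        # magnetometer correction
        out.append(theta)
    return np.array(out)

dt = 0.01
gyro = np.full(500, 0.1) + 0.002        # rate gyro with bias, rad/s
mag = 0.1 * dt * np.arange(500)         # magnetic heading, true rate 0.1 rad/s
print(fuse_heading(gyro, mag, dt)[-1])  # stays near the true final heading
```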

  2. Alternating phase-shifting masks: phase determination and impact of quartz defects--theoretical and experimental results

    NASA Astrophysics Data System (ADS)

    Griesinger, Uwe A.; Dettmann, Wolfgang; Hennig, Mario; Heumann, Jan P.; Koehle, Roderick; Ludwig, Ralf; Verbeek, Martin; Zarrabian, Mardjan

    2002-07-01

    In optical lithography, balancing the aerial image of an alternating phase-shifting mask (alt. PSM) is a major challenge. For the current exposure wavelengths (248 nm and 193 nm), an optimum etching method is necessary to overcome imbalance effects. Defects play an important role in the imbalances of the aerial image. In this contribution, the methodology of global phase-imbalance control is applied also to local imbalances that result from quartz defects. The effective phase error can be determined with an AIMS system by measuring the CD width between the images of deep and shallow trenches at different focus settings. The AIMS results are analyzed in comparison to the simulated and lithographic print results of the alternating structures. For the analysis of local aerial-image imbalances it is necessary to investigate the capability of detecting these phase defects with state-of-the-art inspection systems. Alternating PSMs containing programmed defects were inspected with different algorithms to investigate the capture rate of special phase defects as a function of defect size. Besides inspection, repair of phase defects is also an important task. In this contribution we show the effect of repair on the optical behavior of phase defects. Due to the limited accuracy of the repair tools, the repaired area still shows a certain local phase error. This error can be caused either by residual quartz material or by substrate damage. The influence of such repair-induced phase errors on the aerial image was investigated.

  3. Oxygen monitor for semi-closed rebreathers: design and use for estimating metabolic oxygen consumption

    NASA Astrophysics Data System (ADS)

    Clarke, John R.; Southerland, David

    1999-07-01

    Semi-closed circuit underwater breathing apparatus (UBA) provide a constant flow of mixed gas containing oxygen and nitrogen or helium to a diver. However, as a diver's work rate and metabolic oxygen consumption vary, the oxygen percentages within the UBA can change dramatically. Hence, even a resting diver can become hyperoxic and be at risk for oxygen-induced seizures. Conversely, a hard-working diver can become hypoxic and lose consciousness. Unfortunately, current semi-closed UBA do not contain oxygen monitors. We describe a simple oxygen monitoring system designed and prototyped at the Navy Experimental Diving Unit. The main monitor components include a PIC microcontroller, an analog-to-digital converter, a bicolor LED, and an oxygen sensor. The LED, affixed to the diver's mask, is steady green if the oxygen partial pressure is within pre-defined acceptable limits. A more advanced monitor with a depth sensor and additional computational circuitry could be used to estimate metabolic oxygen consumption. The computational algorithm uses the oxygen partial pressure and the diver's depth to compute oxygen consumption from the steady-state solution of the differential equation describing oxygen concentrations within the UBA. Consequently, dive transients induce errors in the oxygen-consumption estimate. To evaluate these errors, we used a computer simulation of semi-closed circuit UBA dives to generate transient-rich data as input to the estimation algorithm. A step change in simulated oxygen consumption elicits a monoexponential change in the estimate, with a time constant of 5 to 10 minutes. Methods for predicting error and providing a probable-error indication to the diver are presented.

  4. An assessment of envelope-based demodulation in case of proximity of carrier and modulation frequencies

    NASA Astrophysics Data System (ADS)

    Shahriar, Md Rifat; Borghesani, Pietro; Randall, R. B.; Tan, Andy C. C.

    2017-11-01

    Demodulation is a necessary step in the field of diagnostics to reveal faults whose signatures appear as an amplitude and/or frequency modulation. The Hilbert transform has conventionally been used for the calculation of the analytic signal required in the demodulation process. However, the carrier and modulation frequencies must meet the conditions set by the Bedrosian identity for the Hilbert transform to be applicable for demodulation. This condition, basically requiring the carrier frequency to be sufficiently higher than the frequency of the modulation harmonics, is usually satisfied in many traditional diagnostic applications (e.g. vibration analysis of gear and bearing faults) due to the order-of-magnitude ratio between the carrier and modulation frequency. However, the diversification of diagnostic approaches and applications shows cases (e.g. electrical signature analysis-based diagnostics) where the carrier frequency is in close proximity to the modulation frequency, thus challenging the applicability of the Bedrosian theorem. This work presents an analytic study to quantify the error introduced by the Hilbert transform-based demodulation when the Bedrosian identity is not satisfied and proposes a mitigation strategy to combat the error. An experimental study is also carried out to verify the analytical results. The outcome of the error analysis sets a confidence limit on the estimated modulation (both shape and magnitude) achieved through the Hilbert transform-based demodulation in case of a violated Bedrosian theorem. However, the proposed mitigation strategy is found effective in combating the demodulation error arising in this scenario, thus extending the applicability of the Hilbert transform-based demodulation.
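
    The effect is easy to demonstrate numerically: when the carrier is not well above the modulation frequency, the analytic-signal envelope deviates from the true one. A small sketch with illustrative frequencies:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
fc, fm = 50.0, 40.0        # carrier close to modulation: Bedrosian violated
true_env = 1.0 + 0.8 * np.cos(2 * np.pi * fm * t)
x = true_env * np.cos(2 * np.pi * fc * t)

est_env = np.abs(hilbert(x))                      # analytic-signal envelope
rms_err = np.sqrt(np.mean((est_env - true_env) ** 2))
print(f"RMS envelope error: {rms_err:.3f}")       # large, since fc is not >> fm
```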

  5. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutant setting.

  6. Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis

    NASA Technical Reports Server (NTRS)

    Ghrist, Richard W.; Plakalovic, Dragan

    2012-01-01

    An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to the probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc ≥ 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
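
    The Monte Carlo Pc computation can be sketched as follows. This toy version samples a Cartesian Gaussian directly; the study itself samples in equinoctial elements precisely to avoid the Cartesian-Gaussian limitation, so treat this only as the baseline approach, with illustrative numbers:

```python
import numpy as np

def pc_monte_carlo(mu_rel, cov_rel, hard_body_radius, n=200_000, seed=1):
    """Monte Carlo probability of collision at TCA from the relative
    position mean and covariance (here Cartesian, in km)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu_rel, cov_rel, size=n)
    miss = np.linalg.norm(samples, axis=1)
    return float(np.mean(miss < hard_body_radius))

mu = np.array([0.02, 0.01, -0.005])        # km, illustrative
cov = np.diag([1e-4, 4e-4, 1e-5])          # km^2, illustrative
print(pc_monte_carlo(mu, cov, hard_body_radius=0.02))
```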

  7. Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources

    NASA Astrophysics Data System (ADS)

    Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.

    2011-05-01

    The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.

  8. Large Sample Confidence Limits for Goodman and Kruskal's Proportional Prediction Measure TAU-b

    ERIC Educational Resources Information Center

    Berry, Kenneth J.; Mielke, Paul W.

    1976-01-01

    A Fortran Extended program which computes Goodman and Kruskal's Tau-b, its asymmetrical counterpart, Tau-a, and three sets of confidence limits for each coefficient under full multinomial and proportional stratified sampling is presented. A correction of an error in the calculation of the large sample standard error of Tau-b is discussed.…

  9. The Influence of Methylphenidate on Hyperactivity and Attention Deficits in Children With ADHD: A Virtual Classroom Test.

    PubMed

    Mühlberger, A; Jekel, K; Probst, T; Schecklmann, M; Conzelmann, A; Andreatta, M; Rizzo, A A; Pauli, P; Romanos, M

    2016-05-13

    This study compares the performance in a continuous performance test within a virtual reality classroom (CPT-VRC) between medicated children with ADHD, unmedicated children with ADHD, and healthy children. N = 94 children with ADHD (n = 26 of them received methylphenidate and n = 68 were unmedicated) and n = 34 healthy children performed the CPT-VRC. Omission errors, reaction time/variability, commission errors, and body movements were assessed. Furthermore, ADHD questionnaires were administered and compared with the CPT-VRC measures. The unmedicated ADHD group exhibited more omission errors and showed slower reaction times than the healthy group. Reaction time variability was higher in the unmedicated ADHD group compared with both the healthy and the medicated ADHD group. Omission errors and reaction time variability were associated with inattentiveness ratings of experimenters. Head movements were correlated with hyperactivity ratings of parents and experimenters. Virtual reality is a promising technology to assess ADHD symptoms in an ecologically valid environment. © The Author(s) 2016.

  10. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells.

    PubMed

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-10-14

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments on two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5% when the total flowrate is 5-60 m³/d and the water-cut is higher than 60%, and better than 7% when the total flowrate is 2-60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.

  11. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells

    PubMed Central

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-01-01

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments on two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5% when the total flowrate is 5–60 m³/d and the water-cut is higher than 60%, and better than 7% when the total flowrate is 2–60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow. PMID:27754412

  12. The immediate effects of therapeutic keyboard music playing for finger training in adults undergoing hand rehabilitation.

    PubMed

    Zhang, Xiaoying; Liu, Songhuai; Yang, Degang; Du, Liangjie; Wang, Ziyuan

    2016-08-01

    [Purpose] The purpose of this study was to examine the immediate effects of therapeutic keyboard music playing on the finger function of subjects' hands through measurements of the joint position error test, surface electromyography, probe reaction time, and writing time. [Subjects and Methods] Ten subjects were divided randomly into experimental and control groups. The experimental group used therapeutic keyboard music playing and the control group used grip training. All subjects were assessed and evaluated by the joint position error test, surface electromyography, probe reaction time, and writing time. [Results] After accomplishing therapeutic keyboard music playing and grip training, surface electromyography of the two groups showed no significant change, but joint position error test, probe reaction time, and writing time obviously improved. [Conclusion] These results suggest that therapeutic keyboard music playing is an effective and novel treatment for improving joint position error test scores, probe reaction time, and writing time, and it should be promoted widely in clinics.

  13. Investigating Experimental Effects within the Framework of Structural Equation Modeling: An Example with Effects on Both Error Scores and Reaction Times

    ERIC Educational Resources Information Center

    Schweizer, Karl

    2008-01-01

    Structural equation modeling provides the framework for investigating experimental effects on the basis of variances and covariances in repeated measurements. A special type of confirmatory factor analysis as part of this framework enables the appropriate representation of the experimental effect and the separation of experimental and…

  14. A multi-frequency inverse-phase error compensation method for projector nonlinearity in 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Mao, Cuili; Lu, Rongsheng; Liu, Zhijian

    2018-07-01

    In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of the periodical phase errors is analyzed. The periodical phase errors can be adaptively compensated in the wrapped maps by using a set of fringe patterns. The compensated phase is then unwrapped with the multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodical phase error without calibrating the measurement system. Simulation and experimental results are presented to demonstrate the validity of the proposed approach.

  15. Research of laser echo signal simulator

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Shi, Rui; Wang, Xin; Li, Zhou

    2015-11-01

    The laser echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR. A system model and a time-series model of the laser echo signal simulator are established. Factors that could introduce fixed and random errors into the simulated return signals are identified, and these system insertion errors are then quantified. Using this theoretical model, the simulation system is investigated experimentally. After correcting for the fixed error, the results indicate that the range error of the simulated laser return signal is less than 0.25 m and that the system can simulate distances from 50 m to 20 km.

  16. Error Covariance Penalized Regression: A novel multivariate model combining penalized regression with multivariate error structure.

    PubMed

    Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C

    2018-06-29

    A new multivariate regression model, named Error Covariance Penalized Regression (ECPR) is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid and near infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.
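
    The penalty structure can be sketched in a few lines. Below is an illustrative ridge-like estimator in which the usual identity penalty is replaced by an error covariance matrix; the published ECPR model may differ in detail, and all data here are synthetic:

```python
import numpy as np

def ecpr_fit(X, y, Sigma, lam):
    """Illustrative error-covariance-penalized estimate:
    beta = (X'X + lam * Sigma)^(-1) X'y.
    With Sigma = I this reduces to ridge regression; using the
    measurement-error covariance instead is the penalization idea."""
    return np.linalg.solve(X.T @ X + lam * Sigma, X.T @ y)

# Toy example: 20 samples, 5 variables, correlated measurement noise.
rng = np.random.default_rng(3)
X = rng.standard_normal((20, 5))
beta_true = np.array([1.0, 0.0, -2.0, 0.5, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(20)
Sigma = np.eye(5) + 0.3 * np.ones((5, 5))   # stand-in error covariance
print(ecpr_fit(X, y, Sigma, lam=0.5))
```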

  17. Baseline Error Analysis and Experimental Validation for Height Measurement of Formation InSAR Satellite

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, T.; Zhang, X.; Geng, X.

    2018-04-01

    In this paper, we propose a stochastic model of InSAR height measurement that accounts for the interferometric geometry and directly describes the relationship between baseline error and height-measurement error. A simulation analysis using TanDEM-X parameters was then implemented to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full simulation-based validation of the InSAR stochastic model was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation behavior of InSAR height measurement were fully evaluated.

  18. The pros and cons of code validation

    NASA Technical Reports Server (NTRS)

    Bobbitt, Percy J.

    1988-01-01

    Computational and wind tunnel error sources are examined and quantified using specific calculations on experimental data, and a substantial comparison of theoretical and experimental results, i.e., a code validation, is discussed. Wind tunnel error sources considered include wall interference, sting effects, Reynolds number effects, flow quality and transition, and instrumentation such as strain gage balances, electronically scanned pressure systems, hot film gages, hot wire anemometers, and laser velocimeters. Computational error sources include the math model equation set, the solution algorithm, artificial viscosity/dissipation, boundary conditions, the uniqueness of solutions, grid resolution, turbulence modeling, and Reynolds number effects. It is concluded that, although improvements in theory are being made more quickly than in experiments, wind tunnel research retains the advantage of a realistic transition process, rather than one imposed by a turbulence model, in free-transition tests.

  19. An accurate ab initio quartic force field for ammonia

    NASA Technical Reports Server (NTRS)

    Martin, J. M. L.; Lee, Timothy J.; Taylor, Peter R.

    1992-01-01

    The quartic force field of ammonia is computed using basis sets of spdf/spd and spdfg/spdf quality and an augmented coupled cluster method. After correcting for Fermi resonance, the computed fundamentals and ν4 overtones agree on average to better than 3 cm^-1 with the experimental ones, except for ν2. The discrepancy for ν2 is principally due to higher-order anharmonicity effects. The computed ω1, ω3, and ω4 confirm the recent experimental determination by Lehmann and Coy (1988) but are associated with smaller error bars. The discrepancy between the computed and experimental ω2 is far outside the expected error range, which is also attributed to higher-order anharmonicity effects not accounted for in the experimental determination. Spectroscopic constants are predicted for a number of symmetric and asymmetric top isotopomers of NH3.

  20. Development of advanced methods for analysis of experimental data in diffusion

    NASA Astrophysics Data System (ADS)

    Jaques, Alonso V.

    There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method for estimating the differentiation operation on the data, i.e., the concentration gradient term, which is central to determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations it shows an increased accuracy in the estimated diffusion coefficients. We also present a regression approach for estimating linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. The equation for the analytical solution is reformulated to reduce the size of the problem and accelerate convergence. The objective function for the regression can incorporate point estimates of the error in the concentration, improving the statistical confidence in the estimated diffusivity matrix. Case studies are presented to demonstrate the reliability and stability of the method. To the best of our knowledge there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using a Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. Finally, to analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we propose the use of fractional calculus to represent these processes analytically. We apply the fractional calculus approach to anomalous diffusion through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, and the governing equation is not necessarily a second-order differential equation; rather, differentiation is of fractional order α, where 1 ≤ α < 2.
For α = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed using electrochemical permeation methods and the traditional, Fickian-based theory. Experimental evidence shows that the latter analytical approach is not always appropriate, because reported data show qualitative (and quantitative) deviations from its theoretical scaling predictions. Preliminary analysis of data shows better agreement with fractional diffusion analysis than with traditional square-root scaling. Although there is a large amount of work on the estimation of the diffusivity from experimental data, reported studies typically present only the analytical description of the diffusivity, without scatter estimates. However, because these studies do not consider effects produced by instrument analysis, their direct applicability is limited. We propose alternatives to address these effects and to evaluate their influence on the final resulting diffusivity values.
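
    For context, the classical Boltzmann-Matano analysis that the regularization work builds on can be sketched as follows, with plain finite differences standing in for the regularized derivative and a synthetic profile of known diffusivity:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import erfc

def boltzmann_matano(x, C, t):
    """Classical Boltzmann-Matano estimate of D(C) from a profile C(x)
    measured at anneal time t (C monotonic in x). The derivative dC/dx
    is the noise-sensitive step that regularized differentiation is
    meant to stabilize; plain finite differences are used here for
    brevity, and the profile ends are ill-conditioned.
    """
    x, C = np.asarray(x, float), np.asarray(C, float)
    xM = trapezoid(x, C) / (C[-1] - C[0])      # Matano plane position
    dCdx = np.gradient(C, x)
    D = np.empty_like(C)
    for i in range(len(C)):
        integral = trapezoid(x[: i + 1] - xM, C[: i + 1])
        D[i] = -integral / (2.0 * t * dCdx[i])
    return D

# Synthetic diffusion-couple profile with known diffusivity.
t, D_true = 3600.0, 1e-14                      # s, m^2/s
x = np.linspace(-5e-5, 5e-5, 2001)             # m
C = 0.5 * erfc(x / (2.0 * np.sqrt(D_true * t)))
D = boltzmann_matano(x, C, t)
print(f"{D[1000]:.2e}")                        # ~1e-14 near mid-profile
```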

  1. Primer ID Validates Template Sampling Depth and Greatly Reduces the Error Rate of Next-Generation Sequencing of HIV-1 Genomic RNA Populations

    PubMed Central

    Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr

    2015-01-01

    ABSTRACT Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1 due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
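
    The Poisson cutoff logic described above can be sketched as follows; the parameterization is illustrative rather than the paper's exact rule:

```python
from math import exp, factorial

def minority_variant_cutoff(n_templates, error_rate=1e-4, alpha=0.05):
    """Smallest count k such that observing >= k mutated template
    consensus sequences at one position is improbable (< alpha) under a
    Poisson model with the quoted residual error rate."""
    lam = n_templates * error_rate
    k, cdf = 0, 0.0
    while True:
        cdf += exp(-lam) * lam**k / factorial(k)
        if 1.0 - cdf < alpha:
            return k + 1
        k += 1

print(minority_variant_cutoff(10_000))   # e.g. 4 mutant consensus reads needed
```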

  2. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).

  3. Comparison of detection limit in fiber-based conventional, amplified, and gain-clamped cavity ring-down techniques

    NASA Astrophysics Data System (ADS)

    Sharma, K.; Abdul Khudus, M. I. M.; Alam, S. U.; Bhattacharya, S.; Venkitesh, D.; Brambilla, G.

    2018-01-01

    Relative performance and detection limit of conventional, amplified, and gain-clamped cavity ring-down techniques (CRDT) in all-fiber configurations are compared experimentally for the first time. Refractive index measurement using the evanescent field in tapered fibers is used as a benchmark for the comparison. The systematic optimization of a nested-loop configuration in gain-clamped CRDT is also discussed, which is crucial for achieving a constant gain in a CRDT experiment. It is found that even though conventional CRDT has the lowest standard error in ring-down time (Δτ), the value of the ring-down time (τ) is very small, thus leading to a poor detection limit. Amplified CRDT provides an improvement in τ, albeit with two orders of magnitude higher Δτ due to amplifier noise. The nested-loop configuration in gain-clamped CRDT helps in reducing Δτ by an order of magnitude as compared to amplified CRDT whilst retaining the improvement in τ. A detection limit of 1.03 × 10^-4 RIU at a refractive index of 1.322 with a 3 mm long and 4.5 μm diameter tapered fiber is demonstrated with the gain-clamped CRDT.

  4. Using direct numerical simulation to improve experimental measurements of inertial particle radial relative velocities

    NASA Astrophysics Data System (ADS)

    Ireland, Peter J.; Collins, Lance R.

    2012-11-01

    Turbulence-induced collision of inertial particles may contribute to the rapid onset of precipitation in warm cumulus clouds. The particle collision frequency is determined from two parameters: the radial distribution function g(r) and the mean inward radial relative velocity, wr. These quantities have been measured in three dimensions computationally, using direct numerical simulation (DNS), and experimentally, using digital holographic particle image velocimetry (DHPIV). While good quantitative agreement has been attained between computational and experimental measures of g(r) (Salazar et al. 2008), measures of wr have not reached that stage (de Jong et al. 2010). We apply DNS to mimic the experimental image analysis used in the relative velocity measurement. To account for experimental errors, we add noise to the particle positions and 'measure' the velocity from these positions. Our DNS shows that the experimental errors are inherent to the DHPIV setup, and so we explore an alternate approach, in which velocities are measured along thin two-dimensional planes using standard PIV. We show that this technique better recovers the correct radial relative velocity PDFs and suggest optimal parameter ranges for the experimental measurements.
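
    For concreteness, the pair statistic itself can be computed directly from particle positions and velocities; the brute-force sketch below uses synthetic data and keeps only approaching (inward) pairs:

```python
import numpy as np

def mean_inward_radial_velocity(pos, vel, r, dr):
    """Mean of the negative (inward) radial relative velocities for all
    particle pairs with separation in [r, r + dr). Brute-force O(N^2)
    pairing for clarity; DNS post-processing would use cell lists.
    pos, vel: (N, 3) arrays; all values below are synthetic.
    """
    vals = []
    for i in range(len(pos)):
        dx = pos[i + 1:] - pos[i]
        dv = vel[i + 1:] - vel[i]
        dist = np.linalg.norm(dx, axis=1)
        mask = (dist >= r) & (dist < r + dr)
        wr = np.einsum("ij,ij->i", dv[mask], dx[mask]) / dist[mask]
        vals.extend(wr[wr < 0.0])          # keep approaching pairs only
    return float(np.mean(vals)) if vals else 0.0

rng = np.random.default_rng(7)
pos = rng.random((300, 3))
vel = rng.standard_normal((300, 3))
print(mean_inward_radial_velocity(pos, vel, r=0.1, dr=0.02))
```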

  5. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  6. Image overlay solution based on threshold detection for a compact near infrared fluorescence goggle system

    NASA Astrophysics Data System (ADS)

    Gao, Shengkui; Mondal, Suman B.; Zhu, Nan; Liang, RongGuang; Achilefu, Samuel; Gruev, Viktor

    2015-01-01

    Near infrared (NIR) fluorescence imaging has shown great potential for various clinical procedures, including intraoperative image guidance. However, existing NIR fluorescence imaging systems either have a large footprint or are handheld, which limits their usage in intraoperative applications. We present a compact NIR fluorescence imaging system (NFIS) with an image overlay solution based on threshold detection, which can be easily integrated with a goggle display system for intraoperative guidance. The proposed NFIS achieves compactness, light weight, hands-free operation, high-precision superimposition, and a real-time frame rate. In addition, the miniature and ultra-lightweight light-emitting diode tracking pod is easy to incorporate with NIR fluorescence imaging. Based on experimental evaluation, the proposed NFIS solution has a lower detection limit of 25 nM of indocyanine green at 27 fps and realizes a highly precise image overlay of NIR and visible images of mice in vivo. The overlay error is limited within a 2-mm scale at a 65-cm working distance, which is highly reliable for clinical study and surgical use.

  7. Robust Flutter Analysis for Aeroservoelastic Systems

    NASA Astrophysics Data System (ADS)

    Kotikalpudi, Aditya

    The dynamics of a flexible air vehicle are typically described using an aeroservoelastic model, which accounts for interaction between aerodynamics, structural dynamics, rigid body dynamics and control laws. These subsystems can be individually modeled using a theoretical approach, and experimental data from various ground tests can be incorporated into the models. For instance, a combination of linear finite element modeling and data from ground vibration tests may be used to obtain a validated structural model. Similarly, an aerodynamic model can be obtained using computational fluid dynamics or simple panel methods and partially updated using limited data from wind tunnel tests. In all cases, the models obtained for these subsystems have a degree of uncertainty owing to inherent assumptions in the theory and errors in the experimental data. Suitable uncertain models that account for these uncertainties can be built to study the impact of these modeling errors on the ability to predict the dynamic instability known as flutter. This thesis addresses the methods used for modeling the rigid body dynamics, structural dynamics and unsteady aerodynamics of a blended wing design called the Body Freedom Flutter vehicle. It discusses the procedure used to incorporate data from a wide range of ground-based experiments in the form of model uncertainties within these subsystems. Finally, it provides the mathematical tools for carrying out flutter analysis and sensitivity analysis that account for these model uncertainties. These analyses are carried out for both open-loop and controller-in-the-loop (closed-loop) cases.

  8. Predictive modeling: Solubility of C60 and C70 fullerenes in diverse solvents.

    PubMed

    Gupta, Shikha; Basant, Nikita

    2018-06-01

    The solubility of fullerenes imposes a major limitation on further advanced research and technological development using these novel materials. There have been continued efforts to discover better solvents and to identify the solvent properties that influence the solubility of fullerenes. Here, we have developed QSPR (quantitative structure-property relationship) models based on structural features of diverse solvents and large experimental data sets for predicting the solubility of C60 and C70 fullerenes. The developed models identified the most relevant features of the solvents, encoding the polarizability, polarity and lipophilicity properties that largely influence the solubilizing potential of a solvent for the fullerenes. We also established inter-moiety solubility correlation (IMSC) based quantitative property-property relationship (QPPR) models for predicting the solubility of C60 and C70 fullerenes. The QSPR and QPPR models were internally and externally validated using the most stringent statistical criteria, and the predicted C60 and C70 solubility values in different solvents were in close agreement with the experimental values. In the test sets, the QSPR models yielded high correlations (R² > 0.964) and low root mean squared errors of prediction (RMSEP < 0.25). Comparison with other studies indicated that the proposed models could effectively improve the accuracy of predicting the solubility of C60 and C70 fullerenes in solvents with diverse structures and would be useful in the development of more effective solvents. Copyright © 2018 Elsevier Ltd. All rights reserved.
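
    The modeling workflow in this record can be sketched in a few lines: fit a regressor on solvent descriptors, then report R² and RMSEP on a held-out test set. The descriptors and data below are synthetic placeholders (the study's actual descriptors and solubility data would be substituted).

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split

    # X: rows = solvents, columns = descriptors (e.g., polarizability, polarity,
    # lipophilicity); y: log-solubility of a fullerene in each solvent.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 3))
    y = 1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=120)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print("R2    =", r2_score(y_te, pred))
    print("RMSEP =", mean_squared_error(y_te, pred) ** 0.5)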

  9. Using field inversion to quantify functional errors in turbulence closures

    NASA Astrophysics Data System (ADS)

    Singh, Anand Pratap; Duraisamy, Karthik

    2016-04-01

    A data-informed approach is presented with the objective of quantifying errors and uncertainties in the functional forms of turbulence closure models. The approach creates modeling information from higher-fidelity simulations and experimental data. Specifically, a Bayesian formalism is adopted to infer discrepancies in the source terms of transport equations. A key enabling idea is the transformation of the functional inversion procedure (which is inherently infinite-dimensional) into a finite-dimensional problem in which the distribution of the unknown function is estimated at discrete mesh locations in the computational domain. This allows for the use of an efficient adjoint-driven inversion procedure. The output of the inversion is a full-field discrepancy that provides hitherto inaccessible modeling information. The utility of the approach is demonstrated by applying it to a number of problems including channel flow, shock-boundary layer interactions, and flows with curvature and separation. In all these cases, the posterior model correlates well with the data. Furthermore, it is shown that even if limited data (such as surface pressures) are used, the accuracy of the inferred solution is improved over the entire computational domain. The results suggest that, by directly addressing the connection between physical data and model discrepancies, the field inversion approach materially enhances the value of computational and experimental data for model improvement. The resulting information can be used by the modeler as a guiding tool to design more accurate model forms, or serve as input to machine learning algorithms to directly replace deficient modeling terms.
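
    In its simplest finite-dimensional form, the inversion described here reduces to a regularized least-squares (MAP) estimate of a discrepancy field defined at mesh points. A toy sketch under that reading, with a linear operator H standing in for the linearized solver-plus-adjoint machinery (all names are ours):

    import numpy as np

    # Infer a discrepancy field beta at n mesh points from m sparse observations,
    # with Tikhonov regularization playing the role of the Bayesian prior.
    rng = np.random.default_rng(1)
    n, m = 200, 20
    H = rng.normal(size=(m, n)) / np.sqrt(n)          # stand-in sensitivity operator
    beta_true = np.sin(np.linspace(0, 3 * np.pi, n))  # "true" model discrepancy
    d = H @ beta_true + rng.normal(scale=0.01, size=m)

    lam = 1e-2                                        # prior (regularization) strength
    beta_map = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ d)
    rel_err = np.linalg.norm(beta_map - beta_true) / np.linalg.norm(beta_true)
    print("relative reconstruction error:", rel_err)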

  10. Current Controller for Multi-level Front-end Converter and Its Digital Implementation Considerations on Three-level Flying Capacitor Topology

    NASA Astrophysics Data System (ADS)

    Tekwani, P. N.; Shah, M. T.

    2017-10-01

    This paper presents a behaviour analysis and digital implementation of a current error space phasor based hysteresis controller applied to a three-phase three-level flying capacitor converter as a front-end topology. The controller is self-adaptive in nature and takes the converter from three-level to two-level mode of operation and vice versa, following various trajectories of sector change as the reference dc-link voltage demanded by the load changes. It keeps the current error space phasor within the prescribed hexagonal boundary. During contingencies, the proposed controller takes the converter into overmodulation mode to meet the load demand; once the need is satisfied, the controller brings the converter back into its normal operating range. Simulation results are presented to validate the behaviour of the controller under these contingencies. Unity power factor is assured by the proposed controller, with low current harmonic distortion satisfying the limits prescribed in IEEE 519-2014. The proposed controller is implemented using a TMS320LF2407 16-bit fixed-point digital signal processor. A detailed analysis of the numerical format used to avoid overflow of the sensed variables in the processor and of the per-unit model implementation in software is discussed, and hardware results are presented at various stages of signal conditioning to validate the experimental setup. The control logic for the generation of reference currents is implemented in the TMS320LF2407A using assembly language, and experimental results are also presented for the same.
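
    On 16-bit fixed-point DSPs, the overflow analysis mentioned here is typically a matter of scaling every sensed quantity into a per-unit, Q15 representation with saturation. An illustrative Python sketch of that numeric format (ours, not the authors' assembly code):

    Q15 = 1 << 15                       # 32768 fractional steps per unit

    def to_q15_per_unit(value, base):
        # Normalize a sensed quantity to per-unit, then store it as a saturated
        # Q15 word so that |value| > base cannot overflow the 16-bit register.
        pu = value / base               # per-unit value, ideally in [-1, 1)
        raw = int(round(pu * Q15))
        return max(-Q15, min(Q15 - 1, raw))

    def q15_mul(a, b):
        # Fixed-point multiply: 32-bit intermediate product, shifted back to Q15.
        return (a * b) >> 15

    # Example: a 180 V sensed dc-link voltage on a 230 V base.
    word = to_q15_per_unit(180.0, 230.0)
    print(word, word / Q15)             # Q15 word and its per-unit value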

  11. Experimental demonstration of large capacity WSDM optical access network with multicore fibers and advanced modulation formats.

    PubMed

    Li, Borui; Feng, Zhenhua; Tang, Ming; Xu, Zhilin; Fu, Songnian; Wu, Qiong; Deng, Lei; Tong, Weijun; Liu, Shuang; Shum, Perry Ping

    2015-05-04

    Towards a next-generation optical access network supporting large-capacity data transmission to an enormous number of users over a wider area, we propose a hybrid wavelength-space division multiplexing (WSDM) optical access network architecture utilizing multicore fibers with advanced modulation formats. As a proof of concept, we experimentally demonstrated a WSDM optical access network with duplex transmission over 58.7 km of our developed and fabricated multicore (7-core) fiber. As a cost-effective modulation scheme for the access network, an optical OFDM-QPSK signal was intensity-modulated onto the downstream transmission in the optical line terminal (OLT) and directly detected in the optical network unit (ONU) after MCF transmission. Ten wavelengths with 25 GHz channel spacing from an optical comb generator are employed, and each wavelength is loaded with a 5 Gb/s OFDM-QPSK signal. After amplification, power splitting, and a fan-in multiplexer, the 10-wavelength downstream signal was injected into the six outer-layer cores simultaneously, and the aggregate downstream capacity reaches 300 Gb/s. A sensitivity of -16 dBm was achieved at the 3.8 × 10^-3 bit error ratio (BER) corresponding to the 7% forward error correction (FEC) limit for all wavelengths in every core. An upstream signal from the ONU side has also been generated, and bidirectional transmission in the same core causes negligible performance degradation to the downstream signal. As a universal platform for wired/wireless data access, our proposed architecture provides an additional dimension for high-speed mobile signal transmission, and we hence demonstrated upstream delivery of 20 Gb/s per wavelength with QPSK modulation using the inner core of the MCF, emulating a mobile backhaul service. The IQ-modulated data was coherently detected on the OLT side. A sensitivity of -19 dBm was achieved under the FEC limit, and more than 18 dB of power budget is guaranteed.
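
    A minimal sketch of OFDM-QPSK baseband generation of the kind used for the downstream signal in this record (illustrative parameters, not the paper's exact DSP chain):

    import numpy as np

    def ofdm_qpsk_frame(n_subcarriers=64, n_symbols=10, cp_len=16, seed=0):
        # Map random bits to QPSK, place them on OFDM subcarriers via an IFFT,
        # and prepend a cyclic prefix to each OFDM symbol.
        rng = np.random.default_rng(seed)
        bits = rng.integers(0, 2, size=(n_symbols, n_subcarriers, 2))
        qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)
        time = np.fft.ifft(qpsk, axis=1)               # one OFDM symbol per row
        with_cp = np.concatenate([time[:, -cp_len:], time], axis=1)
        return with_cp.ravel()                         # serialized baseband frame

    frame = ofdm_qpsk_frame()
    print(frame.shape)                                 # (n_symbols * (64 + 16),)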

  12. Boundary conditions for the Swain-Schaad relationship as a criterion for hydrogen tunneling.

    PubMed

    Kohen, Amnon; Jensen, Jan H

    2002-04-17

    Hydrogen quantum mechanical tunneling has been suggested to play a role in a wide variety of hydrogen-transfer reactions in chemistry and enzymology. An important experimental criterion for tunneling is based on the breakdown of the semiclassical prediction for the relationship among the rates of the three isotopes of hydrogen (hydrogen, H; deuterium, D; and tritium, T), denoted the Swain-Schaad relationship. This study examines the breakdown of the Swain-Schaad relationship as a criterion for tunneling. The semiclassical (no-tunneling) limit used heretofore (e.g., 3.34 for the ratio of H/T to D/T kinetic isotope effects) was based on simple theoretical considerations of the diatomic cleavage of a stable covalent bond, for example, a C-H bond. Yet most experimental evidence for a tunneling contribution has come from breakdown of this relationship for a secondary hydrogen, that is, not the hydrogen whose bond is being cleaved but its geminal neighbor. Furthermore, many of the reported experiments have been mixed-labeling experiments, in which a secondary H/T kinetic isotope effect was measured for C-H cleavage, while the D/T secondary effect accompanied C-D cleavage. In experiments of this type, the breakdown of the Swain-Schaad relationship indicates both tunneling and the degree of coupled motion between the primary and secondary hydrogens. We found a new semiclassical limit (e.g., 4.8 for H/T to D/T kinetic isotope effects), whose breakdown can serve as more reliable experimental evidence for tunneling in this common mixed-labeling experiment. We study the tunneling contribution to C-H bond activation, for which many relevant experimental and theoretical data are available; however, these studies can be applied to any hydrogen-transfer reaction. First, an extension of the original approach was applied; then vibrational analysis studies were carried out for a model system (the enzyme alcohol dehydrogenase); finally, the effect of complex kinetics on the observed Swain-Schaad relationship was examined. All three methods yield a new semiclassical limit (4.8), above which tunneling must be considered. Yet it was found that for many cases the original, localized limit (3.34) holds fairly well. For experimental results that fall between the original and new limits (within statistical errors), several methods are suggested that can support or exclude tunneling. These new and clearer criteria provide a basis for future applications of the Swain-Schaad relationship to demonstrate tunneling in complex systems.
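
    For reference, the 3.34 limit quoted here follows from zero-point-energy arguments: if the isotope effects are dominated by stretching-mode zero-point energies, which scale as $\mu^{-1/2}$ with the reduced mass $\mu_{\mathrm{CL}}$ of the C-L oscillator (L = H, D, T), the semiclassical Swain-Schaad exponent is

        \[
        \frac{\ln(k_{\mathrm{H}}/k_{\mathrm{T}})}{\ln(k_{\mathrm{D}}/k_{\mathrm{T}})}
        = \frac{\mu_{\mathrm{CH}}^{-1/2} - \mu_{\mathrm{CT}}^{-1/2}}
               {\mu_{\mathrm{CD}}^{-1/2} - \mu_{\mathrm{CT}}^{-1/2}}
        \approx 3.34,
        \qquad
        \mu_{\mathrm{CL}} = \frac{m_{\mathrm{C}}\, m_{\mathrm{L}}}{m_{\mathrm{C}} + m_{\mathrm{L}}}.
        \]

    Substituting bare atomic masses in place of the reduced masses gives roughly 3.26 instead, which is why slightly different semiclassical limits appear in the literature.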

  13. Stated Choice design comparison in a developing country: recall and attribute nonattendance

    PubMed Central

    2014-01-01

    Background: Experimental designs constitute a vital component of all Stated Choice (aka discrete choice experiment) studies. However, there exists limited empirical evaluation of the statistical benefits of Stated Choice (SC) experimental designs that employ non-zero prior estimates in constructing non-orthogonal constrained designs. This paper statistically compares the performance of contrasting SC experimental designs. In so doing, the effect of respondent literacy on patterns of Attribute non-Attendance (ANA) across fractional factorial orthogonal and efficient designs is also evaluated. The study uses a ‘real’ SC design to model consumer choice of primary health care providers in rural north India. A total of 623 respondents were sampled across four villages in Uttar Pradesh, India. Methods: Comparison of orthogonal and efficient SC experimental designs is based on several measures. Appropriate comparison of each design’s respective efficiency measure is made using D-error results. Standardised Akaike Information Criteria (AIC) are compared between designs and across recall periods. Comparisons control for stated and inferred ANA. Coefficient and standard error estimates are also compared. Results: The added complexity of the efficient SC design, theorised elsewhere, is reflected in higher estimated amounts of ANA among illiterate respondents. However, controlling for ANA using stated and inferred methods consistently shows that the efficient design performs statistically better. Modelling SC data from the orthogonal and efficient designs shows that the model fit of the efficient design outperforms that of the orthogonal design when a 14-day recall period is used. The performance of the orthogonal design, with respect to standardised AIC model fit, is better when longer recall periods of 30 days, 6 months and 12 months are used. Conclusions: The effect of the efficient design’s cognitive demand is apparent among literate and illiterate respondents, although more pronounced among illiterate respondents. This study empirically confirms that relaxing the orthogonality constraint of SC experimental designs increases the information collected in choice tasks, subject to the accuracy of the non-zero priors in the design and the correct specification of a ‘real’ SC recall period. PMID:25386388
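
    The D-error used to compare designs in this record is the determinant-based summary of the design's Fisher information under a multinomial logit model; a minimal sketch (our own illustration, with placeholder attribute levels and priors):

    import numpy as np

    def d_error(design, beta):
        # design: list of (J, K) attribute matrices, one per choice set.
        # beta: (K,) prior parameter vector (non-zero priors -> efficient design).
        K = len(beta)
        info = np.zeros((K, K))
        for X in design:
            u = X @ beta
            p = np.exp(u - u.max()); p /= p.sum()      # MNL choice probabilities
            info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
        return np.linalg.det(np.linalg.inv(info)) ** (1.0 / K)

    # Example: 6 choice sets, 3 alternatives, 2 attributes, small non-zero priors.
    rng = np.random.default_rng(2)
    design = [rng.integers(0, 2, size=(3, 2)).astype(float) for _ in range(6)]
    print(d_error(design, beta=np.array([0.5, -0.3])))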

  14. Achieving the Heisenberg limit in quantum metrology using quantum error correction.

    PubMed

    Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang

    2018-01-08

    Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.
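
    For context, the two precision scalings at stake here for N probes are

        \[
        \delta\hat{\omega}_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}},
        \qquad
        \delta\hat{\omega}_{\mathrm{HL}} \sim \frac{1}{N},
        \]

    the standard quantum limit attainable with independent probes and the Heisenberg limit attainable with entangled ones; the condition found in this record determines when quantum error correction restores the 1/N scaling in the presence of Markovian noise.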

  15. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited camera bit depth are used, only a limited number of quantization levels are available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also set the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique; however, the principles can be applied equally well to other phase-measuring techniques to yield the phase error distribution caused by the camera bit depth.
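
    The effect reported here is easy to reproduce numerically: quantize a synthetic fringe pattern at a given bit depth, extract the phase by Fourier fringe analysis, and compare with the true phase. A self-contained sketch with our own parameter choices:

    import numpy as np

    def phase_error_vs_bit_depth(bits, n=1024, carrier=64):
        # Synthetic 1-D fringe: I(x) = 0.5 + 0.5*cos(2*pi*f0*x + phi(x)).
        x = np.linspace(0, 1, n, endpoint=False)
        phi = 2.0 * np.sin(2 * np.pi * 3 * x)               # "true" phase
        fringe = 0.5 + 0.5 * np.cos(2 * np.pi * carrier * x + phi)
        levels = 2 ** bits
        q = np.round(fringe * (levels - 1)) / (levels - 1)  # camera quantization
        # Fourier fringe analysis: isolate the carrier lobe, shift to baseband.
        F = np.fft.fft(q)
        mask = np.zeros(n, dtype=bool)
        mask[carrier - 16:carrier + 17] = True              # window around carrier
        analytic = np.fft.ifft(np.roll(np.where(mask, F, 0), -carrier))
        err = np.unwrap(np.angle(analytic)) - phi
        err -= err.mean()                                   # remove piston term
        return np.sqrt(np.mean(err ** 2))

    for b in (4, 6, 8, 10, 12):
        print(b, "bits -> RMS phase error", phase_error_vs_bit_depth(b))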

  16. Elucidation of Peptide-Directed Palladium Surface Structure for Biologically Tunable Nanocatalysts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bedford, Nicholas M.; Ramezani-Dakhel, Hadi; Slocik, Joseph M.

    Peptide-enabled synthesis of inorganic nanostructures represents an avenue to access catalytic materials with tunable and optimized properties. This is achieved via peptide complexity and programmability that is missing in traditional ligands for catalytic nanomaterials. Unfortunately, there is limited information available to correlate peptide sequence to particle structure and catalytic activity to date. As such, the application of peptide-enabled nanocatalysts remains limited to trial and error approaches. In this paper, a hybrid experimental and computational approach is introduced to systematically elucidate biomolecule-dependent structure/function relationships for peptide-capped Pd nanocatalysts. Synchrotron X-ray techniques were used to uncover substantial particle surface structural disorder, which was dependent upon the amino acid sequence of the peptide capping ligand. Nanocatalyst configurations were then determined directly from experimental data using reverse Monte Carlo methods and further refined using molecular dynamics simulation, obtaining thermodynamically stable peptide-Pd nanoparticle configurations. Sequence-dependent catalytic property differences for C-C coupling and olefin hydrogenation were then elucidated by identification of the catalytic active sites at the atomic level and quantitative prediction of relative reaction rates. This hybrid methodology provides a clear route to determine peptide-dependent structure/function relationships, enabling the generation of guidelines for catalyst design through rational tailoring of peptide sequences.

  17. Square Wave Voltammetry of TNT at Gold Electrodes Modified with Self-Assembled Monolayers Containing Aromatic Structures

    PubMed Central

    Trammell, Scott A.; Zabetakis, Dan; Moore, Martin; Verbarg, Jasenka; Stenger, David A.

    2014-01-01

    Square wave voltammetry for the reduction of 2,4,6-trinitrotoluene (TNT) was measured in 100 mM potassium phosphate buffer (pH 8) at gold electrodes modified with self-assembled monolayers (SAMs) containing either alkane thiol or aromatic ring thiol structures. At 15 Hz, the electrochemical sensitivity (µA/ppm) was similar for all SAMs tested. However, at 60 Hz, the SAMs containing aromatic structures had a greater sensitivity than the alkane thiol SAM; in fact, the alkane thiol SAM showed a decrease in sensitivity at the higher frequency. When comparing the electrochemical response between simulations and experimental data, a general trend was observed in which most of the SAMs had heterogeneous rate constants for the reduction of TNT that were similar within experimental error, most likely reflecting a common rate-limiting step. In the case of the alkane SAM at the higher frequency, however, the decrease in sensitivity suggests that the rate-limiting step may instead be electron tunneling through the SAM. Our results show that SAMs containing aromatic rings increased the sensitivity for the reduction of TNT when higher frequencies were employed and at the same time suppressed the electrochemical reduction of dissolved oxygen. PMID:25549081

  18. Methodical fitting for mathematical models of rubber-like materials

    NASA Astrophysics Data System (ADS)

    Destrade, Michel; Saccomandi, Giuseppe; Sgura, Ivonne

    2017-02-01

    A great variety of models can describe the nonlinear response of rubber to uniaxial tension, yet an in-depth understanding of the successive stages of large extension is still lacking. We show that the response can be broken down into three stages, which we delineate by relying on a simple formatting of the data, the so-called Mooney plot transform. First, the small-to-moderate regime, where the polymeric chains unfold easily and the Mooney plot is almost linear. Second, the strain-hardening regime, where blobs of bundled chains unfold to stiffen the response, corresponding to the `upturn' of the Mooney plot. Third, the limiting-chain regime, with a sharp stiffening occurring as the chains extend towards their limit. We provide strain-energy functions with terms accounting for each stage that (i) give an accurate local and then global fitting of the data; (ii) are consistent with weak nonlinear elasticity theory; and (iii) can be interpreted in the framework of statistical mechanics. We apply our method to Treloar's classical experimental data and also to some more recent data. Our method not only provides models that describe the experimental data with a very low quantitative relative error, but also shows that the theory of nonlinear elasticity is much more robust than it seemed at first sight.
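
    The Mooney plot transform used to delineate the three stages is a simple reformatting of uniaxial data: the nominal (engineering) stress f at stretch λ is reduced and plotted against 1/λ. A sketch, assuming stress-stretch pairs are available (Treloar's measurements would replace the placeholder data):

    import numpy as np
    import matplotlib.pyplot as plt

    def mooney_plot(stretch, nominal_stress):
        # Mooney transform: reduced stress f* = f / (lambda - lambda**-2),
        # plotted against 1/lambda. Near-linear at small-to-moderate stretch;
        # the 'upturn' marks strain hardening; a sharp rise marks limiting chains.
        lam = np.asarray(stretch, dtype=float)
        f_star = np.asarray(nominal_stress, dtype=float) / (lam - lam ** -2)
        plt.plot(1.0 / lam, f_star, "o-")
        plt.xlabel(r"$1/\lambda$")
        plt.ylabel(r"reduced stress $f^*$")
        plt.show()

    # Placeholder data; replace with measured uniaxial tension values.
    lam = np.linspace(1.2, 7.0, 30)
    f = 0.4 * (lam - lam ** -2) * (1 + 0.02 * (lam - 1) ** 2)
    mooney_plot(lam, f)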

  19. Marine electrical resistivity imaging of submarine groundwater discharge: Sensitivity analysis and application in Waquoit Bay, Massachusetts, USA

    USGS Publications Warehouse

    Henderson, Rory; Day-Lewis, Frederick D.; Abarca, Elena; Harvey, Charles F.; Karam, Hanan N.; Liu, Lanbo; Lane, John W.

    2010-01-01

    Electrical resistivity imaging has been used in coastal settings to characterize fresh submarine groundwater discharge and the position of the freshwater/salt-water interface because of the relation of bulk electrical conductivity to pore-fluid conductivity, which in turn is a function of salinity. Interpretation of tomograms for hydrologic processes is complicated by inversion artifacts, uncertainty associated with survey geometry limitations, measurement errors, and choice of regularization method. Variation of seawater over tidal cycles poses unique challenges for inversion. The capabilities and limitations of resistivity imaging are presented for characterizing the distribution of freshwater and saltwater beneath a beach. The experimental results provide new insight into fresh submarine groundwater discharge at Waquoit Bay National Estuarine Research Reserve, East Falmouth, Massachusetts (USA). Tomograms from the experimental data indicate that fresh submarine groundwater discharge may shut down at high tide, whereas temperature data indicate that the discharge continues throughout the tidal cycle. Sensitivity analysis and synthetic modeling provide insight into resolving power in the presence of a time-varying saline water layer. In general, vertical electrodes and cross-hole measurements improve the inversion results regardless of the tidal level, whereas the resolution of surface arrays is more sensitive to time-varying saline water layer.
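
    The petrophysical link exploited here is commonly written as Archie's law (our notation; field studies typically use a site-calibrated form), relating bulk conductivity $\sigma_b$ to pore-fluid conductivity $\sigma_f$ through porosity $\phi$:

        \[
        \sigma_b = \frac{\phi^{m}}{a}\,\sigma_f,
        \]

    with cementation exponent $m$ and tortuosity factor $a$; because $\sigma_f$ increases with salinity, resistivity tomograms map the freshwater/salt-water distribution.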

  20. Elucidation of peptide-directed palladium surface structure for biologically tunable nanocatalysts.

    PubMed

    Bedford, Nicholas M; Ramezani-Dakhel, Hadi; Slocik, Joseph M; Briggs, Beverly D; Ren, Yang; Frenkel, Anatoly I; Petkov, Valeri; Heinz, Hendrik; Naik, Rajesh R; Knecht, Marc R

    2015-05-26

    Peptide-enabled synthesis of inorganic nanostructures represents an avenue to access catalytic materials with tunable and optimized properties. This is achieved via peptide complexity and programmability that is missing in traditional ligands for catalytic nanomaterials. Unfortunately, there is limited information available to correlate peptide sequence to particle structure and catalytic activity to date. As such, the application of peptide-enabled nanocatalysts remains limited to trial and error approaches. In this paper, a hybrid experimental and computational approach is introduced to systematically elucidate biomolecule-dependent structure/function relationships for peptide-capped Pd nanocatalysts. Synchrotron X-ray techniques were used to uncover substantial particle surface structural disorder, which was dependent upon the amino acid sequence of the peptide capping ligand. Nanocatalyst configurations were then determined directly from experimental data using reverse Monte Carlo methods and further refined using molecular dynamics simulation, obtaining thermodynamically stable peptide-Pd nanoparticle configurations. Sequence-dependent catalytic property differences for C-C coupling and olefin hydrogenation were then elucidated by identification of the catalytic active sites at the atomic level and quantitative prediction of relative reaction rates. This hybrid methodology provides a clear route to determine peptide-dependent structure/function relationships, enabling the generation of guidelines for catalyst design through rational tailoring of peptide sequences.
