Sample records for error mitigation initiative

  1. Human error mitigation initiative (HEMI) : summary report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and are cumbersome to characterize as thorough. An alternative and proposed method begins with leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Future recommended steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  2. Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM

    NASA Astrophysics Data System (ADS)

    Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng

    2015-07-01

    We introduce and investigate the feasibility of a novel iterative blind phase noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation scheme is performed through the combination of frequency-domain symbol decision-aided estimation and the ICI phase noise time-average approximation. An additional initial decision process with a suitable threshold is introduced in order to suppress decision error symbols. Our proposed ICI mitigation scheme is shown to be effective in removing the ICI for a simulated CO-OFDM system with a 16-QAM modulation format. At the cost of slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at a relatively wide laser linewidth and high OSNR.

  3. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    NASA Astrophysics Data System (ADS)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 10^2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  4. Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Hunsberger, Randolph J

    This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
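
    A minimal sketch of the temporal-decomposition idea described above, assuming a generic per-chunk simulation function: the year of time steps is split across worker processes, and each chunk re-simulates a short warm-up window before its own slice so that control states are initialized before the kept results. The function simulate_chunk is a hypothetical stand-in, not part of the OpenDSS API, and the step counts are illustrative.

    ```python
    # Temporal decomposition of a quasi-static time-series (QSTS) run (illustrative only).
    from concurrent.futures import ProcessPoolExecutor

    STEPS_PER_YEAR = 35_040     # 15-minute resolution
    N_WORKERS = 8
    INIT_OVERLAP = 96           # one day of warm-up steps to initialize controls

    def simulate_chunk(start, stop, warmup):
        """Run the time series for steps [start, stop); warm-up steps are simulated but discarded."""
        first = max(0, start - warmup)
        results = []
        for t in range(first, stop):
            # ... advance loads/PV to step t and solve the circuit here ...
            if t >= start:
                results.append((t, None))   # placeholder for voltages / control states
        return results

    def run_parallel():
        bounds = [(i * STEPS_PER_YEAR // N_WORKERS, (i + 1) * STEPS_PER_YEAR // N_WORKERS)
                  for i in range(N_WORKERS)]
        with ProcessPoolExecutor(max_workers=N_WORKERS) as pool:
            futures = [pool.submit(simulate_chunk, a, b, INIT_OVERLAP) for a, b in bounds]
            return [f.result() for f in futures]

    if __name__ == "__main__":
        chunks = run_parallel()
        print(sum(len(c) for c in chunks), "time steps simulated")
    ```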

  5. The cost of adherence mismeasurement in serious mental illness: a claims-based analysis.

    PubMed

    Shafrin, Jason; Forma, Felicia; Scherer, Ethan; Hatch, Ainslie; Vytlacil, Edward; Lakdawalla, Darius

    2017-05-01

    To quantify how adherence mismeasurement affects the estimated impact of adherence on inpatient costs among patients with serious mental illness (SMI). Proportion of days covered (PDC) is a common claims-based measure of medication adherence. Because PDC does not measure medication ingestion, however, it may inaccurately measure adherence. We derived a formula to correct the bias that occurs in adherence-utilization studies resulting from errors in claims-based measures of adherence. We conducted a literature review to identify the correlation between gold-standard and claims-based adherence measures. We derived a bias-correction methodology to address claims-based medication adherence measurement error. We then applied this methodology to a case study of patients with SMI who initiated atypical antipsychotics in 2 large claims databases. Our literature review identified 6 studies of interest. The 4 most relevant ones measured correlations between 0.38 and 0.91. Our preferred estimate implies that the effect of adherence on inpatient spending estimated from claims data would understate the true effect by a factor of 5.3, if there were no other sources of bias. Although our procedure corrects for measurement error, such error also may amplify or mitigate other potential biases. For instance, if adherent patients are healthier than nonadherent ones, measurement error makes the resulting bias worse. On the other hand, if adherent patients are sicker, measurement error mitigates the other bias. Measurement error due to claims-based adherence measures is worth addressing, alongside other more widely emphasized sources of bias in inference.
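
    The correction targets the proportion-of-days-covered measure named above. A minimal sketch of how PDC is typically computed from pharmacy claims is shown below; the fill records and the measurement window are illustrative and not taken from the study's databases.

    ```python
    # Proportion of days covered (PDC) from pharmacy fill records (illustrative sketch).
    from datetime import date, timedelta

    def pdc(fills, window_start, window_end):
        """fills: list of (fill_date, days_supply); returns the fraction of days covered."""
        covered = set()
        for fill_date, days_supply in fills:
            for d in range(days_supply):
                day = fill_date + timedelta(days=d)
                if window_start <= day <= window_end:
                    covered.add(day)
        total_days = (window_end - window_start).days + 1
        return len(covered) / total_days

    fills = [(date(2017, 1, 1), 30), (date(2017, 2, 15), 30)]
    print(pdc(fills, date(2017, 1, 1), date(2017, 3, 31)))   # 60 covered days / 90 -> ~0.67
    ```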

  6. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
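
    Since most errors are reported to come from a small fraction of low-quality bases, one simple flavor of the masking the authors describe can be sketched as below. This is an illustration of the idea only, not the SEM pipeline from the paper, and the quality threshold is an assumption.

    ```python
    # Mask bases whose Phred quality falls below a threshold (illustrative sketch).
    def mask_low_quality(seq, quals, min_phred=20):
        """Replace bases with Phred quality < min_phred by 'N'."""
        return "".join(b if q >= min_phred else "N" for b, q in zip(seq, quals))

    print(mask_low_quality("ACGTACGT", [35, 30, 8, 40, 12, 33, 30, 29]))   # -> ACNTNCGT
    ```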

  7. K-means-clustering-based fiber nonlinearity equalization techniques for 64-QAM coherent optical communication system.

    PubMed

    Zhang, Junfeng; Chen, Wei; Gao, Mingyi; Shen, Gangxiang

    2017-10-30

    In this work, we proposed two k-means-clustering-based algorithms to mitigate fiber nonlinearity for the 64-quadrature amplitude modulation (64-QAM) signal: the training-sequence-assisted k-means algorithm and the blind k-means algorithm. We experimentally demonstrated the proposed k-means-clustering-based fiber nonlinearity mitigation techniques in a 75-Gb/s 64-QAM coherent optical communication system. The proposed algorithms have reduced clustering complexity and low data redundancy; they quickly find appropriate initial centroids and correctly select the cluster centroids to obtain globally optimal solutions for large k values. We measured the bit-error-ratio (BER) performance of the 64-QAM signal at different launch powers into the 50-km single-mode fiber; the proposed techniques greatly mitigate the signal impairments caused by amplified spontaneous emission noise and fiber Kerr nonlinearity, improving the BER performance.
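
    A hedged sketch of the training-sequence-assisted idea follows: known training symbols give initial centroids, k-means then tracks where the distorted 64-QAM constellation actually sits, and each received symbol is decided by its cluster. The toy channel model (a nonlinear phase rotation plus Gaussian noise) and all parameter values are assumptions for illustration, not the authors' experiment.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    levels = np.arange(-7, 8, 2)
    ideal = np.array([x + 1j * y for x in levels for y in levels])     # 64 ideal points

    train_tx = np.tile(ideal, 10)                 # 640 known training symbols
    data_tx = rng.choice(ideal, size=4000)        # payload symbols
    tx = np.concatenate([train_tx, data_tx])

    # Toy channel: amplitude-dependent phase rotation (nonlinearity) plus AWGN.
    rx = tx * np.exp(1j * 0.01 * np.abs(tx) ** 2) + 0.25 * (
        rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

    # Initial centroids from the training portion, one per ideal constellation point.
    init = np.array([[rx[:640][train_tx == s].real.mean(),
                      rx[:640][train_tx == s].imag.mean()] for s in ideal])
    km = KMeans(n_clusters=64, init=init, n_init=1).fit(np.c_[rx.real, rx.imag])
    decisions = ideal[km.predict(np.c_[rx.real, rx.imag])]

    naive = ideal[np.argmin(np.abs(rx[:, None] - ideal[None, :]), axis=1)]
    print("SER, nearest ideal point:", np.mean(naive != tx))
    print("SER, k-means centroids  :", np.mean(decisions != tx))
    ```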

  8. A day in the life of a volunteer incident commander: errors, pressures and mitigating strategies.

    PubMed

    Bearman, Christopher; Bremner, Peter A

    2013-05-01

    To meet an identified gap in the literature, this paper investigates the tasks that a volunteer incident commander needs to carry out during an incident, the errors that can be made and the way that errors are managed. In addition, pressures from goal seduction and situation aversion were also examined. Volunteer incident commanders participated in a two-part interview consisting of a critical decision method interview and discussions about a hierarchical task analysis constructed by the authors. A SHERPA analysis was conducted to further identify potential errors. The results identified the key tasks, errors with extreme risk, pressures from strong situations and mitigating strategies for errors and pressures. The errors and pressures provide a basic set of issues that need to be managed by both volunteer incident commanders and fire agencies. The mitigating strategies identified here suggest some ways that this can be done. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and can mitigate the ranging error without recognition of the channel conditions. The entropy is used to measure the randomness of the received signals, and the FP can be determined as the sample that is followed by a sharp entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by modeling the relationship between the characteristics of the received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
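
    An illustrative sketch of the entropy idea (not the paper's exact estimator): noise-only windows of the received signal look random and have roughly constant histogram entropy, so the first window whose entropy drops well below the noise level marks a first-path candidate. The toy signal, window length and threshold rule are all assumptions.

    ```python
    import numpy as np

    def window_entropy(x, bins=16):
        """Shannon entropy (bits) of an amplitude histogram over one window."""
        hist, _ = np.histogram(x, bins=bins)
        p = hist[hist > 0] / len(x)
        return float(-(p * np.log2(p)).sum())

    def estimate_toa(signal, win=64, noise_windows=100, k=4.0):
        ent = np.array([window_entropy(signal[i:i + win])
                        for i in range(len(signal) - win)])
        # Threshold from the early, assumed noise-only, part of the record.
        thresh = ent[:noise_windows].mean() - k * ent[:noise_windows].std()
        below = np.where(ent < thresh)[0]
        return int(below[0]) if below.size else None

    rng = np.random.default_rng(1)
    signal = rng.standard_normal(800)
    signal[400:440] += 15.0                 # strong first path arriving at sample 400
    print(estimate_toa(signal))             # start index of a window overlapping the arrival
    ```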

  10. SU-E-J-15: A Patient-Centered Scheme to Mitigate Impacts of Treatment Setup Error

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, L; Southern Medical University, Guangzhou; Tian, Z

    2014-06-01

    Purpose: Current Intensity Modulated Radiation Therapy (IMRT) is plan-centered. At each treatment fraction, we position the patient to match the setup in the treatment plan. Inaccurate setup can compromise the delivered dose distribution and hence lead to suboptimal treatments. Moreover, the current setup approach via couch shift under image guidance can correct translational errors, while rotational and deformation errors are hard to address. To overcome these problems, we propose in this abstract a patient-centered scheme to mitigate impacts of treatment setup errors. Methods: In the patient-centered scheme, we first position the patient on the couch approximately matching the planned setup. Our Supercomputing Online Replanning Environment (SCORE) is then employed to design an optimal treatment plan based on the daily patient geometry. It hence mitigates the impacts of treatment setup error and reduces the requirements on setup accuracy. We have conducted simulation studies in 10 head-and-neck (HN) patients to investigate the feasibility of this scheme. Rotational and deformation setup errors were simulated. Specifically, rotations of 1, 3, 5, and 7 degrees were applied in the pitch, roll, and yaw directions; deformation errors were simulated by splitting neck movements into four basic types: rotation, lateral bending, flexion and extension. Setup variation ranges are based on observed numbers in previous studies. Dosimetric impacts of our scheme were evaluated on PTVs and OARs in comparison with the original plan dose with original geometry and the original plan recalculated dose with new setup geometries. Results: With the conventional plan-centered approach, setup error could lead to significant PTV D99 decrease (−0.25∼+32.42%) and contralateral-parotid Dmean increase (−35.09∼+42.90%). The patient-centered approach is effective in mitigating such impacts to 0∼+0.20% and −0.03∼+5.01%, respectively. Computation time is <128 s. Conclusion: A patient-centered scheme is proposed to mitigate setup error impacts using replanning. Its superiority in terms of dosimetric impacts and feasibility has been shown through simulation studies on HN cases.

  11. Realtime mitigation of GPS SA errors using Loran-C

    NASA Technical Reports Server (NTRS)

    Braasch, Soo Y.

    1994-01-01

    The hybrid use of Loran-C with the Global Positioning System (GPS) was shown capable of providing a sole-means of enroute air radionavigation. By allowing pilots to fly direct to their destinations, use of this system is resulting in significant time savings and therefore fuel savings as well. However, a major error source limiting the accuracy of GPS is the intentional degradation of the GPS signal known as Selective Availability (SA). SA-induced position errors are highly correlated and far exceed all other error sources (horizontal position error: 100 meters, 95 percent). Realtime mitigation of SA errors from the position solution is highly desirable. How that can be achieved is discussed. The stability of Loran-C signals is exploited to reduce SA errors. The theory behind this technique is discussed and results using bench and flight data are given.

  12. Diagnostic grade wireless ECG monitoring.

    PubMed

    Garudadri, Harinath; Chi, Yuejie; Baker, Steve; Majumdar, Somdeb; Baheti, Pawan K; Ballard, Dan

    2011-01-01

    In remote monitoring of Electrocardiogram (ECG), it is very important to ensure that the diagnostic integrity of signals is not compromised by sensing artifacts and channel errors. It is also important for the sensors to be extremely power efficient to enable wearable form factors and long battery life. We present an application of Compressive Sensing (CS) as an error mitigation scheme at the application layer for wearable, wireless sensors in diagnostic grade remote monitoring of ECG. In our previous work, we described an approach to mitigate errors due to packet losses by projecting ECG data to a random space and recovering a faithful representation using sparse reconstruction methods. Our contributions in this work are twofold. First, we present an efficient hardware implementation of random projection at the sensor. Second, we validate the diagnostic integrity of the reconstructed ECG after packet loss mitigation. We validate our approach on MIT and AHA databases comprising more than 250,000 normal and abnormal beats using EC57 protocols adopted by the Food and Drug Administration (FDA). We show that sensitivity and positive predictivity of a state-of-the-art ECG arrhythmia classifier is essentially invariant under CS based packet loss mitigation for both normal and abnormal beats even at high packet loss rates. In contrast, the performance degrades significantly in the absence of any error mitigation scheme, particularly for abnormal beats such as Ventricular Ectopic Beats (VEB).
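
    A hedged sketch of the compressive-sensing idea described above, on a toy signal rather than the MIT/AHA records: the sensor applies a random projection, and the receiver recovers the segment by sparse reconstruction (here orthogonal matching pursuit in a DCT dictionary). Dimensions, sparsity level, and the choice of OMP are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    n, m, k = 256, 128, 12                         # segment length, measurements, sparsity
    rng = np.random.default_rng(0)

    # Toy "ECG-like" segment that is sparse in the DCT domain.
    coeffs = np.zeros(n)
    coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    D = idct(np.eye(n), axis=0, norm="ortho")      # inverse-DCT synthesis dictionary
    x = D @ coeffs

    Phi = rng.standard_normal((m, n)) / np.sqrt(m) # random projection done at the sensor
    y = Phi @ x                                    # what gets transmitted

    # Receiver: recover the sparse coefficients from y = (Phi D) c, then rebuild the segment.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi @ D, y)
    x_hat = D @ omp.coef_
    print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```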

  13. New prospective 4D-CT for mitigating the effects of irregular respiratory motion

    NASA Astrophysics Data System (ADS)

    Pan, Tinsu; Martin, Rachael M.; Luo, Dershan

    2017-08-01

    Artifact caused by irregular respiration is a major source of error in 4D-CT imaging. We propose a new prospective 4D-CT to mitigate this source of error without new hardware, software or off-line data-processing on the GE CT scanner. We utilize the cine CT scan in the design of the new prospective 4D-CT. The cine CT scan at each position can be stopped by the operator when an irregular respiration occurs, and resumed when the respiration becomes regular. This process can be repeated at one or multiple scan positions. After the scan, a retrospective reconstruction is initiated on the CT console to reconstruct only the images corresponding to the regular respiratory cycles. The end result is a 4D-CT free of irregular respiration. To prove feasibility, we conducted a phantom and six patient studies. The artifacts associated with the irregular respiratory cycles could be removed from both the phantom and patient studies. A new prospective 4D-CT scanning and processing technique to mitigate the impact of irregular respiration in 4D-CT has been demonstrated. This technique can save radiation dose because the repeat scans are only at the scan positions where an irregular respiration occurs. Current practice is to repeat the scans at all positions. There is no cost to apply this technique because it is applicable on the GE CT scanner without new hardware, software or off-line data-processing.

  14. Methods for Addressing Technology-induced Errors: The Current State.

    PubMed

    Borycki, E; Dexheimer, J W; Hullin Lucay Cossio, C; Gong, Y; Jensen, S; Kaipio, J; Kennebeck, S; Kirkendall, E; Kushniruk, A W; Kuziemsky, C; Marcilly, R; Röhrig, R; Saranto, K; Senathirajah, Y; Weber, J; Takeda, H

    2016-11-10

    The objectives of this paper are to review and discuss the methods that are being used internationally to report on, mitigate, and eliminate technology-induced errors. The IMIA Working Group for Health Informatics for Patient Safety worked together to review and synthesize some of the main methods and approaches associated with technology-induced error reporting, reduction, and mitigation. The work involved a review of the evidence-based literature as well as guideline publications specific to health informatics. The paper presents a rich overview of current approaches, issues, and methods associated with: (1) safe HIT design, (2) safe HIT implementation, (3) reporting on technology-induced errors, (4) technology-induced error analysis, and (5) health information technology (HIT) risk management. The work is based on research from around the world. Internationally, researchers have been developing methods that can be used to identify, report on, mitigate, and eliminate technology-induced errors. Although there remain issues and challenges associated with the methodologies, they have been shown to improve the quality and safety of HIT. Since the first publications documenting technology-induced errors in healthcare in 2005, we have seen researchers develop ways of identifying and addressing these types of errors in a short 10 years. We have also seen organizations begin to use these approaches. Knowledge has been translated into practice in a short ten years, whereas the norm for other research areas is 20 years.

  15. Control of noisy quantum systems: Field-theory approach to error mitigation

    NASA Astrophysics Data System (ADS)

    Hipolito, Rafael; Goldbart, Paul M.

    2016-04-01

    We consider the basic quantum-control task of obtaining a target unitary operation (i.e., a quantum gate) via control fields that couple to the quantum system and are chosen to best mitigate errors resulting from time-dependent noise, which frustrate this task. We allow for two sources of noise: fluctuations in the control fields and fluctuations arising from the environment. We address the issue of control-error mitigation by means of a formulation rooted in the Martin-Siggia-Rose (MSR) approach to noisy, classical statistical-mechanical systems. To do this, we express the noisy control problem in terms of a path integral, and integrate out the noise to arrive at an effective, noise-free description. We characterize the degree of success in error mitigation via a fidelity metric, which characterizes the proximity of the sought-after evolution to ones that are achievable in the presence of noise. Error mitigation is then best accomplished by applying the optimal control fields, i.e., those that maximize the fidelity subject to any constraints obeyed by the control fields. To make connection with MSR, we reformulate the fidelity in terms of a Schwinger-Keldysh (SK) path integral, with the added twist that the "forward" and "backward" branches of the time contour are inequivalent with respect to the noise. The present approach naturally and readily allows the incorporation of constraints on the control fields—a useful feature in practice, given that constraints feature in all real experiments. In addition to addressing the noise average of the fidelity, we consider its full probability distribution. The information content present in this distribution allows one to address more complex questions regarding error mitigation, including, in principle, questions of extreme value statistics, i.e., the likelihood and impact of rare instances of the fidelity and how to harness or cope with their influence. We illustrate this MSR-SK reformulation by considering a model system consisting of a single spin-s degree of freedom (with s arbitrary), focusing on the case of 1/f noise in the weak-noise limit. We discover that optimal error mitigation is accomplished via a universal control field protocol that is valid for all s, from the qubit (i.e., s = 1/2) case to the classical (i.e., s → ∞) limit. In principle, this MSR-SK approach provides a transparent framework for addressing quantum control in the presence of noise for systems of arbitrary complexity.

  16. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security, but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
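
    The interleaving step can be sketched with a simple row/column block interleaver (an assumption; the paper does not specify this particular structure): codeword symbols are written row by row and transmitted column by column, so a burst on the channel is spread across many codewords and each codeword sees only a few, correctable, errors.

    ```python
    import numpy as np

    def interleave(symbols, n_rows):
        """Write symbols row-by-row into an n_rows x n_cols block, read column-by-column."""
        return symbols.reshape(n_rows, -1).T.reshape(-1)

    def deinterleave(symbols, n_rows):
        n_cols = symbols.size // n_rows
        return symbols.reshape(n_cols, n_rows).T.reshape(-1)

    codewords = np.arange(64) // 8          # 8 codewords of 8 symbols (values = codeword id)
    tx = interleave(codewords, n_rows=8)
    tx[20:28] = -1                          # a burst of 8 corrupted symbols on the channel
    rx = deinterleave(tx, n_rows=8)

    # After deinterleaving, the burst is spread out: each codeword loses at most one symbol.
    for c in range(8):
        print(f"codeword {c}: {np.sum(rx[codewords == c] == -1)} corrupted symbol(s)")
    ```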

  17. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting.

    PubMed

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-11-01

    Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
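
    Each pre/post comparison above is a 2x2 contingency table of nurses who did or did not err, tested with Fisher's exact test. A minimal sketch for the syringe-volume verification comparison (16/18 errors pre-intervention vs 11/19 post) follows; the abstract reports p=0.038 for this table.

    ```python
    from scipy.stats import fisher_exact

    # Rows: pre-intervention, post-intervention; columns: erred, did not err.
    table = [[16, 18 - 16],
             [11, 19 - 11]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
    ```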

  18. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting

    PubMed Central

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-01-01

    Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806

  19. Heuristics and Cognitive Error in Medical Imaging.

    PubMed

    Itri, Jason N; Patel, Sohil H

    2018-05-01

    The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.

  20. Methods for Addressing Technology-Induced Errors: The Current State

    PubMed Central

    Dexheimer, J. W.; Hullin Lucay Cossio, C.; Gong, Y.; Jensen, S.; Kaipio, J.; Kennebeck, S.; Kirkendall, E.; Kushniruk, A. W.; Kuziemsky, C.; Marcilly, R.; Röhrig, R.; Saranto, K.; Senathirajah, Y.; Weber, J.; Takeda, H.

    2016-01-01

    Summary Objectives The objectives of this paper are to review and discuss the methods that are being used internationally to report on, mitigate, and eliminate technology-induced errors. Methods The IMIA Working Group for Health Informatics for Patient Safety worked together to review and synthesize some of the main methods and approaches associated with technology-induced error reporting, reduction, and mitigation. The work involved a review of the evidence-based literature as well as guideline publications specific to health informatics. Results The paper presents a rich overview of current approaches, issues, and methods associated with: (1) safe HIT design, (2) safe HIT implementation, (3) reporting on technology-induced errors, (4) technology-induced error analysis, and (5) health information technology (HIT) risk management. The work is based on research from around the world. Conclusions Internationally, researchers have been developing methods that can be used to identify, report on, mitigate, and eliminate technology-induced errors. Although there remain issues and challenges associated with the methodologies, they have been shown to improve the quality and safety of HIT. Since the first publications documenting technology-induced errors in healthcare in 2005, we have seen researchers develop ways of identifying and addressing these types of errors in a short 10 years. We have also seen organizations begin to use these approaches. Knowledge has been translated into practice in a short ten years, whereas the norm for other research areas is 20 years. PMID:27830228

  1. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does it need to be? Companies with high risk, or major consequences, should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, medication administration errors and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments that range from identifying the most likely areas for concern to detailed assessments with human error failure probabilities calculated. Which methodology to use would be based on a variety of factors that include: 1) how people react and act in different industries, and differing expectations based on industry standards, 2) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 3) type and availability of data and 4) how the industry views risk & reliability influences (types of emergencies, contingencies and routine tasks versus cost based concerns). The Human Reliability Assessments should be the first step to reduce, mitigate or eliminate the costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks allows a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  2. Mitigate the impact of transmitter finite extinction ratio using K-means clustering algorithm for 16QAM signal

    NASA Astrophysics Data System (ADS)

    Yu, Miao; Li, Yan; Shu, Tong; Zhang, Yifan; Hong, Xiaobin; Qiu, Jifang; Zuo, Yong; Guo, Hongxiang; Li, Wei; Wu, Jian

    2018-02-01

    A method of recognizing the 16QAM signal based on the k-means clustering algorithm is proposed to mitigate the impact of transmitter finite extinction ratio. Pilot symbols with 0.39% overhead are assigned to serve as initial centroids of the k-means clustering algorithm. Simulation results in a 10-GBaud 16QAM system show that the proposed method obtains higher identification precision than the traditional decision method under finite ER and IQ mismatch. Specifically, the proposed method improves the required OSNR by 5.5 dB, 4.5 dB, 4 dB and 3 dB at the FEC limit with ER = 12 dB, 16 dB, 20 dB and 24 dB, respectively, and the acceptable bias error and IQ mismatch range is widened by 767% and 360% with ER = 16 dB, respectively.

  3. MO-D-213-02: Quality Improvement Through a Failure Mode and Effects Analysis of Pediatric External Beam Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, J; Lukose, R; Bronson, J

    2015-06-15

    Purpose: To conduct a failure mode and effects analysis (FMEA) as per AAPM Task Group 100 on clinical processes associated with teletherapy, and the development of mitigations for processes with identified high risk. Methods: A FMEA was conducted on clinical processes relating to teletherapy treatment plan development and delivery. Nine major processes were identified for analysis. These steps included CT simulation, data transfer, image registration and segmentation, treatment planning, plan approval and preparation, and initial and subsequent treatments. Process tree mapping was utilized to identify the steps contained within each process. Failure modes (FM) were identified and evaluated with a scale of 1–10 based upon three metrics: the severity of the effect, the probability of occurrence, and the detectability of the cause. The analyzed metrics were scored as follows: severity – no harm = 1, lethal = 10; probability – not likely = 1, certainty = 10; detectability – always detected = 1, undetectable = 10. The three metrics were combined multiplicatively to determine the risk priority number (RPN) which defined the overall score for each FM and the order in which process modifications should be deployed. Results: Eighty-nine procedural steps were identified with 186 FM accompanied by 193 failure effects with 213 potential causes. Eighty-one of the FM were scored with a RPN > 10, and mitigations were developed for FM with RPN values exceeding ten. The initial treatment had the most FM (16) requiring mitigation development followed closely by treatment planning, segmentation, and plan preparation with fourteen each. The maximum RPN was 400 and involved target delineation. Conclusion: The FMEA process proved extremely useful in identifying previously unforeseen risks. New methods were developed and implemented for risk mitigation and error prevention. Similar to findings reported for adult patients, the process leading to the initial treatment has an associated high risk.
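
    The scoring above follows the usual FMEA recipe: each failure mode gets severity (S), occurrence (O), and detectability (D) scores on a 1-10 scale and the risk priority number is their product, RPN = S x O x D. A minimal sketch with purely illustrative scores:

    ```python
    # Rank failure modes by risk priority number (illustrative scores, not the study's data).
    failure_modes = [
        {"step": "CT simulation",      "S": 7,  "O": 3, "D": 4},
        {"step": "target delineation", "S": 10, "O": 5, "D": 8},
        {"step": "plan approval",      "S": 6,  "O": 2, "D": 2},
    ]
    for fm in failure_modes:
        fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

    for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
        print(f"{fm['step']:20s} RPN = {fm['RPN']}")
    ```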

  4. National Aeronautics and Space Administration "threat and error" model applied to pediatric cardiac surgery: error cycles precede ∼85% of patient deaths.

    PubMed

    Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S

    2015-02-01

    We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.

  5. Error mitigation for CCSD compressed imager data

    NASA Astrophysics Data System (ADS)

    Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth; Shahriar, Fazlul; Bonev, George

    2009-08-01

    To efficiently use the limited bandwidth available on the downlink from satellite to ground station, imager data is usually compressed before transmission. Transmission introduces unavoidable errors, which are only partially removed by forward error correction and packetization. In the case of the commonly used CCSD Rice-based compression, these errors result in a contiguous sequence of dummy values along scan lines in a band of the imager data. We have developed a method capable of using the image statistics to provide a principled estimate of the missing data. Our method outperforms interpolation yet can be performed fast enough to provide uninterrupted data flow. The estimation of the lost data provides significant value to end users who may use only part of the data, may not have statistical tools, or lack the expertise to mitigate the impact of the lost data. Since the locations of the lost data will be clearly marked as meta-data in the HDF or NetCDF header, experts who prefer to handle error mitigation themselves will be free to use or ignore our estimates as they see fit.
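
    One simple way to use local image statistics for this kind of gap is sketched below under stated assumptions (a linear predictor from the two neighbouring scan lines, fitted on intact columns); this is an illustration of the idea, not the authors' estimator.

    ```python
    import numpy as np

    def fill_scanline(img, row, gap):
        """img: 2-D array; row: damaged scan-line index; gap: slice of missing columns."""
        above, below, target = img[row - 1], img[row + 1], img[row]
        good = np.ones(img.shape[1], dtype=bool)
        good[gap] = False
        # Least-squares fit target ~ a*above + b*below + c over the intact columns.
        A = np.c_[above[good], below[good], np.ones(good.sum())]
        coef, *_ = np.linalg.lstsq(A, target[good], rcond=None)
        est = np.c_[above[gap], below[gap], np.ones(gap.stop - gap.start)] @ coef
        filled = img.copy()
        filled[row, gap] = est
        return filled

    # Toy image with smooth structure; wipe out part of one scan line and re-estimate it.
    yy, xx = np.mgrid[0:64, 0:64]
    img = np.sin(xx / 7.0) + 0.5 * np.cos(yy / 9.0)
    damaged = img.copy()
    damaged[32, 20:40] = 0.0                          # dummy values from a lost packet
    restored = fill_scanline(damaged, 32, slice(20, 40))
    print("max abs error over the gap:", np.abs(restored[32, 20:40] - img[32, 20:40]).max())
    ```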

  6. Enhanced intercarrier interference mitigation based on encoded bit-sequence distribution inside optical superchannels

    NASA Astrophysics Data System (ADS)

    Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero

    2016-10-01

    In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation which allows bit error correction in scenarios with non-flat spectra among the subcarriers composing the multicarrier system and sub-Nyquist carrier spacing. We propose a hybrid ICI mitigation technique which exploits the advantages of signal equalization at both levels: the physical level, for any digital and analog pulse shaping, and the bit-data level, with its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a nondata-aided equalizer applied to each optical subcarrier, and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers regardless of prior subchannel quality assessment, as performed in orthogonal frequency-division multiplexing modulations for the implementation of the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit-error-rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase shift keying Nyquist-wavelength division multiplexing system. After the ICI mitigation, a back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz and roll-off values of 0.1 and 0.4, respectively.

  7. Reprocessing the GRACE-derived gravity field time series based on data-driven method for ocean tide alias error mitigation

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Sneeuw, Nico; Jiang, Weiping

    2017-04-01

    The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling from the satellite orbit induces the aliasing of high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which directly alias into the recovered gravity field. The GRACE satellites are in a non-repeat orbit, which precludes alias error spectral estimation based on the repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, which results in an unevenly sampled time series. In view of the two aspects above, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.

  8. Error Mitigation for Short-Depth Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.

    2017-11-01

    Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero-noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
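
    A minimal sketch of the first scheme (extrapolation to the zero-noise limit): estimate the same expectation value at several deliberately amplified noise scales c and Richardson-extrapolate to c = 0. The noisy "experiment" below is a toy model in which the observable simply decays exponentially with the noise scale; the damping rate, shot count and scale factors are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_expectation(scale, true_value=1.0, eps=0.05, shots=200_000):
        """Toy noisy estimate: exponential damping of the ideal value plus shot noise."""
        mean = true_value * np.exp(-eps * scale)
        return mean + rng.standard_normal() / np.sqrt(shots)

    scales = np.array([1.0, 2.0, 3.0])          # noise amplification factors c
    estimates = np.array([noisy_expectation(c) for c in scales])

    # Richardson-style extrapolation: exact polynomial fit in c, evaluated at c = 0.
    zero_noise = np.polyfit(scales, estimates, deg=len(scales) - 1)[-1]
    print("raw (c=1):", round(estimates[0], 4), " extrapolated:", round(zero_noise, 4), " ideal: 1.0")
    ```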

  9. Microscopic saw mark analysis: an empirical approach.

    PubMed

    Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles

    2015-01-01

    Microscopic saw mark analysis is a well published and generally accepted qualitative analytical method. However, little research has focused on identifying and mitigating potential sources of error associated with the method. The presented study proposes the use of classification trees and random forest classifiers as an optimal, statistically sound approach to mitigate the potential for error of variability and outcome error in microscopic saw mark analysis. The statistical model was applied to 58 experimental saw marks created with four types of saws. The saw marks were made in fresh human femurs obtained through anatomical gift and were analyzed using a Keyence digital microscope. The statistical approach weighed the variables based on discriminatory value and produced decision trees with an associated outcome error rate of 8.62-17.82%. © 2014 American Academy of Forensic Sciences.
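
    A hedged sketch of the classifier part on synthetic data (not the study's 58 experimental marks): class-labelled saw-mark feature vectors are fed to a random forest, and the out-of-bag estimate plays the role of the outcome error rate quoted above. Feature dimensions and class separations are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_per_class, n_features, n_saws = 15, 6, 4

    # Synthetic feature vectors (e.g. kerf width, striation spacing) for four saw types.
    X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
                   for c in range(n_saws)])
    y = np.repeat(np.arange(n_saws), n_per_class)

    forest = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
    print(f"out-of-bag error rate: {1 - forest.oob_score_:.2%}")
    ```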

  10. Runway safety

    DOT National Transportation Integrated Search

    2010-02-12

    Information provided through analysis of runway incursions is useful in many ways. Analysis of the errors made by pilots, controllers, and vehicle drivers is the first step toward developing error mitigation strategies. Furthermore, successful design...

  11. New Developments in Error Detection and Correction Strategies for Critical Applications

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Ken

    2016-01-01

    The presentation will cover a variety of mitigation strategies that were developed for critical applications. An emphasis is placed on strengths and weaknesses per mitigation technique as it pertains to different FPGA device types.

  12. [Understanding mistake-proofing].

    PubMed

    de Saint Maurice, G; Giraud, N; Ausset, S; Auroy, Y; Lenoir, B; Amalberti, R

    2011-01-01

    The mistake-proofing concept often refers to physical devices that prevent actors from making a wrong action. In anaesthesiology, one immediately thinks of the specific design of outlets for medical gases. More generally, the principle of mistake-proofing is to avoid an error by placing knowledge in the world rather than knowledge in the head. As often happens in risk management, healthcare has received knowledge transfers from industry. Computing is changing the concept of mistake-proofing, which was initially based on physical design in industries such as aerospace and automotive. The mistake-proofing concept may be applied to prevention, detection, and mitigation of errors. The forcing functions are a specific part of mistake-proofing: they prevent a wrong action or they force a virtuous one. Grout proposes a little shortcut to identify mistake-proofing devices: "If it is not possible to picture it in action, it is probably not a mistake-proofing device". Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  13. New Developments in Error Detection and Correction Strategies for Critical Applications

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; Label, Ken

    2017-01-01

    The presentation will cover a variety of mitigation strategies that were developed for critical applications. An emphasis is placed on strengths and weaknesses per mitigation technique as it pertains to different Field programmable gate array (FPGA) device types.

  14. A Unified Approach to Measurement Error and Missing Data: Overview and Applications

    ERIC Educational Resources Information Center

    Blackwell, Matthew; Honaker, James; King, Gary

    2017-01-01

    Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model…

  15. An advanced SEU tolerant latch based on error detection

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao

    2018-05-01

    This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or on the error detection circuit may cause a faulty logic state of the circuit. The error detection circuit can detect the upset node in the latch, and the faulty output will be corrected. An upset node in the error detection circuit can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEU effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).

  16. Error vector magnitude based parameter estimation for digital filter back-propagation mitigating SOA distortions in 16-QAM.

    PubMed

    Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A

    2013-08-26

    We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method for DFBP parameter estimation based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) penalty achieved with this method is negligible compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.

  17. Incident Learning and Failure-Mode-and-Effects-Analysis Guided Safety Initiatives in Radiation Medicine

    PubMed Central

    Kapur, Ajay; Goode, Gina; Riehl, Catherine; Zuvic, Petrina; Joseph, Sherin; Adair, Nilda; Interrante, Michael; Bloom, Beatrice; Lee, Lucille; Sharma, Rajiv; Sharma, Anurag; Antone, Jeffrey; Riegel, Adam; Vijeh, Lili; Zhang, Honglai; Cao, Yijian; Morgenstern, Carol; Montchal, Elaine; Cox, Brett; Potters, Louis

    2013-01-01

    By combining incident learning and process failure-mode-and-effects-analysis (FMEA) in a structure-process-outcome framework, we have created a risk profile for our radiation medicine practice and implemented evidence-based risk-mitigation initiatives focused on patient safety. Based on reactive reviews of incidents reported in our departmental incident-reporting system and proactive FMEA, high safety-risk procedures in our paperless radiation medicine process and latent risk factors were identified. Six initiatives aimed at the mitigation of associated severity, likelihood-of-occurrence, and detectability risks were implemented. These were the standardization of care pathways and toxicity grading, pre-treatment-planning peer review, a policy to thwart delay-rushed processes, an electronic whiteboard to enhance coordination, and the use of six sigma metrics to monitor operational efficiencies. The effectiveness of these initiatives over a 3-year period was assessed using process- and outcome-specific metrics within the framework of the department structure. There has been a 47% increase in incident reporting, with no increase in adverse events. Care pathways have been used with a greater than 97% clinical compliance rate. The implementation of peer review prior to treatment planning and use of the whiteboard have provided opportunities for proactive detection and correction of errors. There has been a twofold drop in the occurrence of high-risk procedural delays. Patient treatment start delays are routinely enforced on cases that would have historically been rushed. Z-scores for high-risk procedures have steadily improved from 1.78 to 2.35. The initiatives resulted in sustained reductions of failure-mode risks as measured by a set of evidence-based metrics over a 3-year period. These augment or incorporate many of the published recommendations for patient safety in radiation medicine by translating them to clinical practice. PMID:24380074

  18. An interventional approach for patient and nurse safety: a fatigue countermeasures feasibility study.

    PubMed

    Scott, Linda D; Hofmeister, Nancee; Rogness, Neal; Rogers, Ann E

    2010-01-01

    Studies indicate that extended shifts worked by hospital staff nurses are associated with a higher risk of errors. Long work hours coupled with insufficient sleep and fatigue are even riskier. Although other industries have developed programs to reduce fatigue-related errors and injury, fatigue countermeasures programs for nurses (FCMPN) are lacking. The objective of this study was to evaluate the feasibility of an FCMPN for improving sleep duration and quality while reducing daytime sleepiness and patient care errors. Selected sleep variables, errors, and drowsy driving were evaluated among hospital staff nurses (n = 47) before and after FCMPN implementation. A one-group pretest-posttest repeated-measures approach was used. Participants provided data 2 weeks before the FCMPN, 4 weeks after receiving the intervention, and again at 3 months after the intervention. Most of the nurses experienced poor sleep quality, severe daytime sleepiness, and decreased alertness at work and while operating a motor vehicle. After the FCMPN, significant improvements were noted in sleep duration, sleep quality, alertness, and error prevention. Although significant improvements were not found in daytime sleepiness scores, the severity of daytime sleepiness appeared to decrease. Despite improvements in fatigue management, nurses reported feelings of guilt when engaging in FCMPN activities, especially strategic naps and relieved breaks. Initial findings support the feasibility of using an FCMPN for mitigating fatigue, improving sleep, and reducing errors among hospital staff nurses. In future investigations, the acceptability, efficacy, and effectiveness of FCMPNs can be examined.

  19. Influence of uncertain identification of triggering rainfall on the assessment of landslide early warning thresholds

    NASA Astrophysics Data System (ADS)

    Peres, David J.; Cancelliere, Antonino; Greco, Roberto; Bogaard, Thom A.

    2018-03-01

    Uncertainty in rainfall datasets and landslide inventories is known to have negative impacts on the assessment of landslide-triggering thresholds. In this paper, we perform a quantitative analysis of the impacts of uncertain knowledge of landslide initiation instants on the assessment of rainfall intensity-duration landslide early warning thresholds. The analysis is based on a synthetic database of rainfall and landslide information, generated by coupling a stochastic rainfall generator and a physically based hydrological and slope stability model, and is therefore error-free in terms of knowledge of triggering instants. This dataset is then perturbed according to hypothetical reporting scenarios that allow simulation of possible errors in landslide-triggering instants as retrieved from historical archives. The impact of these errors is analysed jointly using different criteria to single out rainfall events from a continuous series and two typical temporal aggregations of rainfall (hourly and daily). The analysis shows that the impacts of the above uncertainty sources can be significant, especially when errors exceed 1 day or the actual instants follow the erroneous ones. Errors generally lead to underestimated thresholds, i.e. lower than those that would be obtained from an error-free dataset. Potentially, the amount of the underestimation can be enough to induce an excessive number of false positives, hence limiting possible landslide mitigation benefits. Moreover, the uncertain knowledge of triggering rainfall limits the possibility to set up links between thresholds and physio-geographical factors.
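
    Intensity-duration early warning thresholds of the form I = a * D^b are commonly fitted to the triggering events singled out from the rainfall series. The Python sketch below is a generic illustration with synthetic data, not the paper's procedure; it fits the power law by least squares in log-log space, which makes it easy to see how perturbed event durations shift the fitted threshold.

        import numpy as np

        # Hypothetical triggering events: duration D (hours), mean intensity I (mm/h).
        D = np.array([2.0, 6.0, 12.0, 24.0, 48.0])
        I = np.array([18.0, 9.0, 6.5, 4.0, 2.5])

        # Fit I = a * D**b by ordinary least squares in log-log space.
        b, log_a = np.polyfit(np.log(D), np.log(I), 1)
        a = np.exp(log_a)
        print(f"threshold: I = {a:.2f} * D^{b:.2f}")
        # Perturbing the assumed triggering instants changes D (and hence I), which
        # biases a and b; errors of a day or more can push the threshold low.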

  20. Mitigating Multipath Bias Using a Dual-Polarization Antenna: Theoretical Performance, Algorithm Design, and Simulation

    PubMed Central

    Xie, Lin; Cui, Xiaowei; Zhao, Sihao; Lu, Mingquan

    2017-01-01

    It is well known that the multipath effect remains a dominant error source affecting the positioning accuracy of Global Navigation Satellite System (GNSS) receivers. Significant efforts have been made by researchers and receiver manufacturers to mitigate multipath error in the past decades. Recently, multipath mitigation using dual-polarization antennas has become a research hotspot because it provides another degree of freedom to distinguish the line-of-sight (LOS) signal from the LOS and multipath composite signal without extensively increasing the complexity of the receiver. A number of multipath mitigation techniques using dual-polarization antennas have been proposed, and all of them report performance improvements over single-polarization methods. However, due to the unpredictability of multipath, dual-polarization techniques are not always effective, and few studies discuss the conditions under which multipath mitigation using a dual-polarization antenna can outperform that using a single-polarization antenna, which is a fundamental question for dual-polarization multipath mitigation (DPMM) and the design of multipath mitigation algorithms. In this paper we analyze the characteristics of the signal received by a dual-polarization antenna and use maximum likelihood estimation (MLE) to assess the theoretical performance of DPMM in different received signal cases. Based on this assessment we answer the fundamental question and characterize the dual-polarization antenna's capability in mitigating short-delay multipath, the most challenging type of multipath for the majority of mitigation techniques. Considering these effective conditions, we propose a dual-polarization sequential iterative maximum likelihood estimation (DP-SIMLE) algorithm for DPMM. The simulation results verify our theory and show the superior performance of the proposed DP-SIMLE algorithm over the traditional one using only an RHCP antenna. PMID:28208832

  1. Mitigating leakage errors due to cavity modes in a superconducting quantum computer

    NASA Astrophysics Data System (ADS)

    McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.

    2018-07-01

    A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.
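
    To first order, the enclosure modes the authors detune follow the textbook resonance formula for a rectangular cavity, f_mnp = (c/2) * sqrt((m/a)^2 + (n/b)^2 + (p/d)^2). The Python sketch below uses made-up box dimensions, not the simulated package from the paper, to show how shrinking the effective enclosure dimensions, as fencing or pinning effectively does, pushes the lowest mode well above typical qubit frequencies.

        import math

        C = 2.998e8  # speed of light in vacuum, m/s

        def rect_cavity_mode_ghz(a, b, d, m=1, n=1, p=0):
            """Approximate resonance frequency (GHz) of the (m, n, p) mode of an a x b x d box."""
            return (C / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2) / 1e9

        # Hypothetical enclosure dimensions in metres; pins or vias effectively shrink a and b.
        print(rect_cavity_mode_ghz(0.030, 0.030, 0.003))  # ~7.1 GHz: near typical qubit frequencies
        print(rect_cavity_mode_ghz(0.010, 0.010, 0.003))  # ~21 GHz: well above the highest qubit frequency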

  2. The evolution of Crew Resource Management training in commercial aviation

    NASA Technical Reports Server (NTRS)

    Helmreich, R. L.; Merritt, A. C.; Wilhelm, J. A.

    1999-01-01

    In this study, we describe changes in the nature of Crew Resource Management (CRM) training in commercial aviation, including its shift from cockpit to crew resource management. Validation of the impact of CRM is discussed. Limitations of CRM, including a lack of cross-cultural generality, are considered. An overarching framework that stresses error management to increase acceptance of CRM concepts is presented. The error management approach defines the behavioral strategies taught in CRM as error countermeasures that are employed to avoid error, to trap errors committed, and to mitigate the consequences of error.

  3. Total Survey Error & Institutional Research: A Case Study of the University Experience Survey

    ERIC Educational Resources Information Center

    Whiteley, Sonia

    2014-01-01

    Total Survey Error (TSE) is a component of Total Survey Quality (TSQ) that supports the assessment of the extent to which a survey is "fit-for-purpose". While TSQ looks at a number of dimensions, such as relevance, credibility and accessibility, TSE has a more operational focus on accuracy and minimising errors. Mitigating survey…

  4. Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.

    PubMed

    Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B

    2017-01-01

    In a medical environment such as an intensive care unit, there are many possible causes of error, and one important cause is the effect of human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether or not the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic human intellectual tasks, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole medical CPHSystem design models. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by human intellectual tasks.

  5. Soft-Decision-Data Reshuffle to Mitigate Pulsed Radio Frequency Interference Impact on Low-Density-Parity-Check Code Performance

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun David

    2011-01-01

    This presentation briefly discusses a research effort on mitigation techniques for pulsed radio frequency interference (RFI) on a Low-Density-Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle that might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon Limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), and it has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code in the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) was conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques were investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor in LDPC decoding performance appears around CWER = 1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
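
    The report does not spell out the reshuffle scheme itself, but its effect can be illustrated generically: soft-decision log-likelihood ratios (LLRs) that coincide with known pulsed-RFI intervals are neutralized (set to zero, i.e. treated as erasures) before LDPC decoding, so corrupted samples no longer mislead the decoder. The Python sketch below is an assumption-laden illustration of that idea, not the scheme evaluated in the study.

        import numpy as np

        def neutralize_rfi_llrs(llrs, rfi_mask):
            """Zero out soft-decision LLRs that fall inside known pulsed-RFI intervals.
            An LLR of 0 carries no information, so the decoder treats those positions
            as erasures instead of trusting corrupted samples."""
            cleaned = llrs.copy()
            cleaned[rfi_mask] = 0.0
            return cleaned

        rng = np.random.default_rng(0)
        llrs = rng.normal(2.0, 1.0, size=2048)      # hypothetical received LLRs for one codeword
        rfi_mask = np.zeros(2048, dtype=bool)
        rfi_mask[500:620] = True                    # assumed pulse location
        llrs_clean = neutralize_rfi_llrs(llrs, rfi_mask)  # feed llrs_clean to the LDPC decoder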

  6. Single Event Effect Testing of the Analog Devices ADV212

    NASA Technical Reports Server (NTRS)

    Wilcox, Ted; Campola, Michael; Kadari, Madhu; Nadendla, Seshagiri R.

    2017-01-01

    The Analog Devices ADV212 was initially tested for single event effects (SEE) at the Texas A&M University Cyclotron Facility (TAMU) in July of 2013. Testing revealed a sensitivity to device hang-ups classified as single event functional interrupts (SEFI), soft data errors classified as single event upsets (SEU), and, of particular concern, single event latch-ups (SEL). All error types occurred so frequently as to make accurate measurement of the exposure time, and thus total particle fluence, challenging. To mitigate some of the risk posed by single event latch-ups, circuitry was added to the electrical design to detect a high-current event and automatically recycle power and reboot the device. An additional heavy-ion test was scheduled to validate the operation of the recovery circuitry and the continuing functionality of the ADV212 after a substantial number of latch-up events. As a secondary goal, more precise data would be gathered by an improved test method, described in this test report.

  7. Theoretical and experimental studies of turbo product code with time diversity in free space optical communication.

    PubMed

    Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong

    2010-12-20

    In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence seriously degrades system performance. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on an experimental study of channel fading effects, we propose to use turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. However, channel coding alone cannot cope with the burst errors caused by channel fading, so interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths, and the optimum interleaving depth for TPC is determined. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.
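
    Interleaving works by spreading a burst of fading-induced errors across many TPC codewords so that each codeword sees only as many errors as it can correct. The Python sketch below shows a generic row-column block interleaver and its inverse; the depth value is an arbitrary example, not the optimum depth determined in the paper.

        import numpy as np

        def interleave(symbols, depth):
            """Row-column block interleaver: write by rows, read by columns."""
            rows = len(symbols) // depth
            return np.asarray(symbols).reshape(rows, depth).T.ravel()

        def deinterleave(symbols, depth):
            """Inverse operation: write by columns, read by rows."""
            rows = len(symbols) // depth
            return np.asarray(symbols).reshape(depth, rows).T.ravel()

        data = np.arange(24)               # e.g. symbols of one TPC frame
        tx = interleave(data, depth=4)     # depth chosen to exceed the expected fade duration
        assert np.array_equal(deinterleave(tx, depth=4), data)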

  8. Trajectory Design to Mitigate Risk on the Transiting Exoplanet Survey Satellite (TESS) Mission

    NASA Technical Reports Server (NTRS)

    Dichmann, Donald

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will employ a highly eccentric Earth orbit, in 2:1 lunar resonance, reached with a lunar flyby preceded by 3.5 phasing loops. The TESS mission has limited propellant and several orbit constraints. Based on analysis and simulation, we have designed the phasing loops to reduce delta-V and to mitigate risk due to maneuver execution errors. We have automated the trajectory design process and use distributed processing to generate and to optimize nominal trajectories, check constraint satisfaction, and finally model the effects of maneuver errors to identify trajectories that best meet the mission requirements.
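
    One standard way to assess sensitivity to maneuver execution errors is a Monte Carlo perturbation of each burn. The Python sketch below is a toy model with assumed burn sizes and error statistics, not the TESS design tool; it simply reports the spread in total delta-V when proportional magnitude errors and small pointing errors are sampled.

        import numpy as np

        rng = np.random.default_rng(1)

        nominal_dvs = np.array([12.0, 8.5, 3.2])  # hypothetical phasing-loop burns, m/s
        sigma_mag = 0.02                          # assumed 2% proportional magnitude error (1-sigma)
        sigma_point_deg = 1.0                     # assumed pointing error (1-sigma), degrees

        n_trials = 10000
        totals = np.empty(n_trials)
        for i in range(n_trials):
            mags = nominal_dvs * (1.0 + rng.normal(0.0, sigma_mag, nominal_dvs.size))
            theta = np.radians(rng.normal(0.0, sigma_point_deg, nominal_dvs.size))
            # Crude bookkeeping: a pointing error leaves cos(theta) of each burn along the
            # intended direction, so charge mags/cos(theta) to achieve the intended component.
            totals[i] = np.sum(mags / np.cos(theta))
        print(f"total DV: mean {totals.mean():.2f} m/s, 99th percentile {np.percentile(totals, 99):.2f} m/s")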

  9. Estimating IMU heading error from SAR images.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin Walter

    Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

  10. Error Characterization and Mitigation for 16 nm MLC NAND Flash Memory Under Total Ionizing Dose Effect

    NASA Technical Reports Server (NTRS)

    Li, Yue (Inventor); Bruck, Jehoshua (Inventor)

    2018-01-01

    A data device includes a memory having a plurality of memory cells configured to store data values, optionally in accordance with a predetermined rank modulation scheme, and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycle count.

  11. Management of high-risk perioperative systems.

    PubMed

    Dain, Steven

    2006-06-01

    The perioperative system is a complex system that requires people, materials, and processes to come together in a highly ordered and timely manner. However, when working in this high-risk system, even well-organized, knowledgeable, vigilant, and well-intentioned individuals will eventually make errors. All systems need to be evaluated on a continual basis to reduce the risk of errors, make errors more easily recognizable, and provide methods for error mitigation. A simple approach to risk management that may be applied in clinical medicine is discussed.

  12. Mitigating the Impacts of Climate Nonstationarity on Seasonal Streamflow Predictability in the U.S. Southwest

    NASA Astrophysics Data System (ADS)

    Lehner, Flavio; Wood, Andrew W.; Llewellyn, Dagmar; Blatchford, Douglas B.; Goodbody, Angus G.; Pappenberger, Florian

    2017-12-01

    Seasonal streamflow predictions provide a critical management tool for water managers in the American Southwest. In recent decades, persistent prediction errors for spring and summer runoff volumes have been observed in a number of watersheds in the American Southwest. While mostly driven by decadal precipitation trends, these errors also relate to the influence of increasing temperature on streamflow in these basins. Here we show that incorporating seasonal temperature forecasts from operational global climate prediction models into streamflow forecasting models adds prediction skill for watersheds in the headwaters of the Colorado and Rio Grande River basins. Current dynamical seasonal temperature forecasts now show sufficient skill to reduce streamflow forecast errors in snowmelt-driven regions. Such predictions can increase the resilience of streamflow forecasting and water management systems in the face of continuing warming as well as decadal-scale temperature variability and thus help to mitigate the impacts of climate nonstationarity on streamflow predictability.

  13. IPTV multicast with peer-assisted lossy error control

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noise in DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to impulse noise.

  14. 44 CFR 204.51 - Application and approval procedures for a fire management assistance grant.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... part 201 that addresses wildfire risks and mitigation measures; or (ii) Incorporate wildfire mitigation... wildfire risk and contains a wildfire mitigation strategy and related mitigation initiatives. [66 FR 57347...

  15. 44 CFR 204.51 - Application and approval procedures for a fire management assistance grant.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... part 201 that addresses wildfire risks and mitigation measures; or (ii) Incorporate wildfire mitigation... wildfire risk and contains a wildfire mitigation strategy and related mitigation initiatives. [66 FR 57347...

  16. 44 CFR 204.51 - Application and approval procedures for a fire management assistance grant.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... part 201 that addresses wildfire risks and mitigation measures; or (ii) Incorporate wildfire mitigation... wildfire risk and contains a wildfire mitigation strategy and related mitigation initiatives. [66 FR 57347...

  17. 44 CFR 204.51 - Application and approval procedures for a fire management assistance grant.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... part 201 that addresses wildfire risks and mitigation measures; or (ii) Incorporate wildfire mitigation... wildfire risk and contains a wildfire mitigation strategy and related mitigation initiatives. [66 FR 57347...

  18. Trajectory Design Enhancements to Mitigate Risk for the Transiting Exoplanet Survey Satellite (TESS)

    NASA Technical Reports Server (NTRS)

    Dichmann, Donald; Parker, Joel; Nickel, Craig; Lutz, Stephen

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will employ a highly eccentric Earth orbit, in 2:1 lunar resonance, which will be reached with a lunar flyby preceded by 3.5 phasing loops. The TESS mission has limited propellant and several constraints on the science orbit and on the phasing loops. Based on analysis and simulation, we have designed the phasing loops to reduce delta-V (DV) and to mitigate risk due to maneuver execution errors. We have automated the trajectory design process and use distributed processing to generate and optimize nominal trajectories; to check constraint satisfaction; and finally to model the effects of maneuver errors to identify trajectories that best meet the mission requirements.

  19. Methods for increasing cooperation rates for surveys of family forest owners

    Treesearch

    Brett J. Butler; Jaketon H. Hewes; Mary L. Tyrrell; Sarah M. Butler

    2016-01-01

    To maximize the representativeness of results from surveys, coverage, sampling, nonresponse, measurement, and analysis errors must be minimized. Although not a cure-all, one approach for mitigating nonresponse errors is to maximize cooperation rates. In this study, personalizing mailings, token financial incentives, and the use of real stamps were tested for their...

  20. Mitigating Errors of Representation: A Practical Case Study of the University Experience Survey

    ERIC Educational Resources Information Center

    Whiteley, Sonia

    2014-01-01

    The Total Survey Error (TSE) paradigm provides a framework that supports the effective planning of research, guides decision making about data collection and contextualises the interpretation and dissemination of findings. TSE also allows researchers to systematically evaluate and improve the design and execution of ongoing survey programs and…

  1. RETRACTED — PMD mitigation through interleaving LDPC codes with polarization scramblers

    NASA Astrophysics Data System (ADS)

    Han, Dahai; Chen, Haoran; Xi, Lixia

    2012-11-01

    The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this paper as one of the promising FEC codes to achieve better performance. The scrambling speed of the FPS for the LDPC (2040, 1903) code system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of LDPC codes are compared with Reed-Solomon (RS) codes in different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error-correction performance, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes can serve as a substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.

  2. PMD mitigation through interleaving LDPC codes with polarization scramblers

    NASA Astrophysics Data System (ADS)

    Han, Dahai; Chen, Haoran; Xi, Lixia

    2013-09-01

    The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this article as one of the promising FEC codes to achieve better performance. The scrambling speed of the FPS for the LDPC (2040, 1903) code system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of LDPC codes are compared with Reed-Solomon (RS) codes in different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error-correction performance, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes can serve as a substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.

  3. Real-time sensing and gas jet mitigation of VDEs on Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Granetz, R. S.; Wolfe, S. M.; Izzo, V. A.; Reinke, M. L.; Terry, J. L.; Hughes, J. W.; Zhurovich, K.; Whyte, D. G.; Bakhtiari, M.; Wurden, G.

    2006-10-01

    Experiments have been carried out in Alcator C-Mod to test the effectiveness of gas jet disruption mitigation of VDEs with real-time detection and triggering by the C-Mod digital plasma control system (DPCS). The DPCS continuously computes the error in the plasma vertical position from the magnetics diagnostics. When this error exceeds an adjustable preset value, the DPCS triggers the gas jet valve (with a negligible latency time). The high-pressure gas (argon) only takes a few milliseconds to enter the vacuum chamber and begin affecting the plasma, but this is comparable to the VDE timescale on C-Mod. Nevertheless, gas jet injection reduced the halo current, increased the radiated power fraction, and reduced the heating of the divertor compared to unmitigated disruptions, but not quite as well as in earlier mitigation experiments with vertically stable plasmas. Presumably a faster overall response time would be beneficial, and several ways to achieve this will also be discussed.

  4. A Frequency-Domain Multipath Parameter Estimation and Mitigation Method for BOC-Modulated GNSS Signals

    PubMed Central

    Sun, Chao; Feng, Wenquan; Du, Songlin

    2018-01-01

    As multipath is one of the dominant error sources for high-accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier (BOC) modulation, as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques designed for binary phase shift keying (BPSK) signals are not optimal. Currently, non-parametric and parametric approaches have been studied specifically for multipath mitigation of BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability across different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters. The problems with this category are high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
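
    The segmentation-and-averaging step used to suppress noise in the frequency domain resembles Welch-style spectral averaging: the received snapshot is cut into equal segments, each segment is transformed, and the spectra are averaged so that the noise variance drops roughly in proportion to the number of segments. The Python sketch below is a generic illustration of that operation only (segment count and test signal are assumed), not the published DP estimator, which averages channel-transfer-function estimates rather than power spectra.

        import numpy as np

        def welch_averaged_power_spectrum(x, n_segments):
            """Split x into equal segments, FFT each, and average the power spectra.
            Averaging reduces the noise variance roughly in proportion to the number
            of segments, at the cost of frequency resolution."""
            seg_len = len(x) // n_segments
            segments = np.reshape(x[:seg_len * n_segments], (n_segments, seg_len))
            return np.mean(np.abs(np.fft.fft(segments, axis=1)) ** 2, axis=0)

        rng = np.random.default_rng(2)
        n = 4096
        t = np.arange(n)
        signal = np.cos(2 * np.pi * 0.05 * t)            # stand-in for a correlator output
        noisy = signal + rng.normal(0.0, 1.0, size=n)
        spectrum = welch_averaged_power_spectrum(noisy, n_segments=8)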

  5. A Conceptual Framework for Decision-making Support in Uncertainty- and Risk-based Diagnosis of Rare Clinical Cases by Specialist Physicians.

    PubMed

    Santos, Adriano A; Moura, J Antão B; de Araújo, Joseana Macêdo Fechine Régis

    2015-01-01

    Mitigating uncertainty and risks faced by specialist physicians in analysis of rare clinical cases is something desired by anyone who needs health services. The number of clinical cases never seen by these experts, with little documentation, may introduce errors in decision-making. Such errors negatively affect well-being of patients, increase procedure costs, rework, health insurance premiums, and impair the reputation of specialists and medical systems involved. In this context, IT and Clinical Decision Support Systems (CDSS) play a fundamental role, supporting decision-making process, making it more efficient and effective, reducing a number of avoidable medical errors and enhancing quality of treatment given to patients. An investigation has been initiated to look into characteristics and solution requirements of this problem, model it, propose a general solution in terms of a conceptual risk-based, automated framework to support rare-case medical diagnostics and validate it by means of case studies. A preliminary validation study of the proposed framework has been carried out by interviews conducted with experts who are practicing professionals, academics, and researchers in health care. This paper summarizes the investigation and its positive results. These results motivate continuation of research towards development of the conceptual framework and of a software tool that implements the proposed model.

  6. Two-layer critical dimensions and overlay process window characterization and improvement in full-chip computational lithography

    NASA Astrophysics Data System (ADS)

    Sturtevant, John L.; Liubich, Vlad; Gupta, Rachit

    2016-04-01

    Edge placement error (EPE) was a term initially introduced to describe the difference between the predicted pattern contour edge and the design target for a single design layer. Strictly speaking, this quantity is not directly measurable in the fab. What is of vital importance are the relative edge placement errors between different design layers and, in the era of multipatterning, between the constituent mask sublayers of a single design layer. The critical dimensions (CD) and overlay between two layers can be measured in the fab, and there has always been a strong emphasis on control of overlay between design layers. The progress in this realm has been remarkable, accelerated in part at least by the proliferation of multipatterning, which reduces the available overlay budget by introducing a coupling of overlay and CD errors for the target layer. Computational lithography makes possible the full-chip assessment of two-layer edge-to-edge distances and two-layer contact overlap area. We investigate examples of via-metal model-based analysis of CD and overlay errors, for both single patterning and double patterning. For single patterning, we show the advantage of contour-to-contour simulation over contour-to-target simulation, and how the addition of aberrations to the optical models can provide a more realistic CD-overlay process window (PW) for edge placement errors. For double patterning, the interaction of 4-layer CD and overlay errors is very complex, but we illustrate that not only can full-chip verification identify potential two-layer hotspots, but the optical proximity correction engine can also act to mitigate such hotspots and enlarge the joint CD-overlay PW.
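
    As a simplified stand-in for the contour-to-contour checks described above, the Python sketch below treats a via and a metal line as axis-aligned rectangles (dimensions assumed) and computes their overlap area as CD bias and overlay shift vary, the kind of joint CD-overlay quantity evaluated in a process-window analysis.

        def overlap_area(via_cd, metal_width, metal_length, overlay_x, overlay_y):
            """Overlap area (nm^2) of a square via of side via_cd, centred at
            (overlay_x, overlay_y), with a metal rectangle centred at the origin."""
            dx = min(via_cd / 2 + metal_width / 2 - abs(overlay_x), via_cd, metal_width)
            dy = min(via_cd / 2 + metal_length / 2 - abs(overlay_y), via_cd, metal_length)
            return max(dx, 0.0) * max(dy, 0.0)

        # Hypothetical numbers: 40 nm via on a 50 nm x 200 nm metal line.
        print(overlap_area(40, 50, 200, overlay_x=0, overlay_y=0))    # fully landed: 1600 nm^2
        print(overlap_area(40, 50, 200, overlay_x=30, overlay_y=0))   # overlay error reduces overlap
        print(overlap_area(36, 50, 200, overlay_x=30, overlay_y=0))   # CD error compounds it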

  7. [Pre-hospital treatment of ophidian accidents: review, update, and current problems].

    PubMed

    Gil-Alarcón, Guillermo; Sánchez-Villegas, María Del Carmen; Hugo Reynoso, Víctor

    2011-01-01

    Mythic, outdated, ambiguous, and sometimes iatrogenic procedures still remain in pre-hospital and hospital treatment of ophidian accidents. Errors, omissions and ignorance make ophidian accidents appear more dangerous than they truly are, resulting in general public contempt toward snakes. Here we present an updated review of current knowledge on pre-hospital and hospital treatment of ophidian bite incidents, including indications, recommendations and logical errors. We describe an appropriate treatment for native Mexican poisonous snakebites using fabotherapics, based on our experience. Adequate initial pre-hospital and hospital management is crucial for a successful outcome of this medical emergency. We describe the state of the art in snakebite research, discussing those procedures where further research is needed before they can be implemented by the patient, first responders, paramedics, and doctors. We suggest proposals to achieve even more efficient management of fabotherapics based on support networks. Finally, we emphasize prevention as the main subject of snakebite treatment, as it is always more adequate and economical to invest in prevention than to spend on mitigation during emergency and recovery.

  8. Low Probability Tail Event Analysis and Mitigation in BPA Control Area: Task 2 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Shuai; Makarov, Yuri V.; McKinstry, Craig A.

    Task report detailing low probability tail event analysis and mitigation in the BPA control area. A tail event refers to the situation in a power system when unfavorable forecast errors of load and wind are superposed onto fast load and wind ramps, or when non-wind generators fall short of scheduled output, causing the imbalance between generation and load to become very significant.
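
    A minimal way to quantify such tail events is to sample the joint imbalance from assumed error and ramp distributions and estimate how often the deployable reserves are exceeded. The Python sketch below uses made-up distributions purely for illustration; it is not the model used in the BPA study.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 200000

        load_error = rng.normal(0, 150, n)               # MW, hypothetical load forecast error
        wind_error = rng.normal(0, 250, n)               # MW, hypothetical wind forecast error
        ramp       = rng.normal(0, 200, n)               # MW, fast net load/wind ramp
        gen_shortfall = rng.binomial(1, 0.01, n) * 400   # occasional generator falling short

        imbalance = load_error + wind_error + ramp + gen_shortfall
        reserves = 800                                    # MW of deployable balancing reserves (assumed)
        p_tail = np.mean(imbalance > reserves)
        print(f"estimated probability of exceeding reserves: {p_tail:.4%}")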

  9. Mannitol dosing error during interfacility transfer for intracranial emergencies.

    PubMed

    Elliott, Cameron A; MacKenzie, Mark; O'Kelly, Cian J

    2015-11-01

    Mannitol is commonly used to treat elevated intracranial pressure (ICP). The authors analyzed mannitol dosing errors at peripheral hospitals prior to or during transport to tertiary care facilities for intracranial emergencies. They also investigated the appropriateness of mannitol use based on the 2007 Brain Trauma Foundation guidelines for severe traumatic brain injury. The authors conducted a retrospective review of the Shock Trauma Air Rescue Society (STARS) electronic patient database of helicopter medical evacuations in Alberta, Canada, between 2004 and 2012, limited to patients receiving mannitol before transfer. They extracted data on mannitol administration and patient characteristics, including diagnosis, mechanism, Glasgow Coma Scale score, weight, age, and pupil status. A total of 120 patients with an intracranial emergency received a mannitol infusion initiated at a peripheral hospital (median Glasgow Coma Scale score 6; range 3-13). Overall, there was a 22% dosing error rate, which comprised an underdosing rate (<0.25 g/kg) of 8.3% (10 of 120 patients), an overdosing rate (>1.5 g/kg) of 7.5% (9 of 120), and a nonbolus administration rate (>1 hour) of 6.7% (8 of 120). Overall, 72% of patients had a clear indication to receive mannitol as defined by meeting at least one of the following criteria based on Brain Trauma Foundation guidelines: neurological deterioration (11%), severe traumatic brain injury (69%), or pupillary abnormality (25%). Mannitol administration at peripheral hospitals is prone to dosing error. Strategies such as a pretransport checklist may mitigate this risk.

  10. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    NASA Astrophysics Data System (ADS)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  11. Characteristics of BeiDou Navigation Satellite System Multipath and Its Mitigation Method Based on Kalman Filter and Rauch-Tung-Striebel Smoother.

    PubMed

    Zhang, Qiuzhao; Yang, Wei; Zhang, Shubi; Liu, Xin

    2018-01-12

    Global Navigation Satellite System (GNSS) carrier phase measurement over short baselines meets the requirements of deformation monitoring of large structures. However, the carrier phase multipath effect is the main error source in double difference (DD) processing. There are many methods to deal with the multipath errors of Global Positioning System (GPS) carrier phase data. BeiDou navigation satellite System (BDS) multipath mitigation is still a research hotspot because the unique constellation design of BDS makes mitigating multipath effects different from GPS. Multipath error repeats periodically because of its strong correlation with the geometry of the satellites, the reflective surface, and the antenna, which is itself repetitive. We analyzed the characteristics of the orbital periods of BDS satellites, which are consistent with the multipath repeat periods of the corresponding satellites. The results show that the orbital periods and multipath periods for BDS geostationary earth orbit (GEO) and inclined geosynchronous orbit (IGSO) satellites are about one day, but the periods of MEO satellites are about seven days. The Kalman filter (KF) and Rauch-Tung-Striebel smoother (RTSS) were introduced to extract the multipath models from single difference (SD) residuals, combined with the traditional sidereal filter (SF). A wavelet filter and empirical mode decomposition (EMD) were also used to mitigate multipath effects. The experimental results show that the three filtering methods all have an obvious effect on the improvement of baseline accuracy, and the performance of the KF-RTSS method is slightly better than that of the wavelet filter and EMD filter. The baseline vector accuracy in the east, north and up (E, N, U) components with the KF-RTSS method was improved by 62.8%, 63.6%, 62.5% on day of year 280 and 57.3%, 53.4%, 55.9% on day of year 281, respectively.
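
    The model-extraction step can be sketched with a one-dimensional random-walk Kalman filter followed by a Rauch-Tung-Striebel backward pass over the single-difference residual series. The Python code below is a minimal illustration; the noise variances and the synthetic residuals are assumptions, not values from the paper.

        import numpy as np

        def kf_rts_smooth(y, q=1e-4, r=1e-2):
            """Random-walk Kalman filter + RTS smoother for a 1-D residual series.
            y: observed SD residuals; q, r: process / measurement noise variances.
            Returns the smoothed low-frequency component (the multipath model)."""
            n = len(y)
            x_f = np.zeros(n); p_f = np.zeros(n)   # filtered state and variance
            x_p = np.zeros(n); p_p = np.zeros(n)   # one-step predictions
            x, p = y[0], 1.0
            for k in range(n):
                x_p[k], p_p[k] = x, p + q          # predict (state transition = 1)
                gain = p_p[k] / (p_p[k] + r)       # Kalman gain
                x = x_p[k] + gain * (y[k] - x_p[k])
                p = (1.0 - gain) * p_p[k]
                x_f[k], p_f[k] = x, p
            x_s = x_f.copy()
            for k in range(n - 2, -1, -1):         # RTS backward pass
                c = p_f[k] / p_p[k + 1]
                x_s[k] = x_f[k] + c * (x_s[k + 1] - x_p[k + 1])
            return x_s

        t = np.arange(3000)
        truth = 0.01 * np.sin(2 * np.pi * t / 600)                          # synthetic multipath, metres
        resid = truth + np.random.default_rng(4).normal(0, 0.005, t.size)
        multipath_model = kf_rts_smooth(resid)     # subtract from the next session's residuals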

  12. 12 CFR 217.161 - Qualification requirements for incorporation of operational risk mitigants.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... operational risk exposure to reflect qualifying operational risk mitigants if: (1) The Board-regulated...) Qualifying operational risk mitigants. Qualifying operational risk mitigants are: (1) Insurance that: (i) Is... institution assigns the company is assigned a PD equal to or less than 10 basis points; (ii) Has an initial...

  13. 12 CFR 324.161 - Qualification requirements for incorporation of operational risk mitigants.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... operational risk exposure to reflect qualifying operational risk mitigants if: (1) The FDIC-supervised...) Qualifying operational risk mitigants. Qualifying operational risk mitigants are: (1) Insurance that: (i) Is... institution assigns the company is assigned a PD equal to or less than 10 basis points; (ii) Has an initial...

  14. Both channel coding and wavefront correction on the turbulence mitigation of optical communications using orbital angular momentum multiplexing

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Zou, Li; Gong, Longyan; Cheng, Weiwen; Zheng, Baoyu; Chen, Hanwu

    2016-10-01

    A free-space optical (FSO) communication link with multiplexed orbital angular momentum (OAM) modes has been demonstrated to largely enhance the system capacity without a corresponding increase in spectral bandwidth, but the performance of the link is unavoidably degraded by atmospheric turbulence (AT). In this paper, we propose a turbulence mitigation scheme to improve the AT tolerance of the OAM-multiplexed FSO communication link using both channel coding and wavefront correction. In the scheme, we first utilize a wavefront correction method to mitigate the phase distortion, and then we use a channel code to further correct the errors in each OAM mode. The improvement in AT tolerance is discussed by comparing the performance of the link with and without channel coding/wavefront correction. The results show that the bit error rate performance is greatly improved. The detrimental effect of AT on the OAM-multiplexed FSO communication link could be removed by the proposed scheme even in a relatively strong turbulence regime, such as Cn^2 = 3.6 × 10^-14 m^(-2/3).

  15. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study

    NASA Astrophysics Data System (ADS)

    Bowen, S. R.; Nyflot, M. J.; Herrmann, C.; Groh, C. M.; Meyer, J.; Wollenweber, S. D.; Stearns, C. W.; Kinahan, P. E.; Sandison, G. A.

    2015-05-01

    Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10-20%, treatment planning errors were 5-10%, and treatment delivery errors were 5-30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5-10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow. For a limited set of cases, the magnitude of errors was comparable during PET/CT imaging and treatment delivery without motion compensation. Errors were moderately mitigated during PET/CT imaging and significantly mitigated during RT delivery with motion compensation. This dynamic motion phantom end-to-end workflow provides a method for quality assurance of 4D PET/CT-guided radiotherapy, including evaluation of respiratory motion compensation methods during imaging and treatment delivery.
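
    The SUV-threshold step in the workflow reduces to counting voxels above a cutoff and multiplying by the voxel volume. The Python sketch below is a generic illustration on a synthetic image with an assumed 40%-of-maximum threshold; it is not the thresholding rule calibrated in the study.

        import numpy as np

        def suv_threshold_volume(suv, voxel_volume_ml, fraction_of_max=0.40):
            """Target volume (ml) from an SUV image using a percent-of-max threshold."""
            mask = suv >= fraction_of_max * suv.max()
            return mask.sum() * voxel_volume_ml, mask

        # Synthetic 3-D SUV image: background ~1, a hot "lesion" sphere ~8.
        z, y, x = np.mgrid[0:64, 0:64, 0:64]
        lesion = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) <= 6 ** 2
        suv = np.ones((64, 64, 64)) + 7.0 * lesion

        volume_ml, mask = suv_threshold_volume(suv, voxel_volume_ml=0.064)  # 4 mm cubic voxels
        print(f"estimated target volume: {volume_ml:.1f} ml")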

  16. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study.

    PubMed

    Bowen, S R; Nyflot, M J; Herrmann, C; Groh, C M; Meyer, J; Wollenweber, S D; Stearns, C W; Kinahan, P E; Sandison, G A

    2015-05-07

    Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [(18)F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10-20%, treatment planning errors were 5-10%, and treatment delivery errors were 5-30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5-10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow. For a limited set of cases, the magnitude of errors was comparable during PET/CT imaging and treatment delivery without motion compensation. Errors were moderately mitigated during PET/CT imaging and significantly mitigated during RT delivery with motion compensation. This dynamic motion phantom end-to-end workflow provides a method for quality assurance of 4D PET/CT-guided radiotherapy, including evaluation of respiratory motion compensation methods during imaging and treatment delivery.

  17. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study

    PubMed Central

    Bowen, S R; Nyflot, M J; Hermann, C; Groh, C; Meyer, J; Wollenweber, S D; Stearns, C W; Kinahan, P E; Sandison, G A

    2015-01-01

    Effective positron emission tomography/computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by 6 different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy (VMAT) were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses (EUD), and 2%-2mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10–20%, treatment planning errors were 5–10%, and treatment delivery errors were 5–30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5–10% in PET/CT imaging, < 5% in treatment planning, and < 2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow. For a limited set of cases, the magnitude of errors was comparable during PET/CT imaging and treatment delivery without motion compensation. Errors were moderately mitigated during PET/CT imaging and significantly mitigated during RT delivery with motion compensation. This dynamic motion phantom end-to-end workflow provides a method for quality assurance of 4D PET/CT-guided radiotherapy, including evaluation of respiratory motion compensation methods during imaging and treatment delivery. PMID:25884892

  18. Enhancing clinical evidence by proactively building quality into clinical trials.

    PubMed

    Meeker-O'Connell, Ann; Glessner, Coleen; Behm, Mark; Mulinde, Jean; Roach, Nancy; Sweeney, Fergus; Tenaerts, Pamela; Landray, Martin J

    2016-08-01

    Stakeholders across the clinical trial enterprise have expressed concern that the current clinical trial enterprise is unsustainable. The cost and complexity of trials have continued to increase, threatening our ability to generate reliable evidence essential for making appropriate decisions concerning the benefits and harms associated with clinical interventions. Overcoming this inefficiency rests on improving protocol design, trial planning, and quality oversight. The Clinical Trials Transformation Initiative convened a project to evaluate methods to prospectively build quality into the scientific and operational design of clinical trials ("quality-by-design"), such that trials are feasible to conduct and important errors are prevented rather than remediated. A working group evaluated aspects of trial design and oversight and developed the Clinical Trials Transformation Initiative quality-by-design principles document, outlining a series of factors generally relevant to the reliability of trial conclusions and to patient safety. These principles were then applied and further refined during a series of hands-on workshops to evaluate their utility in facilitating proactive, cross-functional dialogue, and decision-making about trial design and planning. Following these workshops, independent qualitative interviews were conducted with 19 workshop attendees to explore the potential challenges for implementing a quality-by-design approach to clinical trials. The Clinical Trials Transformation Initiative project team subsequently developed recommendations and an online resource guide to support implementation of this approach. The Clinical Trials Transformation Initiative quality-by-design principles provide a framework for assuring that clinical trials adequately safeguard participants and provide reliable information on which to make decisions on the effects of treatments. The quality-by-design workshops highlighted the value of active discussions incorporating the different perspectives within and external to an organization (e.g. clinical investigators, research site staff, and trial participants) in improving trial design. Workshop participants also recognized the value of focusing oversight on those aspects of the trial where errors would have a major impact on participant safety and reliability of results. Applying the Clinical Trials Transformation Initiative quality-by-design recommendations and principles should enable organizations to prioritize the most critical determinants of a trial's quality, identify non-essential activities that can be eliminated to streamline trial conduct and oversight, and formulate appropriate plans to define, avoid, mitigate, monitor, and address important errors. © The Author(s) 2016.

  19. Fabrication of spherical mitigation pit on KH2PO4 crystal by micro-milling and modeling of its induced light intensification.

    PubMed

    Cheng, Jian; Chen, Mingjun; Liao, Wei; Wang, Haijun; Xiao, Yong; Li, Mingquan

    2013-07-15

    Micro-machining is the most promising method for KH2PO4 crystal to mitigate surface damage growth in high-power laser systems. In this work, a spherical mitigation pit is fabricated by micro-milling with an efficient machining procedure. The light intensification caused by rear-surface features before and after mitigation is numerically modeled based on the finite-difference time-domain method. The results indicate that the occurrence of total internal reflections is likely responsible for the largest light intensification inside the crystal. For spherical pits after mitigation, the light intensification can be greatly alleviated by preventing the occurrence of total internal reflections. The light intensification caused by a spherical mitigation pit is strongly dependent on the width-depth ratio, and it is suggested that the width-depth ratio of the spherical mitigation pit should be designed to be larger than 5.0 to achieve minimal light intensification for the mitigation of surface damage growth. Laser damage tests on KH2PO4 crystal validate that the laser damage resistance of an initially damaged surface can be restored to near the level of an ideal surface by replacing the initial damage site with a predesigned mitigation pit.

  20. Method for producing damage resistant optics

    DOEpatents

    Hackel, Lloyd A.; Burnham, Alan K.; Penetrante, Bernardino M.; Brusasco, Raymond M.; Wegner, Paul J.; Hrubesh, Lawrence W.; Kozlowski, Mark R.; Feit, Michael D.

    2003-01-01

    The present invention provides a system that mitigates the growth of surface damage in an optic. Damage to the optic is minimally initiated. In an embodiment of the invention, damage sites in the optic are initiated, located, and then treated to stop the growth of the damage sites. The step of initiating damage sites in the optic includes a scan of the optic using a laser to initiate defects. The exact positions of the initiated sites are identified. A mitigation process is performed that locally or globally removes the cause of subsequent growth of the damaged sites.

  1. Characteristics of the BDS Carrier Phase Multipath and Its Mitigation Methods in Relative Positioning

    PubMed Central

    Dai, Wujiao; Shi, Qiang; Cai, Changsheng

    2017-01-01

    The carrier phase multipath effect is one of the most significant error sources in precise positioning with the BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath and found that the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. Modified multipath mitigation methods, including a sidereal filtering algorithm and a multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from that of Global Positioning System (GPS) multipath. Therefore, we add a parameter representing the GEO multipath error in the observation equation to the adjustment model to improve the precision of BDS static baseline solutions. The results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively. PMID:28387744
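
    Sidereal filtering exploits the repeat of the multipath geometry: residuals from a previous repeat period are time-shifted by the orbital repeat interval (about one day for BDS GEO/IGSO, about seven days for MEO) and subtracted from the current session. The Python sketch below is a generic illustration with an assumed 1 Hz observation rate, an assumed 236 s daily advance, and synthetic residuals; it is not the authors' implementation, which would also low-pass filter the previous-day series.

        import numpy as np

        def sidereal_filter(current, previous_day, shift_epochs):
            """Remove repeating multipath: advance the previous repeat period's residuals
            by shift_epochs and subtract them from the current series."""
            model = np.roll(previous_day, -shift_epochs)
            return current - model

        rate_hz = 1.0                               # assumed 1 Hz sampling
        advance_s = 236                             # assumed daily advance (~one sidereal day repeat)
        shift = int(round(advance_s * rate_hz))

        rng = np.random.default_rng(5)
        t = np.arange(86400)
        m = 0.008 * np.sin(2 * np.pi * t / 3000.0)  # synthetic multipath signature, metres
        day1 = m + rng.normal(0, 0.003, t.size)
        day2 = np.roll(m, -shift) + rng.normal(0, 0.003, t.size)   # same pattern, 236 s earlier
        corrected = sidereal_filter(day2, day1, shift)              # multipath largely cancelled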

  2. Characteristics of the BDS Carrier Phase Multipath and Its Mitigation Methods in Relative Positioning.

    PubMed

    Dai, Wujiao; Shi, Qiang; Cai, Changsheng

    2017-04-07

    The carrier phase multipath effect is one of the most significant error sources in the precise positioning of the BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. The modified multipath mitigation methods, including the sidereal filtering algorithm and the multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which differs from Global Positioning System (GPS) multipath behavior. Therefore, we add a parameter representing the GEO multipath error to the observation equation of the adjustment model to improve the precision of BDS static baseline solutions. The results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively.

  3. New Developments in FPGA: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth A.; Pellish, Jonathan

    2016-01-01

    It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Consequently, fail-safe and mitigation strategies will differ per FPGA type. In this session, several design approaches for safe systems will be presented. The session will also explore the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies, so that users can select appropriate mitigation and perform the required trades for system insertion. The presentation will describe three types of FPGA devices and their susceptibilities in radiation environments.

  4. New Developments in FPGA: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; LaBel, Kenneth; Pellish, Jonathan

    2015-01-01

    It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Consequently, fail-safe and mitigation strategies will differ per FPGA type. In this session, several design approaches for safe systems will be presented. The session will also explore the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies, so that users can select appropriate mitigation and perform the required trades for system insertion. The presentation will describe three types of FPGA devices and their susceptibilities in radiation environments.

  5. Comparing Pain and Depressive Symptoms of Chronic Opioid Therapy Patients Receiving Dose Reduction and Risk Mitigation Initiatives With Usual Care.

    PubMed

    Thakral, Manu; Walker, Rod L; Saunders, Kathleen; Shortreed, Susan M; Parchman, Michael; Hansen, Ryan N; Ludman, Evette; Sherman, Karen J; Dublin, Sascha; Von Korff, Michael

    2018-01-01

    Dose reduction and risk mitigation initiatives have been recommended to reduce opioid-related risks among patients receiving chronic opioid therapy (COT), but questions remain over whether these initiatives worsen pain control and quality of life. In 2014 to 2015, we interviewed 1,588 adult COT patients within a health care system in Washington State and compared those who received dose reduction and risk mitigation initiatives in primary care clinics (intervention) with patients in comparable health care settings without initiatives (control). The primary outcomes were pain assessed using the pain, enjoyment, and general activity (PEG) scale, a 3-item scale to assess global pain intensity and interference, with secondary measures including depression (Patient Health Questionnaire-8 scale). Generalized estimating equations for linear regression models were used to estimate differences in mean scores between intervention and control sites. Estimated differences, adjusted for patient characteristics and weighted for nonresponse, between patients at intervention and control clinics were not clinically significant for the PEG (-.03, 95% confidence interval = -.25 to .19) or Patient Health Questionnaire-8 (-.64, 95% confidence interval = -1.19 to -.08). We found no evidence that COT patients in clinics with dose reduction and risk mitigation initiatives had clinically meaningful differences in pain intensity, interference with activities and enjoyment of life, or depressive symptoms compared with control health care settings. This article evaluates the effect of dose reduction and risk mitigation initiatives, such as those recently recommended by the Centers for Disease Control and Prevention, to reduce risks associated with COT on global pain and interference, depressive symptoms, and perceived pain relief and bothersomeness of side effects. Copyright © 2017 The American Pain Society. Published by Elsevier Inc. All rights reserved.

  6. First order error corrections in common introductory physics experiments

    NASA Astrophysics Data System (ADS)

    Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team

    As part of introductory physics courses, students perform various standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is given to their sources. However, paying attention to the factors that give rise to errors helps students build better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of error in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We acknowledge financial support for this project from a Clarion University undergraduate student grant.

  7. Mitigating voltage lead errors of an AC Josephson voltage standard by impedance matching

    NASA Astrophysics Data System (ADS)

    Zhao, Dongsheng; van den Brom, Helko E.; Houtzager, Ernest

    2017-09-01

    A pulse-driven AC Josephson voltage standard (ACJVS) generates calculable AC voltage signals at low temperatures, whereas measurements are performed with a device under test (DUT) at room temperature. The voltage leads cause the output voltage to show deviations that scale with the frequency squared. Error correction mechanisms investigated so far allow the ACJVS to be operational for frequencies up to 100 kHz. In this paper, calculations are presented to deal with these errors in terms of reflected waves. Impedance matching at the source side of the system, which is loaded with a high-impedance DUT, is proposed as an accurate method to mitigate these errors for frequencies up to 1 MHz. Simulations show that the influence of non-ideal component characteristics, such as the tolerance of the matching resistor, the capacitance of the load input impedance, losses in the voltage leads, non-homogeneity in the voltage leads, a non-ideal on-chip connection and inductors between the Josephson junction array and the voltage leads, can be corrected for using the proposed procedures. The results show that an expanded uncertainty of 12 parts in 10^6 (k = 2) at 1 MHz and 0.5 part in 10^6 (k = 2) at 100 kHz is within reach.
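
    The frequency-squared scaling of the voltage-lead error can be illustrated with a toy lossless transmission-line model: a short lead of electrical length beta*l feeding a high-impedance DUT delivers V/cos(beta*l) at the load, so the relative deviation grows roughly as (beta*l)^2/2. The Python sketch below evaluates this simplified model; the 1 m lead length and propagation velocity are illustrative assumptions, not the paper's parameters.

        import numpy as np

        def open_line_error(freq_hz, length_m=1.0, v_prop=2.0e8):
            """Relative voltage deviation of a short lossless lead loaded by a
            high-impedance DUT: |1/cos(beta*l) - 1| ~ (beta*l)**2 / 2."""
            beta_l = 2 * np.pi * freq_hz * length_m / v_prop
            return abs(1.0 / np.cos(beta_l) - 1.0)

        # Illustrative numbers only: the deviation grows ~100x from 100 kHz to 1 MHz.
        for f in (100e3, 1e6):
            print(f"{f / 1e3:.0f} kHz: relative deviation ~ {open_line_error(f):.1e}")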

  8. Cognition in Space Workshop I: Metrics and Models

    NASA Technical Reports Server (NTRS)

    Woolford, Barbara; Fielder, Edna

    2005-01-01

    "Cognition in Space Workshop I: Metrics and Models" was the first in a series of workshops sponsored by NASA to develop an integrated research and development plan supporting human cognition in space exploration. The workshop was held in Chandler, Arizona, October 25-27, 2004. The participants represented academia, government agencies, and medical centers. This workshop addressed the following goal of the NASA Human System Integration Program for Exploration: to develop a program to manage risks due to human performance and human error, specifically ones tied to cognition. Risks range from catastrophic error to degradation of efficiency and failure to accomplish mission goals. Cognition itself includes memory, decision making, initiation of motor responses, sensation, and perception. Four subgoals were also defined at the workshop as follows: (1) NASA needs to develop a human-centered design process that incorporates standards for human cognition, human performance, and assessment of human interfaces; (2) NASA needs to identify and assess factors that increase risks associated with cognition; (3) NASA needs to predict risks associated with cognition; and (4) NASA needs to mitigate risk, both prior to actual missions and in real time. This report develops the material relating to these four subgoals.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana

    The data acquisition and control instrumentation cubicles room of the ITER tokamak will be irradiated with neutrons during the fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I and C products - Fast Controllers. The Virtex-6 is a re-programmable logic device where the configuration is stored in Static RAM (SRAM), functional data are stored in dedicated Block RAM (BRAM) and functional state logic in Flip-Flops. Single Event Upsets (SEU) due to the ionizing radiation of neutrons cause soft errors, unintended changes (bit-flips) to the values stored in state elements of the FPGA. SEU monitoring and soft error repair, when possible, were explored in this work. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with Error Correction Code (ECC) detect and repair the BRAM memories. Proper management of SEU can increase reliability and availability of control instrumentation hardware for nuclear applications. The results of the tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) when irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST in the neighborhood of Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories. (authors)
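
    As a generic illustration of how an ECC-protected memory detects and repairs a single bit flip (the principle behind the BRAM SEU sensors, not the Virtex-6 implementation itself), the sketch below encodes a 4-bit word with a Hamming(7,4) code and corrects a simulated single-event upset from the syndrome.

        def hamming74_encode(d1, d2, d3, d4):
            """Encode 4 data bits into the 7-bit codeword (p1, p2, d1, p3, d2, d3, d4)."""
            p1 = d1 ^ d2 ^ d4
            p2 = d1 ^ d3 ^ d4
            p3 = d2 ^ d3 ^ d4
            return [p1, p2, d1, p3, d2, d3, d4]

        def hamming74_correct(c):
            """Return (corrected codeword, 1-based error position or 0 if none)."""
            c = list(c)
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
            pos = s1 + 2 * s2 + 4 * s3       # syndrome gives the flipped bit position
            if pos:
                c[pos - 1] ^= 1              # repair the single-event upset
            return c, pos

        # Simulate an SEU in bit 5 of a stored word and repair it.
        word = hamming74_encode(1, 0, 1, 1)
        word[4] ^= 1                          # radiation-induced bit flip
        repaired, where = hamming74_correct(word)
        assert repaired == hamming74_encode(1, 0, 1, 1) and where == 5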

  10. Ecological assessment of a wetlands mitigation bank (Phase I: baseline ecological conditions and initial restoration efforts)

    DOT National Transportation Integrated Search

    2001-08-01

    The Tulula Wetlands Mitigation Bank, the first wetlands mitigation bank in the Blue Ridge Province of North Carolina, was created to compensate for losses resulting from highway projects in western North Carolina. The overall objective for the Tulula...

  11. The use of source memory to identify one's own episodic confusion errors.

    PubMed

    Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R

    2001-03-01

    In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.

  12. Satellite-based Calibration of Heat Flux at the Ocean Surface

    NASA Astrophysics Data System (ADS)

    Barron, C. N.; Dastugue, J. M.; May, J. C.; Rowley, C. D.; Smith, S. R.; Spence, P. L.; Gremes-Cordero, S.

    2016-02-01

    Model forecasts of upper ocean heat content and variability on diurnal to daily scales are highly dependent on estimates of heat flux through the air-sea interface. Satellite remote sensing is applied to not only inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. Traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle. Subsequent evolution depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. The COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates) endeavors to correct ocean forecast bias through a responsive error partition among surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using Navy operational global or regional atmospheric forcing. COFFEE addresses satellite-calibration of surface fluxes to estimate surface error covariances and links these to the ocean interior. Experiment cases combine different levels of flux calibration with different assimilation alternatives. The cases may use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is equally applicable to other regions. These approaches within a 3DVAR application are anticipated to be useful for global and larger regional domains where a full 4DVAR methodology may be cost-prohibitive.
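
    The flux error covariances mentioned above can be estimated, in the simplest case, as the sample covariance of recent differences between forecast and satellite-calibrated flux terms. The numpy sketch below is a generic illustration; the variable layout and window length are assumptions, not the project's actual configuration.

        import numpy as np

        def flux_error_covariance(forecast_fluxes, calibrated_fluxes):
            """Sample covariance of flux errors over a recent hindcast window.

            forecast_fluxes, calibrated_fluxes : arrays of shape (n_times, n_terms),
                e.g. columns = shortwave, longwave, sensible, latent heat flux.
            Returns the (n_terms, n_terms) covariance used to project heat-flux
            corrections into the forecast period.
            """
            errors = np.asarray(forecast_fluxes) - np.asarray(calibrated_fluxes)
            return np.cov(errors, rowvar=False)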

  13. An ensemble-ANFIS based uncertainty assessment model for forecasting multi-scalar standardized precipitation index

    NASA Astrophysics Data System (ADS)

    Ali, Mumtaz; Deo, Ravinesh C.; Downs, Nathan J.; Maraseni, Tek

    2018-07-01

    Forecasting drought by means of the World Meteorological Organization-approved Standardized Precipitation Index (SPI) is considered a fundamental task to support socio-economic initiatives and effectively mitigate climate risk. This study aims to develop a robust drought modelling strategy to forecast multi-scalar SPI in drought-rich regions of Pakistan, where statistically significant lagged combinations of antecedent SPI are used to forecast future SPI. With an ensemble-Adaptive Neuro Fuzzy Inference System ('ensemble-ANFIS') executed via a 10-fold cross-validation procedure, a model is constructed from randomly partitioned input-target data. This yields 10-member ensemble-ANFIS outputs; judged by mean square error and correlation coefficient in the training period, the optimal forecasts are attained by averaging the member simulations, and the model is benchmarked against the M5 Model Tree and Minimax Probability Machine Regression (MPMR). The results show the proposed ensemble-ANFIS model's precision was notably better (in terms of the root mean square error and mean absolute error, as well as the Willmott, Nash-Sutcliffe and Legates-McCabe indices) for the 6- and 12-month compared to the 3-month forecasts, as verified by the largest proportion of errors registering in the smallest error band. Applying the 10-member simulations, the ensemble-ANFIS model was validated for its ability to forecast severity (S), duration (D) and intensity (I) of drought (including the error bound). This enabled uncertainty between multi-models to be rationalized more efficiently, leading to a reduction in forecast error caused by stochasticity in drought behaviours. Through cross-validations at diverse sites, a geographic signature in modelled uncertainties was also calculated. Considering the superiority of the ensemble-ANFIS approach and its ability to generate uncertainty-based information, the study advocates the versatility of a multi-model approach for drought-risk forecasting and its prime importance for estimating drought properties over confidence intervals to generate better information for strategic decision-making.
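
    The ensemble-averaging step described above can be sketched generically: train one member per random train/validation partition and average the member forecasts. ANFIS itself is not reproduced here; a plain least-squares fit on the lagged SPI inputs stands in for each member purely to keep the sketch self-contained and runnable.

        import numpy as np

        def train_member(X, y):
            """Stand-in for one ANFIS member: ordinary least squares on lagged SPI inputs."""
            A = np.c_[X, np.ones(len(X))]
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            return w

        def ensemble_forecast(X, y, X_new, n_members=10, seed=0):
            """Average the forecasts of n_members models trained on random 90/10 partitions."""
            rng = np.random.default_rng(seed)
            preds = []
            for _ in range(n_members):
                idx = rng.permutation(len(X))
                train = idx[: int(0.9 * len(X))]
                w = train_member(X[train], y[train])
                preds.append(np.c_[X_new, np.ones(len(X_new))] @ w)
            return np.mean(preds, axis=0)   # ensemble-averaged SPI forecast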

  14. Wiener filter preprocessing for OFDM systems in the presence of both nonstationary and stationary phase noises

    NASA Astrophysics Data System (ADS)

    Zhong, Ke; Lei, Xia; Li, Shaoqian

    2013-12-01

    A statistics-based intercarrier interference (ICI) mitigation algorithm is proposed for orthogonal frequency division multiplexing systems in the presence of both nonstationary and stationary phase noises. By utilizing the statistics of the phase noise, which can be obtained from measurements or data sheets, a Wiener filter preprocessing algorithm for ICI mitigation is proposed. The proposed algorithm can be regarded as a performance-improving technique for previous research on phase noise cancelation. Simulation results show that the proposed algorithm can effectively mitigate ICI and lower the error floor, and therefore significantly improve the performance of previous phase noise cancelation schemes, especially in the presence of severe phase noise.
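
    A frequency-domain Wiener filter built from a priori statistics can be sketched as follows: with signal power S_x and phase-noise-induced ICI power S_n per subcarrier, the filter gain is H = S_x / (S_x + S_n). This is a generic illustration of the preprocessing idea, not the paper's derivation for combined nonstationary and stationary phase noise.

        import numpy as np

        def wiener_preprocess(received, signal_psd, noise_psd):
            """Apply a per-bin Wiener gain before standard OFDM demodulation.

            received   : time-domain samples of one OFDM symbol
            signal_psd : a priori signal power per FFT bin
            noise_psd  : phase-noise/ICI power per FFT bin (from measurements or data sheets)
            """
            R = np.fft.fft(received)
            H = signal_psd / (signal_psd + noise_psd)   # classic Wiener gain
            return np.fft.ifft(H * R)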

  15. Mitigation of tropospheric InSAR phase artifacts through differential multisquint processing

    NASA Technical Reports Server (NTRS)

    Chen, Curtis W.

    2004-01-01

    We propose a technique for mitigating tropospheric phase errors in repeat-pass interferometric synthetic aperture radar (InSAR). The mitigation technique is based upon the acquisition of multisquint InSAR data. On each satellite pass over a target area, the radar instrument will acquire images from multiple squint (azimuth) angles, from which multiple interferograms can be formed. The diversity of viewing angles associated with the multisquint acquisition can be used to solve for two components of the 3-D surface displacement vector as well as for the differential tropospheric phase. We describe a model for the performance of the multisquint technique, and we present an assessment of the performance expected.

  16. New Developments in FPGA Devices: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth; Pellish, Jonathan

    2016-01-01

    It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Consequently, fail-safe and mitigation strategies will differ per FPGA type. In this session, several design approaches for safe systems will be presented. The session will also explore the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies, so that users can select appropriate mitigation and perform the required trades for system insertion. The presentation will describe three types of FPGA devices and their susceptibilities in radiation environments.

  17. 44 CFR 79.7 - Offers and appeals under the SRL program.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program FLOOD MITIGATION GRANTS § 79.7 Offers and appeals under the SRL program. (a) Consultation. States and communities... mitigation activity. These consultations shall be initiated in the early stages of the project development...

  18. 44 CFR 79.7 - Offers and appeals under the SRL program.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program FLOOD MITIGATION GRANTS § 79.7 Offers and appeals under the SRL program. (a) Consultation. States and communities... mitigation activity. These consultations shall be initiated in the early stages of the project development...

  19. 44 CFR 79.7 - Offers and appeals under the SRL program.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program FLOOD MITIGATION GRANTS § 79.7 Offers and appeals under the SRL program. (a) Consultation. States and communities... mitigation activity. These consultations shall be initiated in the early stages of the project development...

  20. 44 CFR 79.7 - Offers and appeals under the SRL program.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program FLOOD MITIGATION GRANTS § 79.7 Offers and appeals under the SRL program. (a) Consultation. States and communities... mitigation activity. These consultations shall be initiated in the early stages of the project development...

  1. Teachers' Initial and Sustained Use of an Instructional Assistive Technology Tool: Exploring the Mitigating Factors

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Flanagan, Sara; Heutsche, Anne; Okolo, Cynthia M.; Englert, Carol Sue

    2011-01-01

    This qualitative research project explored factors that mitigated teachers implementing an instructional assistive technology and factors that mitigated its sustained use. Specifically, it explored these issues in relation to a social studies based instructional assistive technology (Virtual History Museum [VHM]), which was originally implemented…

  2. Cleared for the visual approach: Human factor problems in air carrier operations

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    In the study described herein, a set of 353 ASRS reports of unique aviation occurrences significantly involving visual approaches was examined to identify hazards and pitfalls embedded in the visual approach procedure and to consider operational practices that might help avoid future mishaps. Analysis of the report set identified nine aspects of the visual approach procedure that appeared to be predisposing conditions for inducing or exacerbating the effects of operational errors by flight crew members or controllers. Predisposing conditions, errors, and operational consequences of the errors are discussed. In summary, operational policies that might mitigate the problems are examined.

  3. Investigation of adaptive filtering and MDL mitigation based on space-time block-coding for spatial division multiplexed coherent receivers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi

    2017-07-01

    In this paper, we explored the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, in which the weight matrices of the frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm can achieve a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9 and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM) and 64-QAM at their respective bit-error-rate (BER) targets with minimum-mean-square-error (MMSE) equalization.
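
    The decision-directed RLS update used for the FDE weight matrices follows the standard recursive least-squares recursion. The sketch below shows one complex-valued RLS step for a single tap vector; it illustrates the algorithm generically rather than reproducing the STBC-assisted MIMO structure of the paper.

        import numpy as np

        def rls_update(w, P, x, d, lam=0.99):
            """One recursive least squares step (complex-valued).

            w   : current weight vector (n,)
            P   : inverse correlation matrix estimate (n, n)
            x   : regressor vector for this sample (n,)
            d   : desired (decision-directed) symbol
            lam : forgetting factor
            """
            Px = P @ x
            k = Px / (lam + np.vdot(x, Px))          # gain vector
            e = d - np.vdot(w, x)                    # a priori error
            w = w + k * np.conj(e)                   # weight update
            P = (P - np.outer(k, np.conj(x)) @ P) / lam
            return w, P, e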

  4. Normal accidents: human error and medical equipment design.

    PubMed

    Dain, Steven

    2002-01-01

    High-risk systems, which are typical of our technologically complex era, include not just nuclear power plants but also hospitals, anesthesia systems, and the practice of medicine and perfusion. In high-risk systems, no matter how effective safety devices are, some types of accidents are inevitable because the system's complexity leads to multiple and unexpected interactions. It is important for healthcare providers to apply a risk assessment and management process to decisions involving new equipment and procedures or staffing matters in order to minimize the residual risks of latent errors, which are amenable to correction because of the large window of opportunity for their detection. This article provides an introduction to basic risk management and error theory principles and examines ways in which they can be applied to reduce and mitigate the inevitable human errors that accompany high-risk systems. The article also discusses "human factor engineering" (HFE), the process that is used to design equipment/human interfaces in order to mitigate design errors. The HFE process involves interaction between designers and end users to produce a series of continuous refinements that are incorporated into the final product. The article also examines common design problems encountered in the operating room that may predispose operators to commit errors resulting in harm to the patient. While recognizing that errors and accidents are unavoidable, organizations that function within a high-risk system must adopt a "safety culture" that anticipates problems and acts aggressively through an anonymous, "blameless" reporting mechanism to resolve them. We must continuously examine and improve the design of equipment and procedures, personnel, supplies and materials, and the environment in which we work to reduce error and minimize its effects. Healthcare providers must take a leading role in the day-to-day management of the "Perioperative System" and be a role model in promoting a culture of safety in their organizations.

  5. Suffering in Silence: Medical Error and its Impact on Health Care Providers.

    PubMed

    Robertson, Jennifer J; Long, Brit

    2018-04-01

    All humans are fallible. Because physicians are human, unintentional errors unfortunately occur. While unintentional medical errors have an impact on patients and their families, they may also contribute to adverse mental and emotional effects on the involved provider(s). These may include burnout, lack of concentration, poor work performance, posttraumatic stress disorder, depression, and even suicidality. The objectives of this article are to 1) discuss the impact medical error has on involved provider(s), 2) provide potential reasons why medical error can have a negative impact on provider mental health, and 3) suggest solutions for providers and health care organizations to recognize and mitigate the adverse effects medical error has on providers. Physicians and other providers may feel a variety of adverse emotions after medical error, including guilt, shame, anxiety, fear, and depression. It is thought that the pervasive culture of perfectionism and individual blame in medicine plays a considerable role toward these negative effects. In addition, studies have found that despite physicians' desire for support after medical error, many physicians feel a lack of personal and administrative support. This may further contribute to poor emotional well-being. Potential solutions in the literature are proposed, including provider counseling, learning from mistakes without fear of punishment, discussing mistakes with others, focusing on the system versus the individual, and emphasizing provider wellness. Much of the reviewed literature is limited in terms of an emergency medicine focus or even regarding physicians in general. In addition, most studies are survey- or interview-based, which limits objectivity. While additional, more objective research is needed in terms of mitigating the effects of error on physicians, this review may help provide insight and support for those who feel alone in their attempt to heal after being involved in an adverse medical event. Unintentional medical error will likely always be a part of the medical system. However, by focusing on provider as well as patient health, we may be able to foster resilience in providers and improve care for patients in healthy, safe, and constructive environments. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Human Factors for More Usable and Safer Health Information Technology: Where Are We Now and Where do We Go from Here?

    PubMed

    Kushniruk, A; Nohr, C; Borycki, E

    2016-11-10

    A wide range of human factors approaches have been developed and adapted to healthcare for detecting and mitigating negative unexpected consequences associated with technology in healthcare (i.e. technology-induced errors). However, greater knowledge and wider dissemination of human factors methods are needed to ensure more usable and safer health information technology (IT) systems. This paper reports on work done by the IMIA Human Factors Working Group and discusses some successful approaches that have been applied in using human factors to mitigate negative unintended consequences of health IT. The paper addresses challenges in bringing human factors approaches into mainstream health IT development. A framework for bringing human factors into the improvement of health IT is described that involves a multi-layered systematic approach to detecting technology-induced errors at all stages of an IT system development life cycle (SDLC). Such an approach has been shown to be needed and can lead to reduced risks associated with the release of health IT systems into live use with mitigation of risks of negative unintended consequences. Negative unintended consequences of the introduction of IT into healthcare (i.e. potential for technology-induced errors) continue to be reported. It is concluded that methods and approaches from the human factors and usability engineering literatures need to be more widely applied, both in the vendor community and in local and regional hospital and healthcare settings. This will require greater efforts at dissemination and knowledge translation, as well as greater interaction between the academic and vendor communities.

  7. Mitigating TCP Degradation over Intermittent Link Failures using Intermediate Buffers

    DTIC Science & Technology

    2006-06-01

    signal strength [10]. The Preemptive routing in ad hoc networks [10] attempts to predict that a route will fail by looking at the signal power of the...when the error rate is high there are non-optimal backoffs in the Retransmission Timeout. And third, in the high error situation the slow start...network storage follows. In Beck et al. [3], Logistical Networking is outlined as a means of storing data throughout the network. End to end

  8. Anonymous broadcasting of classical information with a continuous-variable topological quantum code

    NASA Astrophysics Data System (ADS)

    Menicucci, Nicolas C.; Baragiola, Ben Q.; Demarie, Tommaso F.; Brennen, Gavin K.

    2018-03-01

    Broadcasting information anonymously becomes more difficult as surveillance technology improves, but remarkably, quantum protocols exist that enable provably traceless broadcasting. The difficulty is making scalable entangled resource states that are robust to errors. We propose an anonymous broadcasting protocol that uses a continuous-variable surface-code state that can be produced using current technology. High squeezing enables large transmission bandwidth and strong anonymity, and the topological nature of the state enables local error mitigation.

  9. Aviation.

    PubMed

    Karl, Richard C

    2009-01-01

    An increased awareness of the need for safety in medicine in general and in surgery in particular has prompted comparisons between the cockpit and the operating room. These comparisons seem to make sense but tend to be oversimplified. Attempts in healthcare to mimic programs that have been credited for the safety of commercial aviation have met with varying results. The risk here is that oversimplified application of an aviation model may result in the abandonment of good ideas in medicine. This paper describes in more depth the differences between medicine and commercial aviation: from the hiring process, through initial operating experience, recurrent training, and the management of emergencies. These programs add up to a cultural difference. Aviation assumes that personnel are subject to mistake making and that systems and culture need to be constructed to catch and mitigate error; medicine is still focused on the perfection of each individual's performance. The implications of these differences are explored.

  10. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    PubMed Central

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  11. Trends in Health Information Technology Safety: From Technology-Induced Errors to Current Approaches for Ensuring Technology Safety

    PubMed Central

    2013-01-01

    Objectives Health information technology (HIT) research findings suggested that new healthcare technologies could reduce some types of medical errors while at the same time introducing classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, on models and frameworks used to understand these new types of errors, on monitoring of such errors, and on methods that can be used to prevent these errors. More research will be needed to better understand and mitigate these types of errors. PMID:23882411

  12. Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error

    NASA Astrophysics Data System (ADS)

    Hosseinyalamdary, S.; Peter, M.

    2017-05-01

    In urban canyons where GNSS signals are blocked by buildings, the accuracy of the measured position deteriorates significantly. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches. In map matching, the measured position is projected onto the road links (centerlines), and the lateral error of the measured position is reduced. With advances in data acquisition, high definition maps that contain extra information, such as road lanes, are generated. These road lanes can be utilized to mitigate the positional error and improve the accuracy of the position. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark. The position is measured by a smartphone's GPS receiver, images are taken with the smartphone's camera, and the ground truth is provided by the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of the measured GPS position. The error in the measured GPS position, with an average and standard deviation of 11.323 and 11.418 meters, is reduced to an error in the estimated position with an average and standard deviation of 6.725 and 5.899 meters.
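
    The lane-boundary detection step (color masking followed by Hough line fitting) can be sketched with OpenCV roughly as below; the color thresholds and Hough parameters are illustrative assumptions, and the subsequent map matching, homography estimation and pose recovery are omitted.

        import cv2
        import numpy as np

        def detect_lane_lines(bgr_image):
            """Return candidate lane-boundary segments as (x1, y1, x2, y2) tuples."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            # Illustrative masks for white and yellow road marks (thresholds are assumptions).
            white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
            yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))
            mask = cv2.bitwise_or(white, yellow)
            edges = cv2.Canny(mask, 50, 150)          # mask is single-channel 8-bit
            lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                    minLineLength=60, maxLineGap=20)
            return [] if lines is None else [tuple(l[0]) for l in lines]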

  13. Mitigation of Impacts to Wetlands

    DOT National Transportation Integrated Search

    1996-06-17

    In accordance with the President's Regulatory Reinvention : Initiative, the FHWA proposes to amend its regulation outlining the : procedures to be followed in mitigating the impacts of Federal-aid : highway projects and programs to wetlands. The curr...

  14. Systematic Error Mitigation for the PIXIE Instrument

    NASA Technical Reports Server (NTRS)

    Kogut, Alan; Fixsen, Dale J.; Nagler, Peter; Tucker, Gregory

    2016-01-01

    The Primordial Inflation Explorer (PIXIE) uses a nulling Fourier Transform Spectrometer to measure the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds. PIXIE will search for the signature of primordial inflation and will characterize distortions from a blackbody spectrum, both to a precision of a few parts per billion. Rigorous control of potential instrumental effects is required to take advantage of the raw sensitivity. PIXIE employs a highly symmetric design using multiple differential nulling to reduce the instrumental signature to negligible levels. We discuss the systematic error budget and mitigation strategies for the PIXIE mission.

  15. Estimating pixel variances in the scenes of staring sensors

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM

    2012-01-24

    A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
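
    A minimal numpy sketch of the idea, under simplifying assumptions: each pixel's error estimate combines a temporal-noise floor with a term proportional to the local spatial intensity gradient, so that small residual jitter does not trigger change detections along sharp edges. This is an illustration of the concept only, not the patented algorithm.

        import numpy as np

        def change_mask(reference, current, noise_sigma=2.0, jitter_px=0.5, k=4.0):
            """Flag pixels whose difference exceeds k times the per-pixel error estimate.

            reference, current : 2-D frames from the staring sensor (float arrays)
            noise_sigma        : assumed temporal noise standard deviation
            jitter_px          : assumed residual registration (jitter) error in pixels
            """
            diff = current - reference                    # raw difference frame
            gy, gx = np.gradient(reference)               # spatial intensity gradients
            grad_mag = np.hypot(gx, gy)
            pixel_error = np.hypot(noise_sigma, jitter_px * grad_mag)
            return np.abs(diff) > k * pixel_error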

  16. CEMERLL: The Propagation of an Atmosphere-Compensated Laser Beam to the Apollo 15 Lunar Array

    NASA Technical Reports Server (NTRS)

    Fugate, R. Q.; Leatherman, P. R.; Wilson, K. E.

    1997-01-01

    Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes.

  17. Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.

    PubMed

    Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian

    2016-04-01

    While they have important implications for inmates and resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk. © The Author(s) 2016.

  18. Latent error detection: A golden two hours for detection.

    PubMed

    Saward, Justin R E; Stanton, Neville A

    2017-03-01

    Undetected error in safety critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion is observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED, for which the deliberate review of past tasks within two hours of the error occurring and whilst remaining in the same or similar sociotechnical environment to that which the error occurred appears most effective. Identified ergonomic interventions offer potential mitigation for latent errors; particularly in simple everyday habitual tasks. It is thought safety critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  19. Getting the right blood to the right patient: the contribution of near-miss event reporting and barrier analysis.

    PubMed

    Kaplan, H S

    2005-11-01

    Safety and reliability in blood transfusion are not static, but are dynamic non-events. Since performance deviations continually occur in complex systems, their detection and correction must be accomplished over and over again. Non-conformance must be detected early enough to allow for recovery or mitigation. Near-miss events afford early detection of possible system weaknesses and provide an early chance at correction. National event reporting systems, both voluntary and involuntary, have begun to include near-miss reporting in their classification schemes, raising awareness for their detection. MERS-TM is a voluntary safety reporting initiative in transfusion. Currently 22 hospitals submit reports anonymously to a central database which supports analysis of a hospital's own data and that of an aggregate database. The system encourages reporting of near-miss events, where the patient is protected from receiving an unsuitable or incorrect blood component due to a planned or unplanned recovery step. MERS-TM data suggest approximately 90% of events are near-misses, with 10% caught after issue but before transfusion. Near-miss reporting may increase total reports ten-fold. The ratio of near-misses to events with harm is 339:1, consistent with other industries' ratio of 300:1, which has been proposed as a measure of reporting in event reporting systems. Use of a risk matrix and an event's relation to protective barriers allow prioritization of these events. Near-misses recovered by planned barriers occur ten times more frequently than unplanned recoveries. A bedside check of the patient's identity with that on the blood component is an essential, final barrier. How the typical two-person check is performed is critical. Even properly done, this check is ineffective against sampling and testing errors. Blood testing at bedside just prior to transfusion minimizes the risk of such upstream events. However, even with simple and well-designed devices, training may be a critical issue. Sample errors account for more than half of reported events. The most dangerous miscollection is a blood sample passing acceptance with no previous patient results for comparison. Bar code labels or collection of a second sample may counter this upstream vulnerability. Further upstream barriers have been proposed to counter the precariousness of urgent blood sample collection in a changing, unstable situation. One, a linking device, allows safer labeling of tubes away from the bedside; the second, a forcing function, prevents omission of critical patient identification steps. Errors in the blood bank itself account for 15% of errors with a high potential severity. In one such event, a component incorrectly issued, but safely detected prior to transfusion, focused attention on multitasking's contribution to laboratory error. In sum, use of near-miss information, by enhancing barriers supporting error prevention and mitigation, increases our capacity to get the right blood to the right patient.

  20. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded on a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace-grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upset (MBU) are also discussed.
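
    Triple modular redundancy itself is simple to illustrate: three copies of the logic run in parallel and a majority voter masks an upset in any single copy. The sketch below is a behavioral Python model (not HDL, and not the paper's error-rate model) showing how a single-event upset in one redundant copy is outvoted.

        def majority_vote(a, b, c):
            """Bitwise 2-of-3 majority voter, as used at TMR domain boundaries."""
            return (a & b) | (a & c) | (b & c)

        # Behavioral illustration with three redundant copies of a 16-bit result.
        golden = 0xBEEF
        copies = [golden, golden, golden]
        copies[1] ^= 0x0040                        # single-event upset flips one bit in one copy
        assert majority_vote(*copies) == golden    # the voter masks the single upset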

  1. Power budget of direct-detection ultra-dense WDM-Nyquist-SCM PON with low-complexity SSBI mitigation

    NASA Astrophysics Data System (ADS)

    Soeiro, Ricardo O. J.; Alves, Tiago M. F.; Cartaxo, Adolfo V. T.

    2017-07-01

    The power budget (PB) of a direct-detection ultra-dense wavelength division/subcarrier multiplexing (SCM) passive optical network (PON) is assessed numerically for downstream, when a low-complexity iterative signal-to-signal beat interference (SSBI) mitigation technique is employed. Each SCM signal, inserted in a 12.5 GHz width optical channel, is comprised of two or three electrically generated and multiplexed 16-quadrature-amplitude-modulation (QAM) or 32-QAM Nyquist pulse-shaped subcarriers, each with a 7% forward error correction bit rate of 10.7 Gbit/s. The PB and maximum number of optical network units (ONUs) served by each optical line terminal (OLT) are compared with and without SSBI mitigation. When SSBI mitigation is realized, PB gains up to 4.5 dB are attained relative to the PB in the absence of SSBI mitigation. The PB gain enabled by the SSBI mitigation technique proposed in this work increases the number of ONUs served per OLT at least by a factor of 2, for the cases of higher spectral efficiency. In particular, for a SCM signal comprised of three subcarriers, the maximum number of ONUs served per OLT is between 2 and 32, and between 8 and 64, in the absence of SSBI mitigation, and when SSBI mitigation is employed, respectively, depending on the fiber length (up to 50 km) and order of QAM.
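
    The low-complexity iterative SSBI mitigation idea can be illustrated with a toy square-law detection model: an initial estimate of the signal is used to reconstruct the signal-signal beat term, which is subtracted from the detected photocurrent before the next pass. The sketch below assumes a real-valued baseband signal and a known carrier amplitude; it is a conceptual illustration, not the receiver of the paper.

        import numpy as np

        def iterative_ssbi_cancel(photocurrent, carrier, n_iter=3):
            """Toy SSBI cancellation after square-law detection, r = (C + s)**2,
            with a real-valued baseband signal s and known carrier amplitude C."""
            beat = photocurrent - carrier ** 2           # 2*C*s + s**2 (linear beat + SSBI)
            s_est = beat / (2 * carrier)                 # first pass: ignore the s**2 term
            for _ in range(n_iter):
                s_est = (beat - s_est ** 2) / (2 * carrier)   # subtract reconstructed SSBI
            return s_est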

  2. Calibration of Ocean Forcing with satellite Flux Estimates (COFFEE)

    NASA Astrophysics Data System (ADS)

    Barron, Charlie; Jan, Dastugue; Jackie, May; Rowley, Clark; Smith, Scott; Spence, Peter; Gremes-Cordero, Silvia

    2016-04-01

    Predicting the evolution of ocean temperature in regional ocean models depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. Within the COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates), real-time satellite observations are used to estimate shortwave, longwave, sensible, and latent air-sea heat flux corrections to a background estimate from the prior day's regional or global model forecast. These satellite-corrected fluxes are used to prepare a corrected ocean hindcast and to estimate flux error covariances to project the heat flux corrections for a 3-5 day forecast. In this way, satellite remote sensing is applied to not only inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. While traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle, COFFEE endeavors to appropriately partition and reduce forecast error among various surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using operational global or regional atmospheric forcing. Experiment cases combine different levels of flux calibration with assimilation alternatives. The cases use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is equally applicable to other regions. These approaches within a 3DVAR application are anticipated to be useful for global and larger regional domains where a full 4DVAR methodology may be cost-prohibitive.

  3. Effect of initial phase on error in electron energy obtained using paraxial approximation for a focused laser pulse in vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Kunwar Pal, E-mail: k-psingh@yahoo.com; Department of Physics, Shri Venkateshwara University, Gajraula, Amroha, Uttar Pradesh 244236; Arya, Rashmi

    2015-09-14

    We have investigated the effect of initial phase on the error in electron energy obtained using the paraxial approximation to study electron acceleration by a focused laser pulse in vacuum, using a three-dimensional test-particle simulation code. The error is obtained by comparing the energy of the electron for the paraxial approximation and the seventh-order correction description of the fields of a Gaussian laser. The paraxial approximation predicts the wrong laser divergence and the wrong electron escape time from the pulse, which leads to the prediction of higher energy. The error shows strong phase dependence for electrons lying along the axis of the laser for a linearly polarized laser pulse. The relative error may be significant for some specific values of initial phase even at moderate values of laser spot size. The error does not show initial phase dependence for a circularly polarized laser pulse.

  4. A Mitigation Process for Impacts of the All American Pipeline on Oak Woodlands in Santa Barbara County

    Treesearch

    Germaine Reyes-French; Timothy J. Cohen

    1991-01-01

    This paper outlines a mitigation program for pipeline construction impacts to oak tree habitat by describing the requirements for the Offsite Oak Mitigation Program for the All American Pipeline (AAPL) in Santa Barbara County, California. After describing the initial environmental analysis, the County regulatory structure is described under which the plan was required...

  5. Prediction of tropical cyclone over North Indian Ocean using WRF model: sensitivity to scatterometer winds, ATOVS and ATMS radiances

    NASA Astrophysics Data System (ADS)

    Dodla, Venkata B.; Srinivas, Desamsetti; Dasari, Hari Prasad; Gubbala, Chinna Satyanarayana

    2016-05-01

    Tropical cyclone prediction, in terms of intensification and movement, is important for disaster management and mitigation. Hitherto, research studies focused on this issue have led to improvements in numerical models, in initial data through data assimilation, in physical parameterizations, and in the application of ensemble prediction. The Weather Research and Forecasting (WRF) model is a state-of-the-art model for cyclone prediction. In the present study, predictions of tropical cyclone Phailin (2013), which formed in the North Indian Ocean (NIO), were made with and without data assimilation using the WRF model to assess the impact of data assimilation. The WRF model was configured with two nested domains of 15 and 5 km resolution. Numerical experiments were made without and with the assimilation of scatterometer winds and of radiances from ATOVS and ATMS. The model performance was assessed with respect to the movement and intensification of the cyclone. The ATOVS data assimilation experiment produced the best prediction, with track errors below 100 km up to 60 hours, and reproduced the pre-deepening and deepening periods accurately. The Control and SCAT wind assimilation experiments showed good tracks, but their errors were 150-200 km and they produced gradual deepening from the beginning instead of sudden deepening.

  6. Technique for diamond machining large ZnSe grisms for the Rapid Infrared/Imager Spectrograph (RIMAS)

    NASA Astrophysics Data System (ADS)

    Kuzmenko, Paul J.; Little, Steve L.; Kutyrev, Alexander S.; Capone, John I.

    2016-07-01

    The Rapid Infrared Imager/Spectrograph (RIMAS) is an instrument designed to observe gamma ray burst afterglows following initial detection by the SWIFT satellite. Operating in the near infrared between 0.9 and 2.4 μm, it has capabilities for both low resolution (R ≈ 25) and moderate resolution (R ≈ 4000) spectroscopy. Two zinc selenide (ZnSe) grisms provide dispersion in the moderate resolution mode: one covers the Y and J bands and the other covers the H and K bands. Each has a clear aperture of 44 mm. The YJ grism has a blaze angle of 49.9° with a 40 μm groove spacing. The HK grism is blazed at 43.1° with a 50 μm groove spacing. Previous fabrication of ZnSe grisms on the Precision Engineering Research Lathe (PERL II) at LLNL has demonstrated the importance of surface preparation, tool and fixture design, tight thermal control, and backup power sources for the machine. The biggest challenges in machining the RIMAS grisms are the large grooved area, which implies a long machining time, and the relatively steep blaze angle, which means that the grism wavefront error is much more sensitive to lathe metrology errors. Mitigating techniques are described.

  7. Current Issues in Orbital Debris

    NASA Technical Reports Server (NTRS)

    Johnson, Nicholas L.

    2011-01-01

    During the past two decades, great strides have been made in the international community regarding orbital debris mitigation. The majority of space-faring nations have reached a consensus on an initial set of orbital debris mitigation measures. Implementation of and compliance with the IADC and UN space debris mitigation guidelines should remain a high priority. Improvements of the IADC and UN space debris mitigation guidelines should continue as technical consensus permits. The remediation of the near-Earth space environment will require a significant and long-term undertaking.

  8. Reducing Wrong Patient Selection Errors: Exploring the Design Space of User Interface Techniques

    PubMed Central

    Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben

    2014-01-01

    Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients’ identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed. PMID:25954415

  9. Reducing wrong patient selection errors: exploring the design space of user interface techniques.

    PubMed

    Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben

    2014-01-01

    Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients' identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed.

  10. Spatial autocorrelation among automated geocoding errors and its effects on testing for disease clustering

    PubMed Central

    Li, Jie; Fang, Xiangming

    2010-01-01

    Automated geocoding of patient addresses is an important data assimilation component of many spatial epidemiologic studies. Inevitably, the geocoding process results in positional errors. Positional errors incurred by automated geocoding tend to reduce the power of tests for disease clustering and otherwise affect spatial analytic methods. However, there are reasons to believe that the errors may often be positively spatially correlated and that this may mitigate their deleterious effects on spatial analyses. In this article, we demonstrate explicitly that the positional errors associated with automated geocoding of a dataset of more than 6000 addresses in Carroll County, Iowa are spatially autocorrelated. Furthermore, through two simulation studies of disease processes, including one in which the disease process is overlain upon the Carroll County addresses, we show that spatial autocorrelation among geocoding errors maintains the power of two tests for disease clustering at a level higher than that which would occur if the errors were independent. Implications of these results for cluster detection, privacy protection, and measurement-error modeling of geographic health data are discussed. PMID:20087879

  11. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Eas M.

    2003-01-01

    The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.

  12. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing-altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in the identification of five categories of call-sign errors: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar-sounding call signs, (4) airmen missing call signs and consequently failing to acknowledge or read back, and (5) controller failures to confirm acknowledgements or readbacks. These error categories are described in detail, and several associated hazard-mitigating measures that might be taken are considered.

  13. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.

  14. Improvement of Klobuchar model for GNSS single-frequency ionospheric delay corrections

    NASA Astrophysics Data System (ADS)

    Wang, Ningbo; Yuan, Yunbin; Li, Zishen; Huo, Xingliang

    2016-04-01

    The broadcast ionospheric model is currently an effective approach to mitigating the ionospheric time delay for real-time Global Navigation Satellite System (GNSS) single-frequency users. The Klobuchar coefficients transmitted in the Global Positioning System (GPS) navigation message have been widely used in various GNSS positioning and navigation applications; however, this model can only reduce the ionospheric error by approximately 50% in mid-latitudes. With the emergence of BeiDou and Galileo, as well as the modernization of GPS and GLONASS, more precise ionospheric correction models or algorithms are required by GNSS single-frequency users. Numerical analysis of the initial phase and nighttime term in the Klobuchar algorithm demonstrates that more parameters should be introduced to better describe the variation of nighttime ionospheric total electron content (TEC). In view of this, several schemes are proposed for improving the Klobuchar algorithm. The performance of these improved Klobuchar-like models is validated over continental and oceanic regions during high (2002) and low (2006) levels of solar activity, respectively. Over the continental region, GPS TEC generated from 35 International GNSS Service (IGS) and Crust Movement Observation Network of China (CMONOC) stations is used as the reference. Over the oceanic region, TEC data from the TOPEX/Poseidon and JASON-1 altimeters are used for comparison. A ten-parameter Klobuchar-like model, which describes the nighttime term as a linear function of geomagnetic latitude, is finally proposed for GNSS single-frequency ionospheric corrections. Compared to GPS TEC, while the GPS broadcast model corrects for 55.0% and 49.5% of the ionospheric delay in 2002 and 2006, respectively, the proposed ten-parameter Klobuchar-like model reduces the ionospheric error by 68.4% and 64.7% over the same period. Compared to TOPEX/Poseidon and JASON-1 TEC, the improved ten-parameter Klobuchar-like model mitigates the ionospheric delay by 61.1% and 64.3% in 2002 and 2006, respectively.
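    For reference, the eight-coefficient GPS Klobuchar algorithm that the broadcast model above refers to can be sketched as below. This is the standard single-frequency correction from the GPS interface specification, not the paper's proposed ten-parameter variant; the function signature and unit conventions are illustrative assumptions.

```python
import math

def klobuchar_delay(lat, lon, elev, azim, gps_sec, alpha, beta):
    """Standard eight-coefficient GPS Klobuchar ionospheric delay (seconds).

    lat, lon, elev, azim are in radians; alpha and beta are the two sets of
    four broadcast coefficients from the GPS navigation message.
    """
    # Work in semicircles, as in the GPS interface specification.
    E = elev / math.pi
    phi_u = lat / math.pi
    lam_u = lon / math.pi

    # Earth-centred angle and ionospheric pierce point.
    psi = 0.0137 / (E + 0.11) - 0.022
    phi_i = max(min(phi_u + psi * math.cos(azim), 0.416), -0.416)
    lam_i = lam_u + psi * math.sin(azim) / math.cos(phi_i * math.pi)

    # Geomagnetic latitude and local time at the pierce point.
    phi_m = phi_i + 0.064 * math.cos((lam_i - 1.617) * math.pi)
    t = (4.32e4 * lam_i + gps_sec) % 86400.0

    # Amplitude and period of the daytime cosine term.
    amp = max(sum(a * phi_m ** n for n, a in enumerate(alpha)), 0.0)
    per = max(sum(b * phi_m ** n for n, b in enumerate(beta)), 72000.0)

    # Slant factor and final delay (5 ns night-time floor).
    F = 1.0 + 16.0 * (0.53 - E) ** 3
    x = 2.0 * math.pi * (t - 50400.0) / per
    if abs(x) < 1.57:
        return F * (5e-9 + amp * (1.0 - x * x / 2.0 + x ** 4 / 24.0))
    return F * 5e-9
```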

  15. Evaluation of three lidar scanning strategies for turbulence measurements

    NASA Astrophysics Data System (ADS)

    Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.

    2015-11-01

    Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
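    A minimal sketch of the geometry behind the six-beam idea follows: each beam's radial-velocity variance is a linear combination of the six unknown velocity variances and covariances, so six well-chosen beams allow a direct inversion. The beam layout shown (five beams at 45° elevation plus one vertical) and the function names are assumptions for illustration, not necessarily the exact configuration used in the study.

```python
import numpy as np

def six_beam_variances(beam_geometry_deg, radial_variances):
    """Recover (u'2, v'2, w'2, u'v', u'w', v'w') from six radial-velocity variances.

    beam_geometry_deg: six (elevation, azimuth) pairs in degrees.
    radial_variances:  measured variance of the radial velocity on each beam,
                       e.g. from the lidar time series at one range gate.
    """
    rows = []
    for elev, azim in np.radians(beam_geometry_deg):
        ce, se = np.cos(elev), np.sin(elev)
        ca, sa = np.cos(azim), np.sin(azim)
        # v_r = u*ce*sa + v*ce*ca + w*se  (u east, v north, w up, azimuth from north)
        rows.append([ce**2 * sa**2, ce**2 * ca**2, se**2,
                     2 * ce**2 * sa * ca, 2 * ce * se * sa, 2 * ce * se * ca])
    M = np.array(rows)
    return np.linalg.solve(M, np.asarray(radial_variances))

# Assumed example geometry: five beams at 45 deg elevation, evenly spaced in
# azimuth, plus one vertically pointing beam.
geometry = [(45, a) for a in (0, 72, 144, 216, 288)] + [(90, 0)]
```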

  16. Evaluation of three lidar scanning strategies for turbulence measurements

    NASA Astrophysics Data System (ADS)

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; Sathe, Ameya; Bonin, Timothy A.; Chilson, Phillip B.; Muschinski, Andreas

    2016-05-01

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  17. Performance analysis of decode-and-forward dual-hop optical spatial modulation with diversity combiner over atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Odeyemi, Kehinde O.; Owolawi, Pius A.; Srivastava, Viranjay M.

    2017-11-01

    Dual-hop transmission is a technique of growing interest that can be used to mitigate atmospheric turbulence along free space optical (FSO) communication links. This paper analyzes the performance of decode-and-forward (DF) dual-hop FSO systems in conjunction with spatial modulation and diversity combiners over a Gamma-Gamma atmospheric turbulence channel using heterodyne detection. A Maximum Ratio Combiner (MRC), Equal Gain Combiner (EGC), and Selection Combiner (SC) are considered at the relay and destination as mitigation tools to improve the system error performance. A power series expansion of the modified Bessel function is used to derive closed-form expressions for the end-to-end Average Pairwise Error Probability (APEP) for each of the combiners under study, and a tight upper bound on the Average Bit Error Rate (ABER) per hop is given. The overall end-to-end ABER for the dual-hop FSO system is then evaluated. The numerical results show that dual-hop transmission outperforms direct-link systems. The impact of using the same or different combiners at the relay and destination is also presented. The results further confirm that the combination of dual-hop transmission with spatial modulation and diversity combining significantly improves the system error rate, with the MRC combiner offering the best performance with respect to variation in atmospheric turbulence, changes in the links' average received SNR, and the link range of the system.
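    As a rough illustration of this kind of end-to-end evaluation, the sketch below Monte-Carlo-simulates a decode-and-forward dual-hop link over Gamma-Gamma turbulence with plain BPSK on each hop. The spatial modulation, diversity combining, and heterodyne-detection details of the paper are omitted, and the turbulence parameters are assumed values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_gamma(alpha, beta, n):
    """Unit-mean Gamma-Gamma irradiance samples (product of two Gamma variates)."""
    x = rng.gamma(alpha, 1.0 / alpha, n)
    y = rng.gamma(beta, 1.0 / beta, n)
    return x * y

def df_dual_hop_ber(snr_db, alpha, beta, n=200_000):
    """End-to-end BER of a simplified decode-and-forward dual-hop BPSK link."""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n)
    err = np.zeros(n, dtype=bool)
    for _ in range(2):                      # two independent hops
        h = gamma_gamma(alpha, beta, n)
        noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * snr)), n)
        rx = h * (2 * bits - 1) + noise
        hop_err = (rx > 0) != (bits == 1)
        # DF relay retransmits its decision, so end-to-end error occurs for an
        # odd number of hop errors; XOR of the per-hop error indicators.
        err ^= hop_err
    return err.mean()

# Moderate-turbulence parameters often used for illustration (assumed values).
print(df_dual_hop_ber(20, alpha=4.0, beta=1.9))
```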

  18. Sensitivity of mesoscale-model forecast skill to some initial-data characteristics, data density, data position, analysis procedure and measurement error

    NASA Technical Reports Server (NTRS)

    Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.

    1989-01-01

    The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.

  19. Characterization of Signal Quality Monitoring Techniques for Multipath Detection in GNSS Applications.

    PubMed

    Pirsiavash, Ali; Broumandan, Ali; Lachapelle, Gérard

    2017-07-05

    The performance of Signal Quality Monitoring (SQM) techniques under different multipath scenarios is analyzed. First, SQM variation profiles are investigated as critical requirements in evaluating the theoretical performance of SQM metrics. The sensitivity and effectiveness of SQM approaches for multipath detection and mitigation are then defined and analyzed by comparing SQM profiles and multipath error envelopes for different discriminators. The analytical discussion includes two discriminator strategies, namely narrow-correlator and high-resolution-correlator techniques, for BPSK(1) and BOC(1,1) signaling schemes. Data analysis is also carried out for static and kinematic scenarios to validate the SQM profiles and examine SQM performance in actual multipath environments. Results show that although SQM is sensitive to medium- and long-delay multipath, its effectiveness in mitigating these ranges of multipath errors varies with tracking strategy and signaling scheme. For short-delay multipath scenarios, the multipath effect on pseudorange measurements remains mostly undetected due to the low sensitivity of SQM metrics.
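    Two commonly used SQM-style test metrics, built from early/prompt/late correlator amplitudes, can be sketched as below. The exact metrics in the paper may differ, and the threshold values shown are placeholders that would have to be calibrated against the nominal (multipath-free) values for a given correlator spacing and signal.

```python
def sqm_metrics(early, prompt, late):
    """Simple signal-quality-monitoring metrics from correlator amplitudes.

    delta: early-minus-late asymmetry, sensitive to a distorted/asymmetric peak.
    ratio: measures how flat or peaked the correlation function is.
    """
    delta = (early - late) / prompt
    ratio = (early + late) / (2.0 * prompt)
    return delta, ratio

def multipath_flag(early, prompt, late, nominal_ratio, delta_thr=0.05, ratio_thr=0.05):
    """Flag a potential multipath/distortion event (thresholds are illustrative).

    nominal_ratio is the expected multipath-free ratio for the chosen
    correlator spacing (e.g. about 0.95 for a 0.1-chip narrow correlator).
    """
    delta, ratio = sqm_metrics(early, prompt, late)
    return abs(delta) > delta_thr or abs(ratio - nominal_ratio) > ratio_thr
```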

  20. Future management strategies for state maintained wetlands and stream mitigation sites.

    DOT National Transportation Integrated Search

    2008-06-01

    This study was to identify existing wetland/stream mitigation sites owned by the Kentucky Transportation Cabinet (KYTC) and assess performance of those sites including problems encountered and maintenance costs. Initial work determined that KYTC Dist...

  1. Disclosure of Medical Errors in Oman

    PubMed Central

    Norrish, Mark I. K.

    2015-01-01

    Objectives: This study aimed to provide insight into the preferences for and perceptions of medical error disclosure (MED) by members of the public in Oman. Methods: Between January and June 2012, an online survey was used to collect responses from 205 members of the public across five governorates of Oman. Results: A disclosure gap was revealed between the respondents’ preferences for MED and perceived current MED practices in Oman. This disclosure gap extended to both the type of error and the person most likely to disclose the error. Errors resulting in patient harm were found to have a strong influence on individuals’ perceived quality of care. In addition, full disclosure was found to be highly valued by respondents and able to mitigate for a perceived lack of care in cases where medical errors led to damages. Conclusion: The perceived disclosure gap between respondents’ MED preferences and perceptions of current MED practices in Oman needs to be addressed in order to increase public confidence in the national health care system. PMID:26052463

  2. An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation

    NASA Astrophysics Data System (ADS)

    Lin, Tsungpo

    Performance engineers face a major challenge in modeling and simulation of after-market power systems due to system degradation and measurement errors. Currently, most of the power generation industry uses deterministic data matching to calibrate models and cascade system degradation, which causes significant calibration uncertainty and increases the risk of providing performance guarantees. In this research, maximum-likelihood-based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing deterministic data matching with SDRMC, one can reduce the calibration uncertainty and mitigate the propagation of error into the performance simulation. A modeling and simulation environment for a complex power system with a degree of degradation has been developed. In this environment, multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by comparing the performance of the calibrated model with its expected new-and-clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first is a screening stage, in which serious gross errors are eliminated in advance; the GED techniques used here are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares-based data reconciliation and hypothesis-testing-based GED, the Levenberg-Marquardt (LM) algorithm is used as the optimizer. To reduce computation time and stabilize the problem solving for a complex power system such as a combined-cycle power plant, meta-modeling with response surface equations (RSE) and system/process decomposition are incorporated into the simultaneous SDRMC scheme. The goal of this work is to reduce the calibration uncertainties and, thus, the risk of providing performance guarantees that arises from uncertainties in performance simulation.
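    A toy-scale illustration of the weighted least-squares data reconciliation step, solved with a Levenberg-Marquardt optimizer as the abstract describes, is sketched below. The flow-splitter balance, measurement values, standard deviations, and penalty weighting are all assumptions; the full SDRMC scheme with gross error detection and model calibration is far richer than this.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy example: three redundant flow measurements around a splitter, with the
# balance constraint f1 = f2 + f3 enforced through a heavily weighted residual.
meas  = np.array([100.0, 61.0, 41.5])   # measured flows (assumed values)
sigma = np.array([2.0, 1.5, 1.5])        # measurement standard deviations

def residuals(x, penalty=1e3):
    # Weighted measurement residuals plus the (penalized) balance residual.
    r_meas = (x - meas) / sigma
    r_bal = penalty * (x[0] - x[1] - x[2])
    return np.append(r_meas, r_bal)

sol = least_squares(residuals, x0=meas, method="lm")
print("reconciled flows:", sol.x)
```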

  3. Management Strategies to Facilitate Optimal Outcomes for Patients Treated with Delayed-release Dimethyl Fumarate.

    PubMed

    Mayer, Lori; Fink, Mary Kay; Sammarco, Carrie; Laing, Lisa

    2018-04-01

    Delayed-release dimethyl fumarate is an oral disease-modifying therapy that has demonstrated significant efficacy in adults with relapsing-remitting multiple sclerosis. Incidences of flushing and gastrointestinal adverse events are common in the first month after delayed-release dimethyl fumarate initiation. Our objective was to propose mitigation strategies for adverse events related to initiation of delayed-release dimethyl fumarate in the treatment of patients with multiple sclerosis. Studies of individually developed mitigation strategies and chart reviews were evaluated. Those results, as well as mitigation protocols developed at multiple sclerosis care centers, are summarized. Key steps to optimize the effectiveness of delayed-release dimethyl fumarate treatment include education prior to and at the time of delayed-release dimethyl fumarate initiation, initiation dose protocol gradually increasing to maintenance dose, dietary suggestions for co-administration with food, gastrointestinal symptom management with over-the-counter medications, flushing symptom management with aspirin, and temporary dose reduction. Using the available evidence from clinical trials and evaluations of post-marketing studies, these strategies to manage gastrointestinal and flushing symptoms can be effective and helpful to the patient when initiating delayed-release dimethyl fumarate.

  4. Improved accuracy of ultrasound-guided therapies using electromagnetic tracking: in-vivo speed of sound measurements

    NASA Astrophysics Data System (ADS)

    Samboju, Vishal; Adams, Matthew; Salgaonkar, Vasant; Diederich, Chris J.; Cunha, J. Adam M.

    2017-02-01

    The speed of sound (SOS) for ultrasound devices used for imaging soft tissue is often calibrated to water, 1540 m/s [1], despite in-vivo soft tissue SOS varying from 1450 to 1613 m/s [2]. Images acquired with 1540 m/s and used in conjunction with stereotactic external coordinate systems can thus result in displacement errors of several millimeters. Ultrasound imaging systems are routinely used to guide interventional thermal ablation and cryoablation devices, or radiation sources for brachytherapy [3]. Brachytherapy uses small radioactive pellets, inserted interstitially with needles under ultrasound guidance, to eradicate cancerous tissue [4]. Since the radiation dose diminishes with distance from the pellet as 1/r², imaging uncertainty of a few millimeters can result in significant erroneous dose delivery [5,6]. Likewise, modeling of power deposition and thermal dose accumulation from ablative sources is also prone to errors due to placement offsets from SOS errors [7]. This work presents a method of mitigating needle placement error due to SOS variances without the need for ionizing radiation [2,8]. We demonstrate the effects of changes in dosimetry in a prostate brachytherapy environment due to patient-specific SOS variances and the ability to mitigate dose delivery uncertainty. Electromagnetic (EM) sensors embedded in the brachytherapy ultrasound system provide information regarding the 3D position and orientation of the ultrasound array. Algorithms using data from these two modalities are used to correct B-mode images to account for SOS errors. While ultrasound localization resulted in >3 mm displacements, EM resolution was verified to <1 mm precision using custom-built phantoms with various SOS, showing 1% accuracy in SOS measurement.
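    Since the echo arrival time is fixed, an axial distance reconstructed with an assumed speed of sound scales linearly with the true one. A minimal sketch of that rescaling, with assumed SOS values, is:

```python
def corrected_depth(measured_depth_mm, true_sos=1560.0, assumed_sos=1540.0):
    """Rescale an axial ultrasound distance for the patient-specific speed of sound.

    The echo arrival time is fixed, so depth scales linearly with SOS:
    d_true = d_measured * (c_true / c_assumed).
    """
    return measured_depth_mm * true_sos / assumed_sos

# A 50 mm needle depth imaged at 1540 m/s in tissue that is actually 1560 m/s
# is under-reported by about 0.65 mm.
print(corrected_depth(50.0) - 50.0)
```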

  5. Turbulence heterodyne coherent mitigation of orbital angular momentum multiplexing in a free space optical link by auxiliary light.

    PubMed

    Yang, Chunyong; Xu, Chuang; Ni, Wenjun; Gan, Yu; Hou, Jin; Chen, Shaoping

    2017-10-16

    A novel scheme is proposed to mitigate the atmospheric turbulence effect in free space optical (FSO) communication employing orbital angular momentum (OAM) multiplexing. In this scheme, a Gaussian beam is used as an auxiliary light, propagating along a common path, to obtain the distortion information caused by atmospheric turbulence. After the turbulence, heterodyne coherent detection is used to realize turbulence mitigation: with the same turbulence distortion, the OAM beams and the Gaussian beam are respectively utilized as the signal light and the local oscillator light, so the turbulence distortion is counteracted to a large extent. Meanwhile, a phase matching method is proposed to select a specific OAM mode; the discrimination between neighboring OAM modes is markedly improved by detecting the output photocurrent. Moreover, two methods of beam size adjustment are analyzed to achieve better performance for turbulence mitigation. Numerical results show that the simulated system bit error rate (BER) can reach 10⁻⁵ under strong turbulence.
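    The phase-matching idea for selecting a specific OAM mode can be illustrated by projecting the received azimuthal field onto conjugate helical phases; the sketch below (sampling scheme and field model assumed) shows only that projection step, not the paper's heterodyne-detection implementation.

```python
import numpy as np

def oam_mode_power(field_ring, l_values):
    """Relative power in each OAM mode from field samples on an azimuthal ring.

    field_ring: complex field sampled at equally spaced azimuthal angles.
    l_values:   candidate topological charges to test.
    """
    n = field_ring.size
    phi = 2 * np.pi * np.arange(n) / n
    powers = []
    for l in l_values:
        # Phase matching: overlap with the conjugate helical phase exp(-i*l*phi).
        coeff = np.sum(field_ring * np.exp(-1j * l * phi)) / n
        powers.append(np.abs(coeff) ** 2)
    powers = np.array(powers)
    return powers / powers.sum()

# Ideal l = +2 beam on the ring: the power should concentrate in the l = 2 bin.
phi = 2 * np.pi * np.arange(256) / 256
print(oam_mode_power(np.exp(1j * 2 * phi), l_values=range(-4, 5)))
```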

  6. Human factors engineering and design validation for the redesigned follitropin alfa pen injection device.

    PubMed

    Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne

    2015-05-01

    To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device-use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to the outer needle cap design to mitigate needle-stick potential. In the first validation study (49 users, 343 simulated injections), one observed critical use error in the FN group resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.

  7. Error tracking control for underactuated overhead cranes against arbitrary initial payload swing angles

    NASA Astrophysics Data System (ADS)

    Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin

    2017-02-01

    This paper exploits an error tracking control method for overhead crane systems for which the error trajectories for the trolley and the payload swing can be pre-specified. The proposed method does not require that the initial payload swing angle remains zero, whereas this requirement is usually assumed in conventional methods. The significant feature of the proposed method is its superior control performance as well as its strong robustness over different or uncertain rope lengths, payload masses, desired positions, initial payload swing angles, and external disturbances. Owing to the same attenuation behavior, the desired error trajectory for the trolley for each traveling distance is not needed to be reset, which is easy to implement in practical applications. By converting the error tracking overhead crane dynamics to the objective system, we obtain the error tracking control law for arbitrary initial payload swing angles. Lyapunov techniques and LaSalle's invariance theorem are utilized to prove the convergence and stability of the closed-loop system. Simulation and experimental results are illustrated to validate the superior performance of the proposed error tracking control method.

  8. Human error and human factors engineering in health care.

    PubMed

    Welch, D L

    1997-01-01

    Human error is inevitable. It happens in health care systems as it does in all other complex systems, and no measure of attention, training, dedication, or punishment is going to stop it. The discipline of human factors engineering (HFE) has been dealing with the causes and effects of human error since the 1940's. Originally applied to the design of increasingly complex military aircraft cockpits, HFE has since been effectively applied to the problem of human error in such diverse systems as nuclear power plants, NASA spacecraft, the process control industry, and computer software. Today the health care industry is becoming aware of the costs of human error and is turning to HFE for answers. Just as early experimental psychologists went beyond the label of "pilot error" to explain how the design of cockpits led to air crashes, today's HFE specialists are assisting the health care industry in identifying the causes of significant human errors in medicine and developing ways to eliminate or ameliorate them. This series of articles will explore the nature of human error and how HFE can be applied to reduce the likelihood of errors and mitigate their effects.

  9. Deliver a set of tools for resolving bad inductive loops and correcting bad data.

    DOT National Transportation Integrated Search

    2012-04-01

    This project prototyped and demonstrated procedures to find and mitigate loop detector errors, and to derive more valuable data from loops. Specifically, methods were developed to find and isolate out loop data which is "bad" or invalid, so that miti...

  10. Deliver a set of tools for resolving bad inductive loops and correcting bad data

    DOT National Transportation Integrated Search

    2012-04-10

    This project prototyped and demonstrated procedures to find and mitigate loop detector errors, and to derive more valuable data from loops. Specifically, methods were developed to find and isolate out loop data which is "bad" or invalid, so that miti...

  11. Design Techniques for Power-Aware Combinational Logic SER Mitigation

    NASA Astrophysics Data System (ADS)

    Mahatme, Nihaar N.

    The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. Along with improved performance, speed, and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits when ionizing radiation particles strike reverse-biased semiconductor junctions. At the terrestrial level these radiation-induced errors are caused by (1) alpha particles emitted as decay products of packaging material, (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88], and more recently muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment, radiation-induced errors are a much bigger threat and are caused mainly by cosmic heavy ions, protons, etc. The effects of radiation exposure on circuits, and measures to protect against them, have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically, when these particles strike the semiconductor junctions of transistors that are part of feedback structures such as SRAM memory cells or flip-flops, the result can be an inversion of the cell content. Such a failure is formally called a bit flip or single-event upset (SEU). When such particles strike sensitive junctions within combinational logic gates, they produce transient voltage spikes or glitches called single-event transients (SETs) that can be latched by receiving flip-flops. As circuits are clocked faster, there are more clock edges, which increases the likelihood of latching these transients. In older technology generations the probability of errors in flip-flops due to latched SETs was much lower than that of direct strikes on flip-flops or SRAMs leading to SEUs, mainly because operating frequencies were much lower; the Intel Pentium II, for example, was fabricated in 0.35 μm technology and operated between 200 and 330 MHz. With technology scaling, however, operating frequencies have increased tremendously, and soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02][Bu97]. There is therefore a need to systematically characterize combinational logic single-event effects (SEE) and to understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been an even bigger bane of scaling. While Moore's Law promises the blessing of smaller and faster transistors, it fails to highlight that power density increases exponentially with every technology generation. The power density problem was partially addressed in the 1970s and 1980s by moving from bipolar and GaAs technologies to full-scale silicon CMOS; since then, however, the technology miniaturization that enabled high-speed, multicore, and parallel computing has steadily worsened power density and power consumption.
Today, minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, pervasive sensor networks, and future eco-bio-sensors. Low power consumption is now regularly part of design philosophies for digital products with diverse applications spanning computing, communication, and healthcare. Designers are thus left grappling with both a "power wall" and a "reliability wall". Unfortunately, when it comes to improving reliability through soft error mitigation, most approaches carry overheads in area, speed, and, more importantly, power. The cost of protecting combinational logic with power-hungry mitigation approaches can therefore disrupt the power budget significantly, and there is a strong need for techniques that provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.

  12. A Nonlinear Adaptive Filter for Gyro Thermal Bias Error Cancellation

    NASA Technical Reports Server (NTRS)

    Galante, Joseph M.; Sanner, Robert M.

    2012-01-01

    Deterministic errors in angular rate gyros, such as thermal biases, can have a significant impact on spacecraft attitude knowledge. In particular, thermal biases are often the dominant error source in MEMS gyros after calibration. Filters such as EKFs are commonly used to mitigate the impact of gyro errors and gyro noise on spacecraft closed-loop pointing accuracy, but they often have difficulty in rapidly changing thermal environments and can be computationally expensive. In this report, an existing nonlinear adaptive filter is used as the basis for a new nonlinear adaptive filter designed to estimate and cancel thermal bias effects. A description of the filter is presented along with an implementation suitable for discrete-time applications. A simulation analysis demonstrates the performance of the filter in the presence of noisy measurements and provides a comparison with existing techniques.
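    As a simple point of comparison, a temperature-dependent gyro bias b(T) ≈ c0 + c1·T can be tracked with plain recursive least squares on bias observations (gyro rate minus a reference rate, for example one derived from star-tracker attitude). This is only an illustrative baseline under those assumptions, not the nonlinear adaptive filter developed in the report.

```python
import numpy as np

class ThermalBiasRLS:
    """Recursive least squares for a linear-in-temperature gyro bias model.

    Fits b(T) ~ c0 + c1*T from noisy bias observations (gyro rate minus a
    reference rate). Illustrative only; the cited work uses a nonlinear
    adaptive filter rather than plain RLS.
    """
    def __init__(self, forgetting=0.99):
        self.c = np.zeros(2)            # [c0, c1]
        self.P = np.eye(2) * 1e3        # coefficient covariance
        self.lam = forgetting

    def update(self, temperature, bias_obs):
        h = np.array([1.0, temperature])
        k = self.P @ h / (self.lam + h @ self.P @ h)
        self.c += k * (bias_obs - h @ self.c)
        self.P = (self.P - np.outer(k, h) @ self.P) / self.lam
        return self.c

    def predict(self, temperature):
        return self.c[0] + self.c[1] * temperature
```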

  13. Medical Errors Reduction Initiative

    DTIC Science & Technology

    2005-05-01

    Award Number: W81XWH-04-1-0536. Title: Medical Errors Reduction Initiative. Principal Investigator: Michael L. Mutter. Contracting Organization: The Valley Hospital, Ridgewood, NJ 07450. Report Date: May 2005. Type of Report: Annual. Prepared for: U.S. Army Medical Research and Materiel Command. Subject terms: Medical Error, Patient Safety, Personal Data Terminal, Barcodes. Abstract excerpt: "...working with great success to minimize error."

  14. Scalable and fail-safe deployment of the ATLAS Distributed Data Management system Rucio

    NASA Astrophysics Data System (ADS)

    Lassnig, M.; Vigne, R.; Beermann, T.; Barisits, M.; Garonne, V.; Serfon, C.

    2015-12-01

    This contribution details the deployment of Rucio, the ATLAS Distributed Data Management system. The main complication is that Rucio interacts with a wide variety of external services, and connects globally distributed data centres under different technological and administrative control, at an unprecedented data volume. It is therefore not possible to create a duplicate instance of Rucio for testing or integration. Every software upgrade or configuration change is thus potentially disruptive and requires fail-safe software and automatic error recovery. Rucio uses a three-layer scaling and mitigation strategy based on quasi-realtime monitoring. This strategy mainly employs independent stateless services, automatic failover, and service migration. The technologies used for deployment and mitigation include OpenStack, Puppet, Graphite, HAProxy and Apache. In this contribution, the interplay between these components, their deployment, software mitigation, and the monitoring strategy are discussed.

  15. Novel prescribed performance neural control of a flexible air-breathing hypersonic vehicle with unknown initial errors.

    PubMed

    Bu, Xiangwei; Wu, Xiaoyan; Zhu, Fujing; Huang, Jiaqi; Ma, Zhen; Zhang, Rui

    2015-11-01

    A novel prescribed performance neural controller with unknown initial errors is addressed for the longitudinal dynamic model of a flexible air-breathing hypersonic vehicle (FAHV) subject to parametric uncertainties. Different from traditional prescribed performance control (PPC) requiring that the initial errors have to be known accurately, this paper investigates the tracking control without accurate initial errors via exploiting a new performance function. A combined neural back-stepping and minimal learning parameter (MLP) technology is employed for exploring a prescribed performance controller that provides robust tracking of velocity and altitude reference trajectories. The highlight is that the transient performance of velocity and altitude tracking errors is satisfactory and the computational load of neural approximation is low. Finally, numerical simulation results from a nonlinear FAHV model demonstrate the efficacy of the proposed strategy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
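    For context, the conventional prescribed-performance formulation, which is the one that requires the initial error to be known, typically takes the form below; the symbols are the usual textbook ones and do not reproduce the paper's new performance function.

```latex
-\delta_l\,\rho(t) \;<\; e(t) \;<\; \delta_u\,\rho(t),
\qquad
\rho(t) \;=\; (\rho_0 - \rho_\infty)\,e^{-\kappa t} + \rho_\infty,
\qquad
-\delta_l\,\rho_0 \;<\; e(0) \;<\; \delta_u\,\rho_0 .
```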

  16. Modeling, Analyzing, and Mitigating Dissonance Between Alerting Systems

    NASA Technical Reports Server (NTRS)

    Song, Lixia; Kuchar, James K.

    2003-01-01

    Alerting systems are becoming pervasive in process operations, which may result in the potential for dissonance or conflict in information from different alerting systems that suggests different threat levels and/or actions to resolve hazards. Little is currently available to help in predicting or solving the dissonance problem. This thesis presents a methodology to model and analyze dissonance between alerting systems, providing both a theoretical foundation for understanding dissonance and a practical basis from which specific problems can be addressed. A state-space representation of multiple alerting system operation is generalized that can be tailored across a variety of applications. Based on the representation, two major causes of dissonance are identified: logic differences and sensor error. Additionally, several possible types of dissonance are identified. A mathematical analysis method is developed to identify the conditions for dissonance originating from logic differences. A probabilistic analysis methodology is developed to estimate the probability of dissonance originating from sensor error, and to compare the relative contribution to dissonance of sensor error against the contribution from logic differences. A hybrid model, which describes the dynamic behavior of the process with multiple alerting systems, is developed to identify dangerous dissonance space, from which the process can lead to disaster. Methodologies to avoid or mitigate dissonance are outlined. Two examples are used to demonstrate the application of the methodology. First, a conceptual In-Trail Spacing example is presented. The methodology is applied to identify the conditions for possible dissonance, to identify relative contribution of logic difference and sensor error, and to identify dangerous dissonance space. Several proposed mitigation methods are demonstrated in this example. In the second example, the methodology is applied to address the dissonance problem between two air traffic alert and avoidance systems: the existing Traffic Alert and Collision Avoidance System (TCAS) vs. the proposed Airborne Conflict Management system (ACM). Conditions on ACM resolution maneuvers are identified to avoid dynamic dissonance between TCAS and ACM. Also included in this report is an Appendix written by Lee Winder about recent and continuing work on alerting systems design. The application of Markov Decision Process (MDP) theory to complex alerting problems is discussed and illustrated with an abstract example system.

  17. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
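    One illustrative way to read the Talagrand ratio, assuming the model-error and initial-condition-error contributions to the forecast error are uncorrelated, is the decomposition below; the paper's two formal definitions based on forecast errors at two lead times differ in detail.

```latex
\langle \|e_f(t)\|^2 \rangle \;\approx\;
\langle \|e_{ic}(t)\|^2 \rangle + \langle \|e_m(t)\|^2 \rangle,
\qquad
\tau(t) \;=\; \frac{\langle \|e_m(t)\|^2 \rangle}{\langle \|e_f(t)\|^2 \rangle}.
```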

  18. Fusing Observations and Model Results for Creation of Enhanced Ozone Spatial Fields: Comparison of Three Techniques

    EPA Science Inventory

    This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...
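    A minimal scalar sketch of the Kalman-filter option mentioned here, fusing a model prediction with a nearby observation according to their assumed error variances, might look like this (the numerical values are illustrative only):

```python
def fuse(model_value, obs_value, model_var, obs_var):
    """Kalman-style scalar fusion of a model prediction and an observation.

    The gain weights the observation by the relative uncertainty of the model:
    an error-free model (model_var -> 0) ignores the observation, while a very
    uncertain model defers to it.
    """
    gain = model_var / (model_var + obs_var)
    fused_value = model_value + gain * (obs_value - model_value)
    fused_var = (1.0 - gain) * model_var
    return fused_value, fused_var

# Example: an ozone field value of 62 ppb from the model, 55 ppb observed nearby.
print(fuse(62.0, 55.0, model_var=9.0, obs_var=4.0))
```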

  19. 32 CFR 724.806 - Decisional issues.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the exercise of discretion on the issue of equity in the applicant's case. (ii) If a reason is based... should exercise its equitable powers to change the discharge on the basis of the alleged error. If it..., specific circumstances surrounding the offense, number of offenses, lack of mitigating circumstances, or...

  20. A Comprehensive Quality Assurance Program for Personnel and Procedures in Radiation Oncology: Value of Voluntary Error Reporting and Checklists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalapurakal, John A., E-mail: j-kalapurakal@northwestern.edu; Zafirovski, Aleksandar; Smith, Jeffery

    Purpose: This report describes the value of a voluntary error reporting system and the impact of a series of quality assurance (QA) measures including checklists and timeouts on reported error rates in patients receiving radiation therapy. Methods and Materials: A voluntary error reporting system was instituted with the goal of recording errors, analyzing their clinical impact, and guiding the implementation of targeted QA measures. In response to errors committed in relation to treatment of the wrong patient, wrong treatment site, and wrong dose, a novel initiative involving the use of checklists and timeouts for all staff was implemented. The impact of these and other QA initiatives was analyzed. Results: From 2001 to 2011, a total of 256 errors in 139 patients after 284,810 external radiation treatments (0.09% per treatment) were recorded in our voluntary error database. The incidence of errors related to patient/tumor site, treatment planning/data transfer, and patient setup/treatment delivery was 9%, 40.2%, and 50.8%, respectively. The compliance rate for the checklists and timeouts initiative was 97% (P<.001). These and other QA measures resulted in a significant reduction in many categories of errors. The introduction of checklists and timeouts has been successful in eliminating errors related to wrong patient, wrong site, and wrong dose. Conclusions: A comprehensive QA program that regularly monitors staff compliance together with a robust voluntary error reporting system can reduce or eliminate errors that could result in serious patient injury. We recommend the adoption of these relatively simple QA initiatives including the use of checklists and timeouts for all staff to improve the safety of patients undergoing radiation therapy in the modern era.

  1. Optimal reentry prediction of space objects from LEO using RSM and GA

    NASA Astrophysics Data System (ADS)

    Mutyalarao, M.; Raj, M. Xavier James

    2012-07-01

    The accurate estimation of the orbital lifetime (OLT) of decaying near-Earth objects is of considerable importance for the prediction of risk-object re-entry time and hazard assessment, as well as for mitigation strategies. The recent re-entries of a large number of risk objects, which pose a threat to human life and property, have raised great concern in the space science community worldwide. The evolution of objects in Low Earth Orbit (LEO) is determined by a complex interplay of perturbing forces, mainly atmospheric drag and Earth gravity. These orbits are mostly of low eccentricity (eccentricity < 0.2) and show variations in perigee and apogee altitudes due to perturbations during a revolution; these changes are mainly due to the gravitational perturbations of the Earth and atmospheric density. It has become necessary to use highly complex force models to match present operational requirements and observational techniques. Further, the re-entry time of objects in such orbits is sensitive to the initial conditions. In this paper, the problem of predicting re-entry time is treated as an optimal estimation problem. It is known that, for observations based on two-line elements (TLEs), the errors are largest in eccentricity; therefore two parameters, the initial eccentricity and the ballistic coefficient, are chosen for optimal estimation. These two parameters are computed with the response surface method (RSM) using a genetic algorithm (GA) for selected time zones, based on the roughly linear variation of the response parameter, the mean semi-major axis, during orbit evolution. Error minimization between the observed and predicted mean semi-major axis is achieved by applying an optimization algorithm such as a GA. The basic feature of the present approach is that model and measurement errors are accounted for by adjusting the ballistic coefficient and eccentricity. The methodology is tested with the recently re-entered ROSAT and PHOBOS-GRUNT satellites; the predictions agree well with the actual re-entry times, and the absolute percentage error in re-entry prediction time for both objects is found to be very small. Keywords: low eccentricity, response surface method, genetic algorithm, apogee altitude, ballistic coefficient
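    The estimation idea can be sketched as a small optimization problem: search over initial eccentricity and ballistic coefficient so that the propagated mean semi-major axis best matches the TLE-derived history. In the sketch below the propagator is only a placeholder stub, the bounds and variable names are assumptions, and a GA-style global optimizer (SciPy's differential evolution) stands in for the paper's RSM/GA scheme.

```python
import numpy as np
from scipy.optimize import differential_evolution

def propagate_sma(ecc0, bstar, epochs):
    """Placeholder orbit propagator: mean semi-major axis (km) at each epoch.

    In practice this would be a full numerical propagator with drag and
    geopotential perturbations; here it only illustrates the interface.
    """
    a0 = 6778.0                                   # assumed initial SMA (km)
    decay = bstar * (1.0 + 5.0 * ecc0)            # toy decay-rate model
    return a0 - decay * np.asarray(epochs)

def fit_reentry_parameters(epochs, observed_sma):
    """GA-style search for (eccentricity, ballistic coefficient) minimizing SMA error."""
    def cost(p):
        ecc0, bstar = p
        return np.sum((propagate_sma(ecc0, bstar, epochs) - observed_sma) ** 2)
    result = differential_evolution(cost, bounds=[(0.0, 0.2), (1e-3, 1.0)], seed=0)
    return result.x
```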

  2. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)

    2002-01-01

    One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationship between different error class and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.

  3. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  4. Wearable Oximetry for Harsh Environments

    DTIC Science & Technology

    ...characterize the types and significance of motion artifacts that will need to be mitigated. The forehead was confirmed to be an excellent site with respect to signal quality, but signal corruption from changes in contact pressure will need to be mitigated. The sternal locations are initially assessed...

  5. Human milk and breastfeeding: An intervention to mitigate toxic stress.

    PubMed

    Hallowell, Sunny G; Froh, Elizabeth B; Spatz, Diane L

    The American Academy of Nursing has identified toxic stress in childhood as a health policy concern of high priority. Adult diseases (e.g., obesity, diabetes, hypertension and cardiovascular disease) should be viewed as developmental disorders that begin early in life that could be reduced with the alleviation of toxic stress in childhood. The provision of human milk/breastfeeding is an evidence-based intervention that may hold the greatest potential to mitigate the effects of toxic stress from the moment of birth. Assisting families to make an informed choice to initiate and continue breastfeeding from birth has the potential to address both the disparity in the quality of nutrition provided infants and the economic stress experienced by families who purchase formula. The Expert Panel on Breastfeeding endorses initiatives to improve the initiation, duration, and exclusivity of breastfeeding to mitigate the effects of toxic stress in this call to action for research to build the evidence to support these critical relationships. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Evaluation and mitigation of potential errors in radiochromic film dosimetry due to film curvature at scanning.

    PubMed

    Palmer, Antony L; Bradley, David A; Nisbet, Andrew

    2015-03-08

    This work considers a previously overlooked uncertainty present in film dosimetry which results from moderate curvature of films during the scanning process. Small film samples are particularly susceptible to film curling which may be undetected or deemed insignificant. In this study, we consider test cases with controlled induced curvature of film and with film raised horizontally above the scanner plate. We also evaluate the difference in scans of a film irradiated with a typical brachytherapy dose distribution with the film naturally curved and with the film held flat on the scanner. Typical naturally occurring curvature of film at scanning, giving rise to a maximum height 1 to 2 mm above the scan plane, may introduce dose errors of 1% to 4%, and considerably reduce gamma evaluation passing rates when comparing film-measured doses with treatment planning system-calculated dose distributions, a common application of film dosimetry in radiotherapy. The use of a triple-channel dosimetry algorithm appeared to mitigate the error due to film curvature compared to conventional single-channel film dosimetry. The change in pixel value and calibrated reported dose with film curling or height above the scanner plate may be due to variations in illumination characteristics, optical disturbances, or a Callier-type effect. There is a clear requirement for physically flat films at scanning to avoid the introduction of a substantial error source in film dosimetry. Particularly for small film samples, a compression glass plate above the film is recommended to ensure flat-film scanning. This effect has been overlooked to date in the literature.

  7. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE PAGES

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; ...

    2016-05-03

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  8. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
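
    The variance contamination described above can be illustrated with a minimal simulation. The sketch below is not the authors' correction method: the beam geometry, turbulence statistics, and correction coefficient are illustrative assumptions, chosen only to show how vertical-velocity fluctuations sampled by spatially separated DBS beams inflate the retrieved u variance, and how a vertical-beam estimate of the w variance can remove much of that inflation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
phi = np.deg2rad(28.0)            # off-vertical beam angle (illustrative, WindCube-like)

# "True" turbulent fluctuations at the two (spatially separated) beam locations.
# Because the east and west beams sample different volumes, their w fluctuations
# are modeled here as independent (illustrative assumption).
u_true = 1.0 * rng.standard_normal(n)       # same u seen by both beams (simplification)
w_east = 0.8 * rng.standard_normal(n)
w_west = 0.8 * rng.standard_normal(n)

# Radial (line-of-sight) velocities for opposing east/west beams.
vr_east = u_true * np.sin(phi) + w_east * np.cos(phi)
vr_west = -u_true * np.sin(phi) + w_west * np.cos(phi)

# Standard DBS retrieval of u from opposing beams.
u_dbs = (vr_east - vr_west) / (2.0 * np.sin(phi))

print(f"true u variance     : {u_true.var():.3f}")
print(f"DBS-retrieved u var : {u_dbs.var():.3f}  (inflated by w contamination)")

# A vertical-beam correction in the spirit of the paper: subtract the w-variance
# term implied by the beam geometry (coefficient follows from the retrieval above).
w_var_vertical = 0.8 ** 2                   # what a zenith-pointing beam would measure
u_corrected = u_dbs.var() - w_var_vertical * np.cos(phi) ** 2 / (2.0 * np.sin(phi) ** 2)
print(f"corrected u var     : {u_corrected:.3f}")
```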

  9. An assessment of envelope-based demodulation in case of proximity of carrier and modulation frequencies

    NASA Astrophysics Data System (ADS)

    Shahriar, Md Rifat; Borghesani, Pietro; Randall, R. B.; Tan, Andy C. C.

    2017-11-01

    Demodulation is a necessary step in the field of diagnostics to reveal faults whose signatures appear as an amplitude and/or frequency modulation. The Hilbert transform has conventionally been used for the calculation of the analytic signal required in the demodulation process. However, the carrier and modulation frequencies must meet the conditions set by the Bedrosian identity for the Hilbert transform to be applicable for demodulation. This condition, basically requiring the carrier frequency to be sufficiently higher than the frequency of the modulation harmonics, is usually satisfied in many traditional diagnostic applications (e.g. vibration analysis of gear and bearing faults) due to the order-of-magnitude ratio between the carrier and modulation frequency. However, the diversification of diagnostic approaches and applications shows cases (e.g. electrical signature analysis-based diagnostics) where the carrier frequency is in close proximity to the modulation frequency, thus challenging the applicability of the Bedrosian theorem. This work presents an analytic study to quantify the error introduced by the Hilbert transform-based demodulation when the Bedrosian identity is not satisfied and proposes a mitigation strategy to combat the error. An experimental study is also carried out to verify the analytical results. The outcome of the error analysis sets a confidence limit on the estimated modulation (both shape and magnitude) achieved through the Hilbert transform-based demodulation when the Bedrosian theorem is violated. However, the proposed mitigation strategy is found effective in combating the demodulation error arising in this scenario, thus extending the applicability of the Hilbert transform-based demodulation.
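
    The Hilbert-transform demodulation discussed above is easy to reproduce in a few lines. The sketch below uses illustrative parameters (not the paper's test signals): it estimates an amplitude-modulation envelope from the analytic signal and shows how the envelope error grows once the carrier no longer sits well above the modulation harmonics, i.e. when the Bedrosian condition breaks down.

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000.0                                  # sample rate (Hz), illustrative
t = np.arange(0, 1.0, 1.0 / fs)
fm = 10.0                                      # fundamental modulation frequency (Hz)
# Modulation with harmonics at fm, 2*fm and 3*fm (illustrative waveform).
true_env = (1.0 + 0.5 * np.cos(2 * np.pi * fm * t)
                + 0.25 * np.cos(2 * np.pi * 2 * fm * t)
                + 0.15 * np.cos(2 * np.pi * 3 * fm * t))

for fc in (500.0, 25.0):                       # carrier far above, then close to, the harmonics
    x = true_env * np.cos(2 * np.pi * fc * t)
    env = np.abs(hilbert(x))                   # analytic-signal (Hilbert) envelope
    err = np.sqrt(np.mean((env - true_env) ** 2)) / np.mean(true_env)
    print(f"fc = {fc:6.1f} Hz  ->  relative RMS envelope error = {err:.3%}")
```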

  10. Modeling and Control of a Tailsitter with a Ducted Fan

    NASA Astrophysics Data System (ADS)

    Argyle, Matthew Elliott

    There are two traditional aircraft categories: fixed-wing aircraft, which have long endurance and high cruise airspeed, and rotorcraft, which can take off and land vertically. The tailsitter is a type of aircraft that has the strengths of both platforms, with no additional mechanical complexity, because it takes off and lands vertically on its tail and can transition the entire aircraft horizontally into high-speed flight. In this dissertation, we develop the entire control system for a tailsitter with a ducted fan. The standard method to compute the quaternion-based attitude error does not generate ideal trajectories for a hovering tailsitter in some situations. In addition, the only approach in the literature to mitigate this breaks down for large attitude errors. We develop an alternative quaternion-based error method which generates better trajectories than the standard approach and can handle large errors. We also derive a hybrid backstepping controller with almost global asymptotic stability based on this error method. Many common altitude and airspeed control schemes for a fixed-wing airplane assume that the altitude and airspeed dynamics are decoupled, which leads to errors. The Total Energy Control System (TECS) is an approach that controls the altitude and airspeed by manipulating the total energy rate and energy distribution rate of the aircraft in a manner which accounts for the dynamic coupling. In this dissertation, a nonlinear controller based on the TECS principles, which can handle inaccurate thrust and drag models, is derived. Simulation results show that the nonlinear controller has better performance than the standard PI TECS control schemes. Most constant altitude transitions are accomplished by generating an optimal trajectory, and potentially actuator inputs, based on a high fidelity model of the aircraft. While there are several approaches to mitigate the effects of modeling errors, these do not fully remove the accurate model requirement. In this dissertation, we develop two different approaches that can achieve near constant altitude transitions for some types of aircraft. The first method, based on multiple LQR controllers, requires a high fidelity model of the aircraft. However, the second method, based on the energy along the body axes, requires almost no aerodynamic information.
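
    For context, the "standard" quaternion-based attitude error that the dissertation improves upon is usually computed as the error quaternion between the desired and current attitudes. The sketch below shows only that conventional computation (with the common shorter-rotation sign convention); it is not the alternative error method developed in the dissertation.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def attitude_error(q_desired, q_current):
    """Standard error quaternion: the rotation taking the desired frame to the current one."""
    q_err = quat_mult(quat_conj(q_desired), q_current)
    if q_err[0] < 0:                 # pick the shorter of the two equivalent rotations
        q_err = -q_err
    return q_err

# Example: desired attitude is the identity, current attitude is a 90 degree yaw.
q_des = np.array([1.0, 0.0, 0.0, 0.0])
q_cur = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(attitude_error(q_des, q_cur))  # ~[0.707, 0, 0, 0.707] -> 90 degree error about z
```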

  11. Mitigation potential of soil carbon management overestimated by neglecting N2O emissions

    NASA Astrophysics Data System (ADS)

    Lugato, Emanuele; Leip, Adrian; Jones, Arwyn

    2018-03-01

    International initiatives such as the '4 per 1000' are promoting enhanced carbon (C) sequestration in agricultural soils as a way to mitigate greenhouse gas emissions [1]. However, changes in soil organic C turnover feed back into the nitrogen (N) cycle [2], meaning that variation in soil nitrous oxide (N2O) emissions may offset or enhance C sequestration actions [3]. Here we use a biogeochemistry model on approximately 8,000 soil sampling locations in the European Union [4] to quantify the net CO2 equivalent (CO2e) fluxes associated with representative C-mitigating agricultural practices. Practices based on integrated crop residue retention and lower soil disturbance are found to not increase N2O emissions as long as C accumulation continues (until around 2040), thereafter leading to a moderate C sequestration offset mostly below 47% by 2100. The introduction of N-fixing cover crops allowed higher C accumulation over the initial 20 years, but this gain was progressively offset by higher N2O emissions over time. By 2060, around half of the sites became a net source of greenhouse gases. We conclude that significant CO2 mitigation can be achieved in the initial 20-30 years of any C management scheme, but after that N inputs should be controlled through appropriate management.

  12. Predicting areas of sustainable error growth in quasigeostrophic flows using perturbation alignment properties

    NASA Astrophysics Data System (ADS)

    Rivière, G.; Hua, B. L.

    2004-10-01

    A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast and whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted as the AI method. In terms of the spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palinstrophy norm, and significantly more efficient than that for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palinstrophy singular vector forecast sometimes leads to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.

  13. Measures of model performance based on the log accuracy ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in the recent space science literature; we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.

  14. Measures of model performance based on the log accuracy ratio

    DOE PAGES

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    2018-01-03

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in the recent space science literature; we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
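
    A compact way to see how the two proposed metrics follow from the log accuracy ratio is to compute them directly. The sketch below reflects the definitions as commonly stated (median symmetric accuracy from the median absolute log accuracy ratio, symmetric signed percentage bias from the signed median); the data are invented, and the formulas should be checked against the paper before reuse.

```python
import numpy as np

def log_accuracy_metrics(y_pred, y_obs):
    """Median symmetric accuracy (MSA) and symmetric signed percentage bias (SSPB),
    both derived from the log accuracy ratio ln(prediction / observation).
    Requires strictly positive predictions and observations."""
    q = np.log(np.asarray(y_pred, float) / np.asarray(y_obs, float))
    msa = 100.0 * (np.exp(np.median(np.abs(q))) - 1.0)
    sspb = 100.0 * np.sign(np.median(q)) * (np.exp(np.abs(np.median(q))) - 1.0)
    return msa, sspb

obs = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([1.3, 1.8, 5.0, 9.0])        # illustrative model output
msa, sspb = log_accuracy_metrics(pred, obs)
print(f"MSA = {msa:.1f}%   SSPB = {sspb:+.1f}%")
```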

  15. Identifying the causes of road crashes in Europe

    PubMed Central

    Thomas, Pete; Morris, Andrew; Talbot, Rachel; Fagerlind, Helen

    2013-01-01

    This research applies a recently developed model of accident causation, developed to investigate industrial accidents, to a specially gathered sample of 997 crashes investigated in-depth in 6 countries. Based on the work of Hollnagel the model considers a collision to be a consequence of a breakdown in the interaction between road users, vehicles and the organisation of the traffic environment. 54% of road users experienced interpretation errors while 44% made observation errors and 37% planning errors. In contrast to other studies only 11% of drivers were identified as distracted and 8% inattentive. There was remarkably little variation in these errors between the main road user types. The application of the model to future in-depth crash studies offers the opportunity to identify new measures to improve safety and to mitigate the social impact of collisions. Examples given include the potential value of co-driver advisory technologies to reduce observation errors and predictive technologies to avoid conflicting interactions between road users. PMID:24406942

  16. Investigating an API for resilient exascale computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stearley, Jon R.; Tomkins, James; VanDyke, John P.

    2013-05-01

    Increased HPC capability comes with increased complexity, part counts, and fault occurrences. Increasing the resilience of systems and applications to faults is a critical requirement facing the viability of exascale systems, as the overhead of traditional checkpoint/restart is projected to outweigh its benefits due to fault rates outpacing I/O bandwidths. As faults occur and propagate throughout hardware and software layers, pervasive notification and handling mechanisms are necessary. This report describes an initial investigation of fault types and programming interfaces to mitigate them. Proof-of-concept APIs are presented for the frequent and important cases of memory errors and node failures, and a strategy is proposed for filesystem failures. These involve changes to the operating system, runtime, I/O library, and application layers. While a single API for fault handling among hardware and OS and application system-wide remains elusive, the effort increased our understanding of both the mountainous challenges and the promising trailheads.

  17. Thinking forensics: Cognitive science for forensic practitioners.

    PubMed

    Edmond, Gary; Towler, Alice; Growns, Bethany; Ribeiro, Gianni; Found, Bryan; White, David; Ballantyne, Kaye; Searston, Rachel A; Thompson, Matthew B; Tangen, Jason M; Kemp, Richard I; Martire, Kristy

    2017-03-01

    Human factors and their implications for forensic science have attracted increasing levels of interest across criminal justice communities in recent years. Initial interest centred on cognitive biases, but has since expanded such that knowledge from psychology and cognitive science is slowly infiltrating forensic practices more broadly. This article highlights a series of important findings and insights of relevance to forensic practitioners. These include research on human perception, memory, context information, expertise, decision-making, communication, experience, verification, confidence, and feedback. The aim of this article is to sensitise forensic practitioners (and lawyers and judges) to a range of potentially significant issues, and encourage them to engage with research in these domains so that they may adapt procedures to improve performance, mitigate risks and reduce errors. Doing so will reduce the divide between forensic practitioners and research scientists as well as improve the value and utility of forensic science evidence. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  18. Error analysis of finite difference schemes applied to hyperbolic initial boundary value problems

    NASA Technical Reports Server (NTRS)

    Skollermo, G.

    1979-01-01

    Finite difference methods for the numerical solution of mixed initial boundary value problems for hyperbolic equations are studied. The objective of the reported investigation is to develop a technique for the total error analysis of a finite difference scheme, taking into account the initial approximation, boundary conditions, and interior approximation. Attention is given to the Cauchy problem and the initial approximation, the homogeneous problem in an infinite strip with inhomogeneous boundary data, the reflection of errors in the boundaries, and two different boundary approximations for the leapfrog scheme with a fourth order accurate difference operator in space.
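
    To make the kind of scheme analyzed in the report concrete, the sketch below integrates the linear advection equation with a leapfrog scheme (second-order in space for brevity, rather than the fourth-order operator mentioned above) and measures the total error against the exact solution. The problem setup, grid, and starter step are illustrative assumptions, not taken from the report.

```python
import numpy as np

# Linear advection u_t + a u_x = 0 on [0, 1) with periodic boundaries,
# discretized with the leapfrog scheme (second order in space and time).
a, nx, cfl = 1.0, 200, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = np.arange(nx) * dx

u_prev = np.sin(2 * np.pi * x)                     # initial condition at t = 0
# The leapfrog scheme needs a separate starter step (here taken from the exact
# solution); in practice this initial approximation is one of the error sources
# a total-error analysis has to track.
u_curr = np.sin(2 * np.pi * (x - a * dt))

nsteps = 400
for _ in range(nsteps):
    u_next = u_prev - cfl * (np.roll(u_curr, -1) - np.roll(u_curr, 1))
    u_prev, u_curr = u_curr, u_next

t_final = (nsteps + 1) * dt
err = np.max(np.abs(u_curr - np.sin(2 * np.pi * (x - a * t_final))))
print(f"max error after {nsteps + 1} steps: {err:.2e}")
```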

  19. Mitigating Communication Delays in Remotely Connected Hardware-in-the-loop Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cale, James; Johnson, Brian; Dall'Anese, Emiliano

    This paper introduces a potential approach for mitigating the effects of communication delays between multiple closed-loop hardware-in-the-loop experiments that are virtually connected, yet physically separated. The approach consists of an analytical method for the compensation of communication delays, along with the supporting computational and communication infrastructure. The control design leverages tools for the design of observers for the compensation of measurement errors in systems with time-varying delays. The proposed methodology is validated through computer simulation and hardware experimentation connecting hardware-in-the-loop experiments conducted between laboratories separated by a distance of over 100 km.

  20. Mitigating Communication Delays in Remotely Connected Hardware-in-the-loop Experiments

    DOE PAGES

    Cale, James; Johnson, Brian; Dall'Anese, Emiliano; ...

    2018-03-30

    This paper introduces a potential approach for mitigating the effects of communication delays between multiple closed-loop hardware-in-the-loop experiments that are virtually connected, yet physically separated. The approach consists of an analytical method for the compensation of communication delays, along with the supporting computational and communication infrastructure. The control design leverages tools for the design of observers for the compensation of measurement errors in systems with time-varying delays. The proposed methodology is validated through computer simulation and hardware experimentation connecting hardware-in-the-loop experiments conducted between laboratories separated by a distance of over 100 km.
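
    The paper's observer design is not reproduced here, but the underlying idea of model-based delay compensation can be sketched simply: a delayed remote measurement is propagated forward through a local plant model over the known delay, using the stored input history. The first-order plant, fixed delay, and absence of measurement noise below are illustrative simplifications; the cited work addresses time-varying delays and measurement errors.

```python
import numpy as np

# Discrete-time first-order plant x[k+1] = a_d*x[k] + b_d*u[k]; values illustrative.
a_d, b_d = 0.95, 0.05
d = 10                                   # known communication delay in samples

def predict_forward(x_delayed, u_history):
    """Propagate a delayed measurement through the model over the last d inputs."""
    x = x_delayed
    for u in u_history[-d:]:
        x = a_d * x + b_d * u
    return x

# Simulate the remote experiment; its measurement arrives d samples late.
rng = np.random.default_rng(1)
n = 200
u_seq = np.sin(0.1 * np.arange(n)) + 0.1 * rng.standard_normal(n)
x = np.zeros(n + 1)
for k in range(n):
    x[k + 1] = a_d * x[k] + b_d * u_seq[k]

k_now = 150
x_delayed = x[k_now - d]                         # what the remote lab reports "now"
x_compensated = predict_forward(x_delayed, list(u_seq[:k_now]))
print(f"true x[{k_now}]       = {x[k_now]:+.4f}")
print(f"delayed measurement = {x_delayed:+.4f}")
print(f"delay-compensated   = {x_compensated:+.4f}")
```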

  1. Radiation-Tolerant Intelligent Memory Stack - RTIMS

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong; Herath, Jeffrey A.

    2011-01-01

    This innovation provides reconfigurable circuitry and 2 Gb of error-corrected or 1 Gb of triple-redundant digital memory in a small package. RTIMS uses circuit stacking of heterogeneous components and radiation shielding technologies. A reprogrammable field-programmable gate array (FPGA), six synchronous dynamic random access memories, a linear regulator, and the radiation mitigation circuits are stacked into a module of 42.7 × 42.7 × 13 mm. Triple module redundancy, current limiting, configuration scrubbing, and single-event function interrupt detection are employed to mitigate radiation effects. The novel self-scrubbing and single event functional interrupt (SEFI) detection allows a relatively soft FPGA to become radiation tolerant without external scrubbing and monitoring hardware.
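
    Triple module redundancy, one of the mitigations listed above, amounts to a bitwise two-of-three majority vote across redundant copies of each stored word. The sketch below is a generic software illustration of that vote, not the RTIMS hardware implementation.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote across three redundant copies of a word."""
    return (a & b) | (a & c) | (b & c)

word = 0b1011_0010
copy_a = word
copy_b = word ^ 0b0000_1000      # single-event upset flips one bit in copy B
copy_c = word

assert tmr_vote(copy_a, copy_b, copy_c) == word
print(f"voted word: {tmr_vote(copy_a, copy_b, copy_c):#010b}")
```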

  2. Improved Calibration through SMAP RFI Change Detection

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey; De Amici, Giovanni; Mohammed, Priscilla; Peng, Jinzheng

    2017-01-01

    Anthropogenic Radio-Frequency Interference (RFI) drove both the SMAP (Soil Moisture Active Passive) microwave radiometer hardware and Level 1 science algorithm designs to use new technology and techniques for the first time on a spaceflight project. Care was taken to provide special features allowing the detection and removal of harmful interference in order to meet the error budget. Nonetheless, the project accepted a risk that RFI and its mitigation would exceed the 1.3-K error budget. Thus, RFI will likely remain a challenge afterwards due to its changing and uncertain nature. To address the challenge, we seek to answer the following questions: How does RFI evolve over the SMAP lifetime? What calibration error does the changing RFI environment cause? Can time series information be exploited to reduce these errors and improve calibration for all science products reliant upon SMAP radiometer data? In this talk, we address the first question.

  3. The Importance of Semi-Major Axis Knowledge in the Determination of Near-Circular Orbits

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Schiesser, Emil R.

    1998-01-01

    Modern orbit determination has mostly been accomplished using Cartesian coordinates. This usage has carried over in recent years to the use of GPS for satellite orbit determination. The unprecedented positioning accuracy of GPS has tended to focus attention more on the system's capability to locate the spacecraft at a particular epoch than on its accuracy in determination of the orbit, per se. As is well known, the latter depends on a coordinated knowledge of position, velocity, and the correlation between their errors. Failure to determine a properly coordinated position/velocity state vector at a given epoch can lead to an epoch state that does not propagate well, and/or may not be usable for the execution of orbit adjustment maneuvers. For the quite common case of near-circular orbits, the degree to which position and velocity estimates are properly coordinated is largely captured by the error in semi-major axis (SMA) they jointly produce. Figure 1 depicts the relationships among radius error, speed error, and their correlation that exist for a typical low-altitude Earth orbit. Two familiar consequences of the relationships shown in Figure 1 are the following: (1) downrange position error grows at the per-orbit rate of 3π times the SMA error; (2) a velocity change imparted to the orbit will have an error of π divided by the orbit period times the SMA error. A less familiar consequence occurs in the problem of initializing the covariance matrix for a sequential orbit determination filter. An initial covariance consistent with orbital dynamics should be used if the covariance is to propagate well. Properly accounting for the SMA error of the initial state in the construction of the initial covariance accomplishes half of this objective, by specifying the partition of the covariance corresponding to down-track position and radial velocity errors. The remainder of the in-plane covariance partition may be specified in terms of the flight path angle error of the initial state. Figure 2 illustrates the effect of properly and improperly initializing a covariance. This figure was produced by propagating the covariance shown on the plot, without process noise, in a circular low Earth orbit whose period is 5828.5 seconds. The upper subplot, in which the proper relationships among position, velocity, and their correlation have been used, shows overall error growth, in terms of the standard deviations of the inertial position coordinates, of about half that of the lower subplot, whose initial covariance was based on other considerations.
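
    The two familiar consequences quoted above can be made concrete with a few lines of arithmetic for the orbit described (period 5828.5 s); the assumed SMA error value is illustrative.

```python
import math

period = 5828.5          # orbit period from the abstract (s)
sma_error = 10.0         # assumed semi-major axis error (m), illustrative

# (1) Downrange position error grows by ~3*pi times the SMA error per orbit.
downrange_growth_per_orbit = 3.0 * math.pi * sma_error

# (2) A velocity change executed with this SMA error picks up an error of
#     ~(pi / period) times the SMA error.
delta_v_error = math.pi / period * sma_error

print(f"downrange error growth : {downrange_growth_per_orbit:.1f} m per orbit")
print(f"maneuver delta-v error : {delta_v_error * 100:.3f} cm/s")
```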

  4. Global and local concerns: what attitudes and beliefs motivate farmers to mitigate and adapt to climate change?

    PubMed

    Haden, Van R; Niles, Meredith T; Lubell, Mark; Perlman, Joshua; Jackson, Louise E

    2012-01-01

    In response to agriculture's vulnerability and contribution to climate change, many governments are developing initiatives that promote the adoption of mitigation and adaptation practices among farmers. Since most climate policies affecting agriculture rely on voluntary efforts by individual farmers, success requires a sound understanding of the factors that motivate farmers to change practices. Recent evidence suggests that past experience with the effects of climate change and the psychological distance associated with people's concern for global and local impacts can influence environmental behavior. Here we surveyed farmers in a representative rural county in California's Central Valley to examine how their intention to adopt mitigation and adaptation practices is influenced by previous climate experiences and their global and local concerns about climate change. Perceived changes in water availability had significant effects on farmers' intention to adopt mitigation and adaptation strategies, which were mediated through global and local concerns respectively. This suggests that mitigation is largely motivated by psychologically distant concerns and beliefs about climate change, while adaptation is driven by psychologically proximate concerns for local impacts. This match between attitudes and behaviors according to the psychological distance at which they are cognitively construed indicates that policy and outreach initiatives may benefit by framing climate impacts and behavioral goals concordantly; either in a global context for mitigation or a local context for adaptation.

  5. Recognizing and managing errors of cognitive underspecification.

    PubMed

    Duthie, Elizabeth A

    2014-03-01

    James Reason describes cognitive underspecification as incomplete communication that creates a knowledge gap. Errors occur when an information mismatch occurs in bridging that gap with a resulting lack of shared mental models during the communication process. There is a paucity of studies in health care examining this cognitive error and the role it plays in patient harm. The goal of the following case analyses is to facilitate accurate recognition, identify how it contributes to patient harm, and suggest appropriate management strategies. Reason's human error theory is applied in case analyses of errors of cognitive underspecification. Sidney Dekker's theory of human incident investigation is applied to event investigation to facilitate identification of this little recognized error. Contributory factors leading to errors of cognitive underspecification include workload demands, interruptions, inexperienced practitioners, and lack of a shared mental model. Detecting errors of cognitive underspecification relies on blame-free listening and timely incident investigation. Strategies for interception include two-way interactive communication, standardization of communication processes, and technological support to ensure timely access to documented clinical information. Although errors of cognitive underspecification arise at the sharp end with the care provider, effective management is dependent upon system redesign that mitigates the latent contributory factors. Cognitive underspecification is ubiquitous whenever communication occurs. Accurate identification is essential if effective system redesign is to occur.

  6. Constrained motion estimation-based error resilient coding for HEVC

    NASA Astrophysics Data System (ADS)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors of motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.

  7. Improving patient safety through quality assurance.

    PubMed

    Raab, Stephen S

    2006-05-01

    Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. To review some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.

  8. Multiple Intravenous Infusions Phase 1b

    PubMed Central

    Cassano-Piché, A; Fan, M; Sabovitch, S; Masino, C; Easty, AC

    2012-01-01

    Background Minimal research has been conducted into the potential patient safety issues related to administering multiple intravenous (IV) infusions to a single patient. Previous research has highlighted that there are a number of related safety risks. In Phase 1a of this study, an analysis of 2 national incident-reporting databases (Institute for Safe Medical Practices Canada and United States Food and Drug Administration MAUDE) found that a high percentage of incidents associated with the administration of multiple IV infusions resulted in patient harm. Objectives The primary objectives of Phase 1b of this study were to identify safety issues with the potential to cause patient harm stemming from the administration of multiple IV infusions; and to identify how nurses are being educated on key principles required to safely administer multiple IV infusions. Data Sources and Review Methods A field study was conducted at 12 hospital clinical units (sites) across Ontario, and telephone interviews were conducted with program coordinators or instructors from both the Ontario baccalaureate nursing degree programs and the Ontario postgraduate Critical Care Nursing Certificate programs. Data were analyzed using Rasmussen’s 1997 Risk Management Framework and a Health Care Failure Modes and Effects Analysis. Results Twenty-two primary patient safety issues were identified with the potential to directly cause patient harm. Seventeen of these (critical issues) were categorized into 6 themes. A cause-consequence tree was established to outline all possible contributing factors for each critical issue. Clinical recommendations were identified for immediate distribution to, and implementation by, Ontario hospitals. Future investigation efforts were planned for Phase 2 of the study. Limitations This exploratory field study identifies the potential for errors, but does not describe the direct observation of such errors, except in a few cases where errors were observed. Not all issues are known in advance, and the frequency of errors is too low to be observed in the time allotted and with the limited sample of observations. Conclusions The administration of multiple IV infusions to a single patient is a complex task with many potential associated patient safety risks. Improvements to infusion and infusion-related technology, education standards, clinical best practice guidelines, hospital policies, and unit work practices are required to reduce the risk potential. This report makes several recommendations to Ontario hospitals so that they can develop an awareness of the issues highlighted in this report and minimize some of the risks. Further investigation of mitigating strategies is required and will be undertaken in Phase 2 of this research. Plain Language Summary Patients, particularly in critical care environments, often require multiple intravenous (IV) medications via large volumetric or syringe infusion pumps. The infusion of multiple IV medications is not without risk; unintended errors during these complex procedures have resulted in patient harm. However, the range of associated risks and the factors contributing to these risks are not well understood. Health Quality Ontario’s Ontario Health Technology Advisory Committee commissioned the Health Technology Safety Research Team at the University Health Network to conduct a multi-phase study to identify and mitigate the risks associated with multiple IV infusions. 
Some of the questions addressed by the team were as follows: What is needed to reduce the risk of errors for individuals who are receiving a lot of medications? What strategies work best? The initial report, Multiple Intravenous Infusions Phase 1a: Situation Scan Summary Report, summarizes the interim findings based on a literature review, an incident database review, and a technology scan. The Health Technology Safety Research Team worked in close collaboration with the Institute for Safe Medication Practices Canada on an exploratory study to understand the risks associated with multiple IV infusions and the degree to which nurses are educated to help mitigate them. The current report, Multiple Intravenous Infusions Phase 1b: Practice and Training Scan, presents the findings of a field study of 12 hospital clinical units across Ontario, as well as 13 interviews with educators from baccalaureate-level nursing degree programs and postgraduate Critical Care Nursing Certificate programs. It makes 9 recommendations that emphasize best practices for the administration of multiple IV infusions and pertain to secondary infusions, line identification, line set-up and removal, and administering IV bolus medications. The Health Technology Safety Research Team has also produced an associated report for hospitals entitled Mitigating the Risks Associated With Multiple IV Infusions: Recommendations Based on a Field Study of Twelve Ontario Hospitals, which highlights the 9 interim recommendations and provides a brief rationale for each one. PMID:23074426

  9. Recovery of Large Angular Scale CMB Polarization for Instruments Employing Variable-Delay Polarization Modulators

    NASA Technical Reports Server (NTRS)

    Miller, N. J.; Chuss, D. T.; Marriage, T. A.; Wollack, E. J.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Fixsen, D. J.; Harrington, K.; ...

    2016-01-01

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r less than 0.01 is achievable with commensurately improved characterizations and controls.

  10. Radiation Tolerant Intelligent Memory Stack (RTIMS)

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong; Herath, Jeffrey A.

    2006-01-01

    The Radiation Tolerant Intelligent Memory Stack (RTIMS), suitable for both geostationary and low earth orbit missions, has been developed. The memory module is fully functional and undergoing environmental and radiation characterization. A self-contained flight-like module is expected to be completed in 2006. RTIMS provides reconfigurable circuitry and 2 gigabits of error corrected or 1 gigabit of triple redundant digital memory in a small package. RTIMS utilizes circuit stacking of heterogeneous components and radiation shielding technologies. A reprogrammable field programmable gate array (FPGA), six synchronous dynamic random access memories, linear regulator, and the radiation mitigation circuitries are stacked into a module of 42.7mm x 42.7mm x 13.00mm. Triple module redundancy, current limiting, configuration scrubbing, and single event function interrupt detection are employed to mitigate radiation effects. The mitigation techniques significantly simplify system design. RTIMS is well suited for deployment in real-time data processing, reconfigurable computing, and memory intensive applications.

  11. 14 CFR 33.201 - Design and test requirements for Early ETOPS eligibility.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... maintenance errors that could result in an IFSD, loss of thrust control, or other power loss. (b) The design... power loss in the applicant's other relevant type designs approved within the past 10 years, to the... service data must show experience with and knowledge of problem mitigating design practices equivalent to...

  12. Post-Modeling Histogram Matching of Maps Produced Using Regression Trees

    Treesearch

    Andrew J. Lister; Tonya W. Lister

    2006-01-01

    Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...
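
    A minimal quantile-mapping version of post-modeling histogram matching is sketched below. It is an illustration of the general technique rather than the authors' procedure: synthetic "reference" observations and variance-truncated "predictions" are invented to show how matching restores the spread lost to averaging.

```python
import numpy as np

def histogram_match(predicted, reference):
    """Map predicted values onto the empirical distribution of reference values
    by matching quantiles (a simple form of post-modeling histogram matching)."""
    pred = np.asarray(predicted, float)
    ref_sorted = np.sort(np.asarray(reference, float))
    # Empirical quantile of each prediction within the prediction distribution...
    ranks = np.argsort(np.argsort(pred))
    quantiles = (ranks + 0.5) / pred.size
    # ...looked up in the reference distribution.
    return np.quantile(ref_sorted, quantiles)

rng = np.random.default_rng(2)
reference = rng.gamma(shape=2.0, scale=10.0, size=5000)        # e.g. plot-level observations
predicted = 0.6 * reference + rng.normal(0, 3, size=5000)      # variance-truncated model output
matched = histogram_match(predicted, reference)
print(f"std(reference) = {reference.std():.1f}")
print(f"std(predicted) = {predicted.std():.1f}   <- truncated variance")
print(f"std(matched)   = {matched.std():.1f}   <- restored by histogram matching")
```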

  13. Feedforward Equalizers for MDM-WDM in Multimode Fiber Interconnects

    NASA Astrophysics Data System (ADS)

    Masunda, Tendai; Amphawan, Angela

    2018-04-01

    In this paper, we present new tap configurations of a feedforward equalizer to mitigate mode coupling in a 60-Gbps 18-channel mode-wavelength division multiplexing system in a 2.5-km-long multimode fiber. The performance of the equalization is measured through analyses on eye diagrams, power coupling coefficients and bit-error rates.
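
    As a generic illustration of a feedforward equalizer (not the specific tap configurations of the paper), the sketch below adapts a tapped-delay-line equalizer with the LMS algorithm against an invented dispersive channel standing in for modal coupling, then reports the post-convergence symbol error rate. Channel taps, step size, tap count, and decision delay are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
symbols = rng.choice([-1.0, 1.0], size=n)                # binary symbol stream (illustrative)

# Illustrative dispersive channel standing in for modal coupling in the fiber.
channel = np.array([0.15, 1.0, 0.35, 0.1])
received = np.convolve(symbols, channel)[:n] + 0.05 * rng.standard_normal(n)

n_taps, mu, delay = 9, 0.01, 2                           # FFE length, LMS step size, decision delay
w = np.zeros(n_taps)
errors, counted = 0, 0
for k in range(n_taps, n):
    x = received[k - n_taps + 1 : k + 1][::-1]           # tapped delay line (newest sample first)
    y = w @ x                                            # equalizer output
    d = symbols[k - delay]                               # training symbol (decision-directed in practice)
    w += mu * (d - y) * x                                # LMS tap update
    if k > n // 2:                                       # count symbol errors after convergence
        errors += int(np.sign(y) != d)
        counted += 1

print(f"post-convergence symbol error rate: {errors / counted:.4f}")
```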

  14. Astronaut Biography Project for Countermeasures of Human Behavior and Performance Risks in Long Duration Space Flights

    NASA Technical Reports Server (NTRS)

    Banks, Akeem

    2012-01-01

    This final report will summarize research that relates to human behavioral health and performance of astronauts and flight controllers. Literature reviews, data archival analyses, and ground-based analog studies that center around the risk of human space flight are being used to help mitigate human behavior and performance risks from long duration space flights. A qualitative analysis of an astronaut autobiography was completed. An analysis was also conducted on exercise countermeasure publications to show the positive affects of exercise on the risks targeted in this study. The three main risks targeted in this study are risks of behavioral and psychiatric disorders, risks of performance errors due to poor team performance, cohesion, and composition, and risks of performance errors due to sleep deprivation, circadian rhythm. These three risks focus on psychological and physiological aspects of astronauts who venture out into space on long duration space missions. The purpose of this research is to target these risks in order to help quantify, identify, and mature countermeasures and technologies required in preventing or mitigating adverse outcomes from exposure to the spaceflight environment

  15. Mitigation of X-ray damage in macromolecular crystallography by submicrometre line focusing.

    PubMed

    Finfrock, Y Zou; Stern, Edward A; Alkire, R W; Kas, Joshua J; Evans-Lutterodt, Kenneth; Stein, Aaron; Duke, Norma; Lazarski, Krzysztof; Joachimiak, Andrzej

    2013-08-01

    Reported here are measurements of the penetration depth and spatial distribution of photoelectron (PE) damage excited by 18.6 keV X-ray photons in a lysozyme crystal with a vertical submicrometre line-focus beam of 0.7 µm full-width half-maximum (FWHM). The experimental results determined that the penetration depth of PEs is 5 ± 0.5 µm with a monotonically decreasing spatial distribution shape, resulting in mitigation of diffraction signal damage. This does not agree with the previous theoretical prediction that the mitigation of damage requires a peak of damage outside the focus. A new improved calculation provides some qualitative agreement with the experimental results, but significant errors still remain. The mitigation of radiation damage by line focusing was measured experimentally by comparing the damage in the X-ray-irradiated regions of the submicrometre focus with the large-beam case under conditions of equal exposure and equal volumes of the protein crystal, and a mitigation factor of 4.4 ± 0.4 was determined. The mitigation of radiation damage is caused by spatial separation of the dominant PE radiation-damage component from the crystal region of the line-focus beam that contributes the diffraction signal. The diffraction signal is generated by coherent scattering of incident X-rays (which introduces no damage), while the overwhelming proportion of damage is caused by PE emission as X-ray photons are absorbed.

  16. Notice of Violation of IEEE Publication Principles: Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath

    NASA Astrophysics Data System (ADS)

    Li, Lei; Hu, Jianhao

    2010-12-01

    Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath" by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems" by Vik Tor Goh and M.U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction" by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan 1992, pp. 8-17. In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance for the datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in the datapath can be corrected with the redundant relationship of the residues at the correction module, which is allocated at the end of the datapath. In the back-end implementation, the module isolation technique is used to improve the soft error rate performance for RRNS by physically separating the operation channels of different moduli. The case studies show at least an order of magnitude decrease in the soft error rate (SER) as compared to the non-RHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10^-12 to 10^-17 when the number of processing steps in the datapath is 10^6. The proposed scheme can even achieve lower area and latency overheads than the design without radiation hardening, since RRNS can reduce the operational complexity in the datapath.
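
    The error-correction property of a redundant residue number system can be illustrated with a toy decoder. The sketch below is not the hardware scheme described above: it encodes an integer over three information moduli plus two redundant moduli and recovers it after a single corrupted residue by majority voting over Chinese-remainder reconstructions from modulus subsets, a deliberately simple (and inefficient) illustrative decoding strategy.

```python
from itertools import combinations
from math import prod
from collections import Counter

def crt(residues, moduli):
    """Chinese remainder theorem reconstruction (moduli pairwise coprime)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # modular inverse (Python 3.8+)
    return x % M

info_moduli = [7, 11, 13]                # legitimate dynamic range: 7*11*13 = 1001
red_moduli = [17, 19]                    # redundant moduli
moduli = info_moduli + red_moduli
legit_range = prod(info_moduli)

value = 873
residues = [value % m for m in moduli]   # RRNS codeword
residues[1] ^= 0b0100                    # a single-event upset corrupts one residue

# Decode by reconstructing from every 3-modulus subset and majority-voting among
# reconstructions that fall inside the legitimate range.
votes = Counter()
for idx in combinations(range(len(moduli)), len(info_moduli)):
    cand = crt([residues[i] for i in idx], [moduli[i] for i in idx])
    if cand < legit_range:
        votes[cand] += 1

recovered, _ = votes.most_common(1)[0]
print(f"recovered {recovered} (original {value})")
```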

  17. The price of complexity in financial networks

    NASA Astrophysics Data System (ADS)

    Battiston, Stefano; Caldarelli, Guido; May, Robert M.; Roukny, Tarik; Stiglitz, Joseph E.

    2016-09-01

    Financial institutions form multilayer networks by engaging in contracts with each other and by holding exposures to common assets. As a result, the default probability of one institution depends on the default probability of all of the other institutions in the network. Here, we show how small errors on the knowledge of the network of contracts can lead to large errors in the probability of systemic defaults. From the point of view of financial regulators, our findings show that the complexity of financial networks may decrease the ability to mitigate systemic risk, and thus it may increase the social cost of financial crises.

  18. The price of complexity in financial networks.

    PubMed

    Battiston, Stefano; Caldarelli, Guido; May, Robert M; Roukny, Tarik; Stiglitz, Joseph E

    2016-09-06

    Financial institutions form multilayer networks by engaging in contracts with each other and by holding exposures to common assets. As a result, the default probability of one institution depends on the default probability of all of the other institutions in the network. Here, we show how small errors on the knowledge of the network of contracts can lead to large errors in the probability of systemic defaults. From the point of view of financial regulators, our findings show that the complexity of financial networks may decrease the ability to mitigate systemic risk, and thus it may increase the social cost of financial crises.

  19. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
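
    The growth-then-saturation behaviour described above is often summarized with a logistic error-growth model, in which small errors grow exponentially at a rate set by the largest Liapunov exponent and the growth rate decays as the error approaches saturation. The sketch below integrates such a model with illustrative parameter values; it is a schematic summary, not the study's 28-variable integration.

```python
# Logistic error-growth model: dE/dt = lam * E * (1 - E / e_sat).
# Small errors grow exponentially at the largest Liapunov exponent; growth slows
# as E approaches the saturation level that marks the predictability limit.
lam = 0.4          # largest Liapunov exponent (1/day), illustrative
e_sat = 1.0        # saturation error level (normalized)
dt = 0.01          # time step (days)

for e0 in (1e-4, 1e-2):
    e, t = e0, 0.0
    while e < 0.95 * e_sat:
        e += dt * lam * e * (1.0 - e / e_sat)
        t += dt
    print(f"initial error {e0:.0e}: ~{t:.1f} days to reach 95% of saturation")
```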

  20. Experimental investigation of observation error in anuran call surveys

    USGS Publications Warehouse

    McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.

    2010-01-01

    Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate for false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
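
    The inflation caused by false-positive detections is easy to demonstrate with a small simulation. The sketch below compares the apparent occupancy (fraction of sites with at least one detection) with and without false positives for an invented survey design; the occupancy, detection, and false-positive rates are illustrative, with the false-positive rates loosely spanning part of the range reported above. It is a naive illustration, not one of the occupancy-model estimators discussed in the cited literature.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sites, n_visits = 10_000, 3
psi, p_det = 0.4, 0.5            # true occupancy and per-visit detection probability

def apparent_occupancy(p_false):
    """Fraction of sites with at least one (true or false) detection over n_visits."""
    occupied = rng.random(n_sites) < psi
    det = np.zeros(n_sites, bool)
    for _ in range(n_visits):
        det |= (occupied & (rng.random(n_sites) < p_det)) | (rng.random(n_sites) < p_false)
    return det.mean()

for pf in (0.0, 0.005, 0.05):    # per-visit false-positive rates (illustrative)
    print(f"false-positive rate {pf:.3f}: apparent occupancy = {apparent_occupancy(pf):.3f}")
```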

  1. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  2. Impact of Standardized Communication Techniques on Errors during Simulated Neonatal Resuscitation.

    PubMed

    Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P

    2016-03-01

    Current patterns of communication in high-risk clinical situations, such as resuscitation, are imprecise and prone to error. We hypothesized that the use of standardized communication techniques would decrease the errors committed by resuscitation teams during neonatal resuscitation. In a prospective, single-blinded, matched pairs design with block randomization, 13 subjects performed as a lead resuscitator in two simulated complex neonatal resuscitations. Two nurses assisted each subject during the simulated resuscitation scenarios. In one scenario, the nurses used nonstandard communication; in the other, they used standardized communication techniques. The performance of the subjects was scored to determine errors committed (defined relative to the Neonatal Resuscitation Program algorithm), time to initiation of positive pressure ventilation (PPV), and time to initiation of chest compressions (CC). In scenarios in which subjects were exposed to standardized communication techniques, there was a trend toward decreased error rate, time to initiation of PPV, and time to initiation of CC. While not statistically significant, there was a 1.7-second improvement in time to initiation of PPV and a 7.9-second improvement in time to initiation of CC. Should these improvements in human performance be replicated in the care of real newborn infants, they could improve patient outcomes and enhance patient safety.

  3. Experienced physicians benefit from analyzing initial diagnostic hypotheses

    PubMed Central

    Bass, Adam; Geddes, Colin; Wright, Bruce; Coderre, Sylvain; Rikers, Remy; McLaughlin, Kevin

    2013-01-01

    Background Most incorrect diagnoses involve at least one cognitive error, of which premature closure is the most prevalent. While metacognitive strategies can mitigate premature closure in inexperienced learners, these are rarely studied in experienced physicians. Our objective here was to evaluate the effect of analytic information processing on diagnostic performance of nephrologists and nephrology residents. Methods We asked nine nephrologists and six nephrology residents at the University of Calgary and Glasgow University to diagnose ten nephrology cases. We provided presenting features along with contextual information, after which we asked for an initial diagnosis. We then primed participants to use either hypothetico-deductive reasoning or scheme-inductive reasoning to analyze the remaining case data and generate a final diagnosis. Results After analyzing initial hypotheses, both nephrologists and residents improved the accuracy of final diagnoses (31.1% vs. 65.6%, p < 0.001, and 40.0% vs. 70.0%, p < 0.001, respectively). We found a significant interaction between experience and analytic processing strategy (p = 0.02): nephrology residents had significantly increased odds of diagnostic success when using scheme-inductive reasoning (odds ratio [95% confidence interval] 5.69 [1.59, 20.33], p = 0.07), whereas the performance of experienced nephrologists did not differ between strategies (odds ratio 0.57 [0.23, 1.39], p = 0.20). Discussion Experienced nephrologists and nephrology residents can improve their performance by analyzing initial diagnostic hypotheses. The explanation of the interaction between experience and the effect of different reasoning strategies is unclear, but may relate to preferences in reasoning strategy, or the changes in knowledge structure with experience. PMID:26451203

  4. C-Band Airport Surface Communications System Engineering-Initial High-Level Safety Risk Assessment and Mitigation

    NASA Technical Reports Server (NTRS)

    Zelkin, Natalie; Henriksen, Stephen

    2011-01-01

    This document is being provided as part of ITT's NASA Glenn Research Center Aerospace Communication Systems Technical Support (ACSTS) contract: "New ATM Requirements--Future Communications, C-Band and L-Band Communications Standard Development." ITT has completed a safety hazard analysis providing a preliminary safety assessment for the proposed C-band (5091- to 5150-MHz) airport surface communication system. The assessment was performed following the guidelines outlined in the Federal Aviation Administration Safety Risk Management Guidance for System Acquisitions document. The safety analysis did not identify any hazards with an unacceptable risk, though a number of hazards with a medium risk were documented. This effort represents an initial high-level safety hazard analysis and notes the triggers for risk reassessment. A detailed safety hazards analysis is recommended as a follow-on activity to assess particular components of the C-band communication system after the profile is finalized and system rollout timing is determined. A security risk assessment has been performed by NASA as a parallel activity. While safety analysis is concerned with a prevention of accidental errors and failures, the security threat analysis focuses on deliberate attacks. Both processes identify the events that affect operation of the system; and from a safety perspective the security threats may present safety risks.

  5. Automation bias: a systematic review of frequency, effect mediators, and mitigators

    PubMed Central

    Roudsari, Abdul; Wyatt, Jeremy C

    2011-01-01

    Automation bias (AB)—the tendency to over-rely on automation—has been studied in various academic fields. Clinical decision support systems (CDSS) aim to benefit the clinical decision-making process. Although most research shows overall improved performance with use, there is often a failure to recognize the new errors that CDSS can introduce. With a focus on healthcare, a systematic review of the literature from a variety of research fields has been carried out, assessing the frequency and severity of AB, the effect mediators, and interventions potentially mitigating this effect. This is discussed alongside automation-induced complacency, or insufficient monitoring of automation output. A mix of subject specific and freetext terms around the themes of automation, human–automation interaction, and task performance and error were used to search article databases. Of 13 821 retrieved papers, 74 met the inclusion criteria. User factors such as cognitive style, decision support systems (DSS), and task specific experience mediated AB, as did attitudinal driving factors such as trust and confidence. Environmental mediators included workload, task complexity, and time constraint, which pressurized cognitive resources. Mitigators of AB included implementation factors such as training and emphasizing user accountability, and DSS design factors such as the position of advice on the screen, updated confidence levels attached to DSS output, and the provision of information versus recommendation. By uncovering the mechanisms by which AB operates, this review aims to help optimize the clinical decision-making process for CDSS developers and healthcare practitioners. PMID:21685142

  6. Impulsive interference in communication channels and its mitigation by SPART and other nonlinear filters

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexei V.; Epard, Marc; Lancaster, John B.; Lutes, Robert L.; Shumaker, Eric A.

    2012-12-01

    A strong digital communication transmitter in close physical proximity to a receiver of a weak signal can noticeably interfere with the latter even when the respective channels are tens or hundreds of megahertz apart. When time domain observations are made in the signal chain of the receiver between the first mixer and the baseband, this interference is likely to appear impulsive. The impulsive nature of this interference provides an opportunity to reduce its power by nonlinear filtering, improving the quality of the receiver channel. This article describes the mitigation, by a particular nonlinear filter, of the impulsive out-of-band (OOB) interference induced in High Speed Downlink Packet Access (HSDPA) by WiFi transmissions, protocols which coexist in many 3G smartphones and mobile hotspots. Our measurements show a decrease in the maximum error-free bit rate of a 1.95 GHz HSDPA receiver caused by the impulsive interference from an OOB 2.4 GHz WiFi transmission, sometimes down to a small fraction of the rate observed in the absence of the interference. We apply a nonlinear SPART filter to recover a noticeable portion of the lost rate and maintain an error-free connection under much higher levels of the WiFi interference than a receiver that does not contain such a filter. These measurements support our wider investigation of OOB interference resulting from digital modulation, which appears impulsive in a receiver, and its mitigation by nonlinear filters.
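
    As a rough illustration of the class of nonlinear filtering described above, the sketch below clamps samples that deviate strongly from a running median; it is a generic stand-in, not the SPART filter itself, and the window length and threshold factor are assumed values.

```python
# A generic impulse-suppression filter: clamp samples that deviate strongly from a
# local median baseline. This only illustrates the nonlinear-filtering idea; it is
# not the SPART filter. Window size and threshold factor are assumptions.
import numpy as np
from scipy.signal import medfilt

def suppress_impulses(x, kernel=9, k_sigma=4.0):
    baseline = medfilt(x, kernel_size=kernel)
    resid = x - baseline
    sigma = 1.4826 * np.median(np.abs(resid))         # robust (MAD) noise estimate
    out = x.copy()
    spikes = np.abs(resid) > k_sigma * sigma          # flag impulsive outliers
    out[spikes] = baseline[spikes]                    # clamp them to the baseline
    return out

t = np.linspace(0, 1, 4000)
clean = np.sin(2 * np.pi * 50 * t)
noisy = clean + 0.05 * np.random.randn(t.size)
noisy[::250] += 5.0                                   # synthetic impulsive interference
filtered = suppress_impulses(noisy)
```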

  7. Automation bias: a systematic review of frequency, effect mediators, and mitigators.

    PubMed

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2012-01-01

    Automation bias (AB)--the tendency to over-rely on automation--has been studied in various academic fields. Clinical decision support systems (CDSS) aim to benefit the clinical decision-making process. Although most research shows overall improved performance with use, there is often a failure to recognize the new errors that CDSS can introduce. With a focus on healthcare, a systematic review of the literature from a variety of research fields has been carried out, assessing the frequency and severity of AB, the effect mediators, and interventions potentially mitigating this effect. This is discussed alongside automation-induced complacency, or insufficient monitoring of automation output. A mix of subject specific and freetext terms around the themes of automation, human-automation interaction, and task performance and error were used to search article databases. Of 13 821 retrieved papers, 74 met the inclusion criteria. User factors such as cognitive style, decision support systems (DSS), and task specific experience mediated AB, as did attitudinal driving factors such as trust and confidence. Environmental mediators included workload, task complexity, and time constraint, which pressurized cognitive resources. Mitigators of AB included implementation factors such as training and emphasizing user accountability, and DSS design factors such as the position of advice on the screen, updated confidence levels attached to DSS output, and the provision of information versus recommendation. By uncovering the mechanisms by which AB operates, this review aims to help optimize the clinical decision-making process for CDSS developers and healthcare practitioners.

  8. Human Error as an Emergent Property of Action Selection and Task Place-Holding.

    PubMed

    Tamborello, Franklin P; Trafton, J Gregory

    2017-05-01

    A computational process model could explain how the dynamic interaction of human cognitive mechanisms produces each of multiple error types. With increasing capability and complexity of technological systems, the potential severity of consequences of human error is magnified. Interruption greatly increases people's error rates, as does the presence of other information to maintain in an active state. The model executed as a software-instantiated Monte Carlo simulation. It drew on theoretical constructs such as associative spreading activation for prospective memory, explicit rehearsal strategies as a deliberate cognitive operation to aid retrospective memory, and decay. The model replicated the 30% effect of interruptions on postcompletion error in Ratwani and Trafton's Stock Trader task, the 45% interaction effect on postcompletion error of working memory capacity and working memory load from Byrne and Bovair's Phaser Task, as well as the 5% perseveration and 3% omission effects of interruption from the UNRAVEL Task. Error classes including perseveration, omission, and postcompletion error fall naturally out of the theory. The model explains post-interruption error in terms of task state representation and priming for recall of subsequent steps. Its performance suggests that task environments providing more cues to current task state will mitigate error caused by interruption. For example, interfaces could provide labeled progress indicators or facilities for operators to quickly write notes about their task states when interrupted.

  9. Irradiation setup at the U-120M cyclotron facility

    NASA Astrophysics Data System (ADS)

    Křížek, F.; Ferencei, J.; Matlocha, T.; Pospíšil, J.; Príbeli, P.; Raskina, V.; Isakov, A.; Štursa, J.; Vaňát, T.; Vysoká, K.

    2018-06-01

    This paper describes parameters of the proton beams provided by the U-120M cyclotron and the related irradiation setup at the open access irradiation facility at the Nuclear Physics Institute of the Czech Academy of Sciences. The facility is suitable for testing radiation hardness of various electronic components. The use of the setup is illustrated by a measurement of the rate of errors caused by Single Event Transients in an SRAM-based Xilinx XC3S200 FPGA, which provides an estimate of how often Single Event Transients can be expected to occur. The data suggest that the variation in the Single Event Effect error rate across different clock phase shifts is not significant enough for clock phase alignment with the beam to be useful as a fault mitigation technique.

  10. High Reliability Organizations--Medication Safety.

    PubMed

    Yip, Luke; Farmer, Brenna

    2015-06-01

    High reliability organizations (HROs), such as the aviation industry, successfully engage in high-risk endeavors and have low incidence of adverse events. HROs have a preoccupation with failure and errors. They analyze each event to effect system wide change in an attempt to mitigate the occurrence of similar errors. The healthcare industry can adapt HRO practices, specifically with regard to teamwork and communication. Crew resource management concepts can be adapted to healthcare with the use of certain tools such as checklists and the sterile cockpit to reduce medication errors. HROs also use The Swiss Cheese Model to evaluate risk and look for vulnerabilities in multiple protective barriers, instead of focusing on one failure. This model can be used in medication safety to evaluate medication management in addition to using the teamwork and communication tools of HROs.

  11. Error reduction in EMG signal decomposition

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
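
    A minimal sketch of the general idea of combining several decomposition estimates into a consensus firing train is given below; it keeps a firing instance only when a majority of estimates agree within a small tolerance. This illustrates the concept only and is not the published error-reduction algorithm; the tolerance and vote threshold are assumptions.

```python
# Consensus over multiple decomposition estimates: a firing instance survives only
# if a majority of estimates report a firing within tol_ms of it. Illustrative only.
import numpy as np

def consensus_firings(estimates, tol_ms=2.0, min_votes=None):
    """estimates: list of 1-D arrays of firing times (ms) from independent decompositions."""
    if min_votes is None:
        min_votes = len(estimates) // 2 + 1           # simple majority
    all_times = np.sort(np.concatenate(estimates))
    kept = []
    for t in all_times:
        votes = sum(np.any(np.abs(est - t) <= tol_ms) for est in estimates)
        if votes >= min_votes and (not kept or t - kept[-1] > tol_ms):
            kept.append(t)
    return np.array(kept)

est_a = np.array([10.0, 55.2, 101.0, 160.3])
est_b = np.array([10.4, 55.0, 99.8, 140.0, 160.1])    # 140.0 is an unsupported (false) firing
est_c = np.array([9.9, 54.8, 100.5, 160.5])
print(consensus_firings([est_a, est_b, est_c]))       # 140.0 is rejected
```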

  12. A REVIEW OF RADON MITIGATION IN LARGE BUILDINGS IN THE US

    EPA Science Inventory

    The Environmental Protection Agency of the US carried out its initial research on radon mitigation in houses, both existing and new. A review of this work is presented in another paper at this workshop. Four years ago, this work was expanded to include the study of radon in schoo...

  13. University Leadership in Island Climate Change Mitigation

    ERIC Educational Resources Information Center

    Coffman, Makena

    2009-01-01

    Purpose: The purpose of this paper is to present the University of Hawaii at Manoa's (UHM's) initiatives in achieving greenhouse gas (GHG) emissions reductions on campus and at the state level. Design/methodology/approach: UHM has taken a "lead by example" approach to climate change mitigation in terms of working to meet the American…

  14. Examining the sources of public support for wildland fire policies

    Treesearch

    J.D. Absher; J.J. Vaske

    2007-01-01

    Recent severe wildfires have reinforced the need for successful mitigation strategies to be coordinated across all levels of government that address the needs and concerns of homeowners who live in the wildland/urban interface (WUI). Despite the growing body of social science literature on agency-initiated wildland fire policies and homeowner mitigation strategies,...

  15. Comparison of national space debris mitigation standards

    NASA Astrophysics Data System (ADS)

    Kato, A.

    2001-01-01

    Several national organizations of the space faring nations have established Space Debris Mitigation Standards or Handbooks to promote efforts to deal with the space debris issue. This paper introduces the characteristics of each document and compares the structure, items and level of requirements. The contents of these standards may be slightly different from each other but the fundamental principles are almost the same; they are (1) prevention of on-orbit breakups, (2) removal of mission terminated spacecraft from the useful orbit regions, and (3) limiting the objects released during normal operations. The Inter-Agency Space Debris Coordination Committee has contributed considerably to this trend. The Committee also found out by its recent survey that some commercial companies have begun to adopt the debris mitigation measures for their projects. However, the number of organizations that have initiated this kind of self-control is still limited, so the next challenge of the Committee is to promote the Space Debris Mitigation Guidelines world-wide. IADC initiated this project in October 1999 and a draft is being circulated among the member agencies.

  16. Effects of ocean initial perturbation on developing phase of ENSO in a coupled seasonal prediction model

    NASA Astrophysics Data System (ADS)

    Lee, Hyun-Chul; Kumar, Arun; Wang, Wanqiu

    2018-03-01

    Coupled prediction systems for seasonal and inter-annual variability in the tropical Pacific are initialized from ocean analyses. In ocean initial states, small scale perturbations are inevitably smoothed or distorted by the observational limits and data assimilation procedures, which tends to induce potential ocean initial errors for the El Nino-Southern Oscillation (ENSO) prediction. Here, the evolution and effects of ocean initial errors from the small scale perturbation on the developing phase of ENSO are investigated by an ensemble of coupled model predictions. Results show that the ocean initial errors at the thermocline in the western tropical Pacific grow rapidly to project on the first mode of equatorial Kelvin wave and propagate to the east along the thermocline. In boreal spring when the surface buoyancy flux weakens in the eastern tropical Pacific, the subsurface errors influence sea surface temperature variability and would account for the seasonal dependence of prediction skill in the NINO3 region. It is concluded that the ENSO prediction in the eastern tropical Pacific after boreal spring can be improved by increasing the observational accuracy of subsurface ocean initial states in the western tropical Pacific.

  17. Evaluation and mitigation of potential errors in radiochromic film dosimetry due to film curvature at scanning

    PubMed Central

    Bradley, David A.; Nisbet, Andrew

    2015-01-01

    This work considers a previously overlooked uncertainty present in film dosimetry which results from moderate curvature of films during the scanning process. Small film samples are particularly susceptible to film curling which may be undetected or deemed insignificant. In this study, we consider test cases with controlled induced curvature of film and with film raised horizontally above the scanner plate. We also evaluate the difference in scans of a film irradiated with a typical brachytherapy dose distribution with the film naturally curved and with the film held flat on the scanner. Typical naturally occurring curvature of film at scanning, giving rise to a maximum height 1 to 2 mm above the scan plane, may introduce dose errors of 1% to 4%, and considerably reduce gamma evaluation passing rates when comparing film‐measured doses with treatment planning system‐calculated dose distributions, a common application of film dosimetry in radiotherapy. The use of a triple‐channel dosimetry algorithm appeared to mitigate the error due to film curvature compared to conventional single‐channel film dosimetry. The change in pixel value and calibrated reported dose with film curling or height above the scanner plate may be due to variations in illumination characteristics, optical disturbances, or a Callier‐type effect. There is a clear requirement for physically flat films at scanning to avoid the introduction of a substantial error source in film dosimetry. Particularly for small film samples, a compression glass plate above the film is recommended to ensure flat‐film scanning. This effect has been overlooked to date in the literature. PACS numbers: 87.55.Qr, 87.56.bg, 87.55.km PMID:26103181

  18. Evaluation of potential severe accidents during low power and shutdown operations at Surry, Unit 1: Analysis of core damage frequency from internal events during mid-loop operations, Appendices E (Sections E.1--E.8). Volume 2, Part 3A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, T.L.; Musicki, Z.; Kohut, P.

    1994-06-01

    During 1989, the Nuclear Regulatory Commission (NRC) initiated an extensive program to carefully examine the potential risks during low power and shutdown operations. The program includes two parallel projects being performed by Brookhaven National Laboratory (BNL) and Sandia National Laboratories (SNL). Two plants, Surry (pressurized water reactor) and Grand Gulf (boiling water reactor), were selected as the plants to be studied. The objectives of the program are to assess the risks of severe accidents initiated during plant operational states other than full power operation and to compare the estimated core damage frequencies, important accident sequences and other qualitative and quantitative results with those of accidents initiated during full power operation as assessed in NUREG-1150. The objective of this report is to document the approach utilized in the Surry plant and discuss the results obtained. A parallel report for the Grand Gulf plant is prepared by SNL. This study shows that the core-damage frequency during mid-loop operation at the Surry plant is comparable to that of power operation. The authors recognize that there is very large uncertainty in the human error probabilities in this study. This study identified that only a few procedures are available for mitigating accidents that may occur during shutdown. Procedures written specifically for shutdown accidents would be useful.

  19. Transformation through Research-Based Reflection: A Self-Study of Written Feedback Practice

    ERIC Educational Resources Information Center

    Best, Karen

    2011-01-01

    This study investigates the written feedback the author gave during her first year as a university English as a second language writing instructor. The article investigates the form (questions, commands, comments) and the themes (organization, content, grammar) of feedback, the use of mitigation, and the treatment of grammar errors. It shows how…

  20. A comparison of endoscopic localization error rate between operating surgeons and referring endoscopists in colorectal cancer.

    PubMed

    Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A

    2017-03-01

    Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21 %. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) surgeon who also performed the definitive operation (operating surgeon group); and (2) referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference in at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon compared to referring endoscopist demonstrated statistically significant lower intraoperative localization error rate (1.2 vs. 9.0 %, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0 %, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8 %, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95 % CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95 % CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate. Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.
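
    The multivariate logistic regression described above can be set up as in the following sketch; the synthetic data frame, variable names, and coefficient values are illustrative assumptions, not the study's dataset.

```python
# Logistic regression of localization error on the initial endoscopist and covariates,
# using statsmodels. Data are synthetic placeholders with assumed effect sizes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 557
df = pd.DataFrame({
    "operating_surgeon": rng.integers(0, 2, n),       # 1 = initial scope by operating surgeon
    "tattoo": rng.integers(0, 2, n),
    "repeat_endoscopy": rng.integers(0, 2, n),
})
logit_p = -2.3 - 1.9 * df["operating_surgeon"] + 0.3 * df["repeat_endoscopy"]
df["localization_error"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("localization_error ~ operating_surgeon + tattoo + repeat_endoscopy",
                data=df).fit(disp=False)
print(np.exp(fit.params))                             # odds ratios
```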

  1. Identifying sensitive areas of adaptive observations for prediction of the Kuroshio large meander using a shallow-water model

    NASA Astrophysics Data System (ADS)

    Zou, Guang'an; Wang, Qiang; Mu, Mu

    2016-09-01

    Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.

  2. Software resilience and the effectiveness of software mitigation in microcontrollers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather; Baker, Zachary; Fairbanks, Tom

    Commercially available microprocessors could be useful to the space community for noncritical computations. There are many possible components that are smaller, lower-power, and less expensive than traditional radiation-hardened microprocessors. Many commercial microprocessors have issues with single-event effects (SEEs), such as single-event upsets (SEUs) and single-event transients (SETs), that can cause the microprocessor to calculate an incorrect result or crash. In this paper we present the Trikaya technique for masking SEUs and SETs through software mitigation techniques. Furthermore, test results show that this technique can be very effective at masking errors, making it possible to fly these microprocessors for a variety of missions.

  3. Software resilience and the effectiveness of software mitigation in microcontrollers

    DOE PAGES

    Quinn, Heather; Baker, Zachary; Fairbanks, Tom; ...

    2015-12-01

    Commercially available microprocessors could be useful to the space community for noncritical computations. There are many possible components that are smaller, lower-power, and less expensive than traditional radiation-hardened microprocessors. Many commercial microprocessors have issues with single-event effects (SEEs), such as single-event upsets (SEUs) and single-event transients (SETs), that can cause the microprocessor to calculate an incorrect result or crash. In this paper we present the Trikaya technique for masking SEUs and SETs through software mitigation techniques. Furthermore, test results show that this technique can be very effective at masking errors, making it possible to fly these microprocessors for a variety of missions.
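
    A common software-only mitigation of this kind is triple modular redundancy (TMR) with majority voting; the sketch below shows that general idea, but it is not the Trikaya technique itself, whose details are described in the paper.

```python
# Software TMR: run the computation three times and mask a single corrupted result
# with a bitwise majority vote. Illustrative of the mitigation class only.
def vote(a, b, c):
    """Bitwise majority vote over three redundant copies of a word."""
    return (a & b) | (a & c) | (b & c)

def tmr(fn, *args):
    """Execute fn three times; a single upset in one copy is masked by the vote."""
    r1, r2, r3 = fn(*args), fn(*args), fn(*args)
    return vote(r1, r2, r3)

def checksum(data):
    return sum(data) & 0xFFFFFFFF

payload = list(range(1000))
print(hex(tmr(checksum, payload)))
```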

  4. Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.

    PubMed

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment.

  5. Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation

    PubMed Central

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment. PMID:22247672
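
    The quasi-static-field idea can be illustrated with the sketch below: windows in which the measured field magnitude is locally stable are detected, and within them the magnetic heading rate serves as a reference against which a z-axis gyroscope bias can be estimated. This is only a conceptual illustration, not the paper's EKF formulation; the window length, variance threshold, and synthetic data are assumptions.

```python
# Conceptual QSF sketch: find quasi-static magnetic windows, then average
# (gyro rate - magnetic heading rate) over them as a crude z-gyro bias estimate.
import numpy as np

def qsf_windows(mag, win=50, var_thresh=1e-4):
    """Start indices of windows where the field magnitude is quasi-static."""
    norm = np.linalg.norm(mag, axis=1)
    return [i for i in range(0, len(norm) - win, win)
            if np.var(norm[i:i + win]) < var_thresh]

def gyro_z_bias(gyro_z, mag, dt, win=50):
    """gyro_z: rate samples [rad/s]; mag: (N, 3) field samples; dt: sample period [s]."""
    heading = np.arctan2(mag[:, 1], mag[:, 0])
    heading_rate = np.gradient(np.unwrap(heading), dt)
    biases = [np.mean(gyro_z[i:i + win] - heading_rate[i:i + win])
              for i in qsf_windows(mag, win)]
    return float(np.mean(biases)) if biases else 0.0

# Synthetic check: constant field, stationary sensor, gyro bias of 0.01 rad/s.
N, dt = 1000, 0.01
mag = np.tile([20.0, 5.0, -40.0], (N, 1)) + 1e-3 * np.random.randn(N, 3)
gyro_z = 0.01 + 1e-3 * np.random.randn(N)
print(gyro_z_bias(gyro_z, mag, dt))                   # approximately 0.01
```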

  6. Maneuver Planning for Conjunction Risk Mitigation with Ground-track Control Requirements

    NASA Technical Reports Server (NTRS)

    McKinley, David

    2008-01-01

    The planning of conjunction Risk Mitigation Maneuvers (RMM) in the presence of ground-track control requirements is analyzed. Past RMM planning efforts on the Aqua, Aura, and Terra spacecraft have demonstrated that only small maneuvers are available when ground-track control requirements are maintained. Assuming small maneuvers, analytical expressions for the effect of a given maneuver on conjunction geometry are derived. The analytical expressions are used to generate a large trade space for initial RMM design. This trade space represents a significant improvement in initial maneuver planning over existing methods that employ high fidelity maneuver models and propagation.

  7. Thermal error analysis and compensation for digital image/volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing

    2018-02-01

    Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
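
    A minimal sketch of the reference-sample compensation approach is shown below: the apparent displacement measured on a nominally load-free reference sample is taken as the thermally induced drift and subtracted from the specimen measurement. The array shapes, names, and the simple rigid-drift model are illustrative assumptions rather than the paper's full procedure.

```python
# Reference-sample compensation sketch: subtract the apparent (thermal) displacement
# measured on a stationary reference sample from the specimen's measured field.
import numpy as np

def compensate_thermal(u_specimen, u_reference):
    """
    u_specimen : (N, 2) DIC displacement vectors measured on the test sample [px]
    u_reference: (N, 2) displacement vectors measured on the load-free reference sample
    Only the mean (rigid) drift of the reference field is removed here; a fuller model
    could also remove the apparent uniform expansion.
    """
    drift = u_reference.mean(axis=0)
    return u_specimen - drift

u_spec = np.random.randn(1000, 2) * 0.01 + np.array([0.12, -0.05])   # measured field
u_ref = np.tile([0.10, -0.04], (1000, 1))                            # pure thermal drift
print(compensate_thermal(u_spec, u_ref).mean(axis=0))                # ~ [0.02, -0.01]
```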

  8. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation of an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates for the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
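
    A toy version of such a breeding cycle is sketched below, using the Lorenz-63 system as a stand-in for the mesoscale model: a perturbation is integrated over a short window, rescaled to its initial amplitude, added back to the control state, and the cycle is repeated so that the perturbation aligns with a fast-growing error direction. The norm, rescaling amplitude, and cycle length are assumptions, and the ensemble-transform orthogonalization step described above is omitted.

```python
# Toy breeding cycle on Lorenz-63: integrate, rescale, re-add, repeat.
import numpy as np

def lorenz_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def breed(x0, n_cycles=50, window=20, amp=1e-3, rng=np.random.default_rng(0)):
    pert = rng.standard_normal(3)
    pert *= amp / np.linalg.norm(pert)                # initial perturbation amplitude
    control = x0.copy()
    for _ in range(n_cycles):
        c, p = control.copy(), control + pert
        for _ in range(window):                       # short forward integration
            c, p = lorenz_step(c), lorenz_step(p)
        grown = p - c
        pert = amp * grown / np.linalg.norm(grown)    # rescale, keep the grown direction
        control = c                                   # continue from the evolved state
    return pert / np.linalg.norm(pert)                # estimated fast-growing mode

print(breed(np.array([1.0, 1.0, 1.0])))
```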

  9. Mitigation options for the industrial sector in Egypt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelil, I.A.; El-Touny, S.; Korkor, H.

    1996-12-31

    Though its contribution to global greenhouse gas emissions is relatively small, Egypt has signed and ratified the United Nations Framework Convention on Climate Change (UN FCCC) and has been playing an active role in the international efforts to deal with such environmental challenges. Energy efficiency has been one of the main strategies that Egypt has adopted to improve environmental quality and enhance economic competitiveness. This paper highlights three initiatives currently underway to improve energy efficiency of the Egyptian industry. The first is a project that has been recently completed by OECP to assess potential GHG mitigation options available in Egypt's oil refineries. The second initiative is an assessment of GHG mitigation potential in the Small and Medium size Enterprises (SME) in the Mediterranean city of Alexandria. The third one focuses on identifying demand side management options in some industrial electricity consumers in the same city.

  10. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…

  11. Investigation into solar drying of potato: effect of sample geometry on drying kinetics and CO2 emissions mitigation.

    PubMed

    Tripathy, P P

    2015-03-01

    Drying experiments have been performed with potato cylinders and slices using a laboratory-scale natural convection mixed-mode solar dryer. The drying data were fitted to eight different mathematical models to predict the drying kinetics, and the validity of these models was evaluated statistically through coefficient of determination (R²), root mean square error (RMSE) and reduced chi-square (χ²). The present investigation showed that amongst all the mathematical models studied, the Modified Page model was in good agreement with the experimental drying data for both potato cylinders and slices. A mathematical framework has been proposed to estimate the performance of the food dryer in terms of net CO2 emissions mitigation potential along with unit cost of CO2 mitigation arising because of replacement of different fossil fuels by renewable solar energy. For each fossil fuel replaced, the gross annual amount of CO2 as well as net amount of annual CO2 emissions mitigation potential considering CO2 emissions embodied in the manufacture of mixed-mode solar dryer has been estimated. The CO2 mitigation potential and amount of fossil fuels saved while drying potato samples were found to be the maximum for coal followed by light diesel oil and natural gas. It was inferred from the present study that by the year 2020, 23 % of CO2 emissions can be mitigated by the use of mixed-mode solar dryer for drying of agricultural products.
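
    The model-fitting and goodness-of-fit workflow described above can be sketched as follows, using one common form of the Modified Page model, MR(t) = exp(-(k t)^n); the moisture-ratio data below are synthetic placeholders, not the study's measurements.

```python
# Fit one common form of the Modified Page drying model and score it with R^2, RMSE,
# and reduced chi-square. Data are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def modified_page(t, k, n):
    return np.exp(-(k * t) ** n)

t = np.array([0, 30, 60, 90, 120, 180, 240, 300], dtype=float)   # drying time [min]
mr = np.array([1.00, 0.82, 0.64, 0.50, 0.39, 0.24, 0.15, 0.09])  # moisture ratio

(k, n), _ = curve_fit(modified_page, t, mr, p0=[0.01, 1.0])
pred = modified_page(t, k, n)

resid = mr - pred
rmse = np.sqrt(np.mean(resid ** 2))
r2 = 1 - np.sum(resid ** 2) / np.sum((mr - mr.mean()) ** 2)
chi2_red = np.sum(resid ** 2) / (len(mr) - 2)         # 2 fitted parameters
print(f"k={k:.4f}, n={n:.3f}, R2={r2:.4f}, RMSE={rmse:.4f}, chi2={chi2_red:.2e}")
```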

  12. Hungry Horse Dam Fisheries Mitigation; Kokanee Stocking and Monitoring in Flathead Lake, 1995 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredenberg, Wade; Carty, Daniel; Cavigli, Jon

    1996-06-01

    The operation of Hungry Horse Dam on the South Fork of the Flathead River reduced the reproductive success of kokanee (Oncorhynchus nerka) spawning in the Flathead River. Montana Fish, Wildlife and Parks (MFWP) and the Confederated Salish and Kootenai Tribes (CSKT) authored a mitigation plan to offset those losses. The mitigation goal, stated in the Fisheries Mitigation Plan for Losses Attributed to the Construction and Operation of Hungry Horse Dam, is to: "Replace lost annual production of 100,000 kokanee adults, initially through hatchery production and pen rearing in Flathead Lake, partially replacing lost forage for lake trout (Salvelinus namaycush) in Flathead Lake."

  13. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response.

    PubMed

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

    2016-04-29

    In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system's response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor's optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantitation SNRQ, etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section.

  14. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response

    PubMed Central

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

    2016-01-01

    In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system’s response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor’s optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantitation SNRQ, etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section. PMID:27136562
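
    For a one-dimensional lateral-effect PSD, the raw position estimate is commonly formed from the ratio of the two electrode photocurrents, and a calibration curve can then map the error-contaminated raw readings onto reference positions. The sketch below shows only that structure; the detector length, calibration data, and polynomial order are illustrative assumptions rather than the paper's calibration method.

```python
# Raw 1-D PSD position from the current ratio, plus a simple polynomial calibration
# against reference positions. All numerical values are illustrative.
import numpy as np

def psd_position(i1, i2, length_mm=10.0):
    """Raw 1-D position estimate from the two electrode photocurrents."""
    return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

raw_readings = np.array([-4.1, -2.2, -0.05, 1.9, 3.8])    # mm, from psd_position()
true_positions = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])    # mm, from a reference stage
cal = np.polynomial.Polynomial.fit(raw_readings, true_positions, deg=3)

print(cal(psd_position(i1=0.9e-6, i2=1.1e-6)))            # corrected position [mm]
```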

  15. Alcohol consumption, beverage prices and measurement error.

    PubMed

    Young, Douglas J; Bielinska-Kwapisz, Agnieszka

    2003-03-01

    Alcohol price data collected by the American Chamber of Commerce Researchers Association (ACCRA) have been widely used in studies of alcohol consumption and related behaviors. A number of problems with these data suggest that they contain substantial measurement error, which biases conventional statistical estimators toward a finding of little or no effect of prices on behavior. We test for measurement error, assess the magnitude of the bias and provide an alternative estimator that is likely to be superior. The study utilizes data on per capita alcohol consumption across U.S. states and the years 1982-1997. State and federal alcohol taxes are used as instrumental variables for prices. Formal tests strongly confirm the hypothesis of measurement error. Instrumental variable estimates of the price elasticity of demand range from -0.53 to -1.24. These estimates are substantially larger in absolute value than ordinary least squares estimates, which sometimes are not significantly different from zero or even positive. The ACCRA price data are substantially contaminated with measurement error, but using state and federal taxes as instrumental variables mitigates the problem.
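
    The instrumental-variables logic can be sketched as a manual two-stage least squares: taxes predict the error-contaminated log prices in the first stage, and the fitted prices enter the demand equation in the second stage, undoing the attenuation that measurement error induces in ordinary least squares. The synthetic data, coefficients, and omission of state/year controls are illustrative assumptions.

```python
# Manual 2SLS with a tax instrument, shown on synthetic data to illustrate attenuation
# from price measurement error and its correction.
import numpy as np

rng = np.random.default_rng(2)
n = 800
tax = rng.uniform(0.5, 3.0, n)                            # instrument: excise tax
true_price = 3.0 + 1.5 * tax + rng.normal(0, 0.2, n)      # price driven by the tax
price_obs = true_price + rng.normal(0, 0.8, n)            # price recorded with error
log_consumption = 3.0 - 0.8 * np.log(true_price) + rng.normal(0, 0.1, n)

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X1 = np.column_stack([np.ones(n), tax])
price_hat = X1 @ ols(np.log(price_obs), X1)               # stage 1: project log price on tax
beta_iv = ols(log_consumption, np.column_stack([np.ones(n), price_hat]))
beta_ols = ols(log_consumption, np.column_stack([np.ones(n), np.log(price_obs)]))
print(f"OLS elasticity (attenuated): {beta_ols[1]:.2f}")
print(f"2SLS elasticity            : {beta_iv[1]:.2f}   (true value: -0.80)")
```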

  16. Image processing methods to compensate for IFOV errors in microgrid imaging polarimeters

    NASA Astrophysics Data System (ADS)

    Ratliff, Bradley M.; Boger, James K.; Fetrow, Matthew P.; Tyo, J. Scott; Black, Wiley T.

    2006-05-01

    Long-wave infrared imaging Stokes vector polarimeters are used in many remote sensing applications. Imaging polarimeters require that several measurements be made under optically different conditions in order to estimate the polarization signature at a given scene point. This multiple-measurement requirement introduces error in the signature estimates, and the errors differ depending upon the type of measurement scheme used. Here, we investigate a LWIR linear microgrid polarimeter. This type of instrument consists of a mosaic of micropolarizers at different orientations that are masked directly onto a focal plane array sensor. In this scheme, each polarization measurement is acquired spatially and hence each is made at a different point in the scene. This is a significant source of error, as it violates the requirement that each polarization measurement have the same instantaneous field-of-view (IFOV). In this paper, we first study the amount of error introduced by the IFOV handicap in microgrid instruments. We then proceed to investigate means for mitigating the effects of these errors to improve the quality of polarimetric imagery. In particular, we examine different interpolation schemes and gauge their performance. These studies are completed through the use of both real instrumental and modeled data.
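
    The IFOV problem and one simple interpolation-based mitigation can be sketched as below for a 2x2 microgrid of 0/45/90/135 degree polarizers: each orientation is sampled on its own sub-lattice, so each channel is interpolated onto the full grid before the linear Stokes parameters are formed. The mosaic layout and the choice of interpolator are assumptions; the paper compares several schemes.

```python
# Interpolate each microgrid polarization channel to the full grid, then form Stokes
# parameters. Mosaic layout (0/45/90/135 in a 2x2 superpixel) is assumed.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def demosaic_channel(raw, row0, col0):
    """Extract one polarizer orientation and interpolate it onto the full grid."""
    sub = raw[row0::2, col0::2]
    rows = np.arange(row0, raw.shape[0], 2)
    cols = np.arange(col0, raw.shape[1], 2)
    interp = RegularGridInterpolator((rows, cols), sub, bounds_error=False, fill_value=None)
    rr, cc = np.meshgrid(np.arange(raw.shape[0]), np.arange(raw.shape[1]), indexing="ij")
    return interp(np.stack([rr, cc], axis=-1))

raw = np.random.rand(64, 64)                          # stand-in microgrid frame
i0   = demosaic_channel(raw, 0, 0)                    # 0 deg
i45  = demosaic_channel(raw, 0, 1)                    # 45 deg
i90  = demosaic_channel(raw, 1, 1)                    # 90 deg
i135 = demosaic_channel(raw, 1, 0)                    # 135 deg

s0 = 0.5 * (i0 + i45 + i90 + i135)
s1 = i0 - i90
s2 = i45 - i135
```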

  17. Evaluating suggestibility to additive and contradictory misinformation following explicit error detection in younger and older adults.

    PubMed

    Huff, Mark J; Umanath, Sharda

    2018-06-01

    In 2 experiments, we assessed age-related suggestibility to additive and contradictory misinformation (i.e., remembering of false details from an external source). After reading a fictional story, participants answered questions containing misleading details that were either additive (misleading details that supplemented an original event) or contradictory (errors that changed original details). On a final test, suggestibility was greater for additive than contradictory misinformation, and older adults endorsed fewer false contradictory details than younger adults. To mitigate suggestibility in Experiment 2, participants were warned about potential errors, instructed to detect errors, or instructed to detect errors after exposure to examples of additive and contradictory details. Again, suggestibility to additive misinformation was greater than contradictory, and older adults endorsed less contradictory misinformation. Only after detection instructions with misinformation examples were younger adults able to reduce contradictory misinformation effects and reduced these effects to the level of older adults. Additive misinformation, however, was immune to all warning and detection instructions. Thus, older adults were less susceptible to contradictory misinformation errors, and younger adults could match this misinformation rate when warning/detection instructions were strong. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Two-dimensional confocal laser scanning microscopy image correlation for nanoparticle flow velocimetry

    NASA Astrophysics Data System (ADS)

    Jun, Brian; Giarra, Matthew; Golz, Brian; Main, Russell; Vlachos, Pavlos

    2016-11-01

    We present a methodology to mitigate the major sources of error associated with two-dimensional confocal laser scanning microscopy (CLSM) images of nanoparticles flowing through a microfluidic channel. The correlation-based velocity measurements from CLSM images are subject to random error due to the Brownian motion of nanometer-sized tracer particles, and a bias error due to the formation of images by raster scanning. Here, we develop a novel ensemble phase correlation with dynamic optimal filter that maximizes the correlation strength, which diminishes the random error. In addition, we introduce an analytical model of CLSM measurement bias error correction due to two-dimensional image scanning of tracer particles. We tested our technique using both synthetic and experimental images of nanoparticles flowing through a microfluidic channel. We observed that our technique reduced the error by up to a factor of ten compared to ensemble standard cross correlation (SCC) for the images tested in the present work. Subsequently, we will assess our framework further, by interrogating nanoscale flow in the cell culture environment (transport within the lacunar-canalicular system) to demonstrate our ability to accurately resolve flow measurements in a biological system.
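
    The core of an ensemble phase-correlation displacement estimate can be sketched as below: the phase-only cross-spectra of many image pairs are averaged before the inverse transform, which suppresses the uncorrelated (e.g., Brownian) contribution of individual pairs. The dynamic optimal filter and the scan-bias correction described above are omitted, and the synthetic images are placeholders.

```python
# Ensemble phase correlation: average the normalized cross-power spectra over many
# image pairs, then locate the correlation peak to estimate the bulk displacement.
import numpy as np

def ensemble_phase_correlation(pairs):
    acc = np.zeros(pairs[0][0].shape, dtype=complex)
    for a, b in pairs:
        cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        acc += cross / (np.abs(cross) + 1e-12)            # phase-only spectrum
    corr = np.fft.ifft2(acc / len(pairs)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    shape = np.array(corr.shape)
    shift = np.where(peak > shape / 2, peak - shape, peak)  # unwrap circular shift
    return -shift                                           # displacement of b w.r.t. a

rng = np.random.default_rng(3)
base = rng.random((64, 64))
pairs = [(base + 0.05 * rng.random((64, 64)),
          np.roll(base, (3, 5), axis=(0, 1)) + 0.05 * rng.random((64, 64)))
         for _ in range(20)]
print(ensemble_phase_correlation(pairs))                   # approximately [3. 5.]
```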

  19. The price of complexity in financial networks

    PubMed Central

    May, Robert M.; Roukny, Tarik; Stiglitz, Joseph E.

    2016-01-01

    Financial institutions form multilayer networks by engaging in contracts with each other and by holding exposures to common assets. As a result, the default probability of one institution depends on the default probability of all of the other institutions in the network. Here, we show how small errors on the knowledge of the network of contracts can lead to large errors in the probability of systemic defaults. From the point of view of financial regulators, our findings show that the complexity of financial networks may decrease the ability to mitigate systemic risk, and thus it may increase the social cost of financial crises. PMID:27555583

  20. Pulse sequences for suppressing leakage in single-qubit gate operations

    NASA Astrophysics Data System (ADS)

    Ghosh, Joydip; Coppersmith, S. N.; Friesen, Mark

    2017-06-01

    Many realizations of solid-state qubits involve couplings to leakage states lying outside the computational subspace, posing a threat to high-fidelity quantum gate operations. Mitigating leakage errors is especially challenging when the coupling strength is unknown, e.g., when it is caused by noise. Here we show that simple pulse sequences can be used to strongly suppress leakage errors for a qubit embedded in a three-level system. As an example, we apply our scheme to the recently proposed charge quadrupole (CQ) qubit for quantum dots. These results provide a solution to a key challenge for fault-tolerant quantum computing with solid-state elements.

  1. Further evaluation of the constrained least squares electromagnetic compensation method

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1991-01-01

    Technologies exist for construction of antennas with adaptive surfaces that can compensate for many of the larger distortions caused by thermal and gravitational forces. However, as the frequency and size of reflectors increase, the subtle surface errors become significant and degrade the overall electromagnetic performance. Electromagnetic (EM) compensation through an adaptive feed array offers means for mitigation of surface distortion effects. Implementation of EM compensation is investigated with the measured surface errors of the NASA 15 meter hoop/column reflector antenna. Computer simulations are presented for: (1) a hybrid EM compensation technique, and (2) evaluating the performance of a given EM compensation method when implemented with discretized weights.

  2. EUV via hole pattern fidelity enhancement through novel resist and post-litho plasma treatment

    NASA Astrophysics Data System (ADS)

    Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Yamashita, Fumiko; Kaushik, Kumar; Morikita, Shinya; Ito, Kiyohito; Yoshimura, Shota; Timoshkov, Vadim; Maslow, Mark; Jee, Tae Kwon; Reijnen, Liesbeth; Choi, Peter; Feng, Mu; Spence, Chris; Schoofs, Stijn

    2018-03-01

    Extreme UV (EUV) lithography is a potential solution for sustainable scaling, and its adoption in high-volume manufacturing (HVM) is becoming increasingly realistic. The technology can mitigate many of the problems of 193-i multi-patterning (LELELE) for via hole patterning, which induces local pattern fidelity errors such as CDU, CER, and pattern placement error. EUV is therefore a desirable scaling driver; however, the resolution-LER-sensitivity (RLS) triangle remains a specific open technical issue. In this work, we examined hole pattern sensitizing (a lower-dose approach) using a hole pattern restoration technique named "CD-Healing" as a post-litho treatment.

  3. RECOVERY OF LARGE ANGULAR SCALE CMB POLARIZATION FOR INSTRUMENTS EMPLOYING VARIABLE-DELAY POLARIZATION MODULATORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, N. J.; Marriage, T. A.; Appel, J. W.

    2016-02-20

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.

  4. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating in scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filter and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
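
    The dissertation's truncation-error technique adds back a pre-computed mean of the error introduced by dropping least significant bits. The toy Python sketch below illustrates that idea on a long accumulation; the bit widths and data are invented and this is not the dissertation's exact scheme.

        import numpy as np

        # Toy illustration: truncate the b least significant bits of each addend in a long
        # accumulation, then compensate with the pre-computed mean truncation error
        # E = (2**b - 1) / 2 per addend (assumes roughly uniform low-order bits).
        rng = np.random.default_rng(0)
        x = rng.integers(0, 2**12, size=4096)      # hypothetical 12-bit samples
        b = 4                                       # LSBs dropped to shorten the datapath

        exact = x.sum()
        truncated = ((x >> b) << b).sum()           # sum of truncated addends
        compensated = truncated + round(len(x) * (2**b - 1) / 2)

        print("error without compensation:", exact - truncated)
        print("error with mean compensation:", exact - compensated)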

  5. Ocean Data Impacts in Global HYCOM

    DTIC Science & Technology

    2014-08-01

    The purpose of assimilation is to reduce the model initial condition error. Improved initial conditions should lead to an improved forecast...the determination of locations where forecast errors are sensitive to the initial conditions is essential for improving the data assimilation system...longwave radiation, total (large scale plus convective) precipitation, ground/sea temperature, zonal and meridional wind velocities at 10 m, mean sea

  6. Influencing Factors of the Initiation Point in the Parachute-Bomb Dynamic Detonation System

    NASA Astrophysics Data System (ADS)

    Qizhong, Li; Ye, Wang; Zhongqi, Wang; Chunhua, Bai

    2017-12-01

    The parachute system has been widely applied in modern armament design, especially for fuel-air explosives. Because detonation of fuel-air explosives occurs during flight, it is necessary to investigate the influences on the initiation point to ensure successful dynamic detonation. In practice, the initiating position falls within an area of the dispersed fuel rather than at a single point, owing to errors in the influencing factors. In this paper, the major influencing factors on the initiation point were explored through airdrop tests, and the relationship between the initiation point area and these factors was obtained. Based on this relationship, a volume equation for the initiation point area was established to predict the range of the initiation point in the fuel. The analysis showed that the initiation point scatters over an area on account of errors in the attitude angle, the secondary initiation charge velocity, and the delay time. The attitude angle was the major influencing factor along the horizontal axis, whereas the secondary initiation charge velocity and delay time were the major influencing factors along the vertical axis. Overall, the geometry of the initiation point area is a sector determined by the combined errors of the attitude angle, secondary initiation charge velocity, and delay time.

  7. Evolution of gossip-based indirect reciprocity on a bipartite network

    PubMed Central

    Giardini, Francesca; Vilone, Daniele

    2016-01-01

    Cooperation can be supported by indirect reciprocity via reputation. Thanks to gossip, reputations are built and circulated and humans can identify defectors and ostracise them. However, the evolutionary stability of gossip is allegedly undermined by the fact that it is more error-prone than direct observation, whereas ostracism could be ineffective if the partner selection mechanism is not robust. The aim of this work is to investigate the conditions under which the combination of gossip and ostracism might support cooperation in groups of different sizes. We are also interested in exploring the extent to which errors in transmission might undermine the reliability of gossip as a mechanism for identifying defectors. Our results show that a large quantity of gossip is necessary to support cooperation, and that group structure can mitigate the effects of errors in transmission. PMID:27885256

  8. Evolution of gossip-based indirect reciprocity on a bipartite network.

    PubMed

    Giardini, Francesca; Vilone, Daniele

    2016-11-25

    Cooperation can be supported by indirect reciprocity via reputation. Thanks to gossip, reputations are built and circulated and humans can identify defectors and ostracise them. However, the evolutionary stability of gossip is allegedly undermined by the fact that it is more error-prone than direct observation, whereas ostracism could be ineffective if the partner selection mechanism is not robust. The aim of this work is to investigate the conditions under which the combination of gossip and ostracism might support cooperation in groups of different sizes. We are also interested in exploring the extent to which errors in transmission might undermine the reliability of gossip as a mechanism for identifying defectors. Our results show that a large quantity of gossip is necessary to support cooperation, and that group structure can mitigate the effects of errors in transmission.

  9. Evolution of gossip-based indirect reciprocity on a bipartite network

    NASA Astrophysics Data System (ADS)

    Giardini, Francesca; Vilone, Daniele

    2016-11-01

    Cooperation can be supported by indirect reciprocity via reputation. Thanks to gossip, reputations are built and circulated and humans can identify defectors and ostracise them. However, the evolutionary stability of gossip is allegedly undermined by the fact that it is more error-prone than direct observation, whereas ostracism could be ineffective if the partner selection mechanism is not robust. The aim of this work is to investigate the conditions under which the combination of gossip and ostracism might support cooperation in groups of different sizes. We are also interested in exploring the extent to which errors in transmission might undermine the reliability of gossip as a mechanism for identifying defectors. Our results show that a large quantity of gossip is necessary to support cooperation, and that group structure can mitigate the effects of errors in transmission.

  10. Current pulse: can a production system reduce medical errors in health care?

    PubMed

    Printezis, Antonios; Gopalakrishnan, Mohan

    2007-01-01

    One of the reasons for rising health care costs is medical errors, a majority of which result from faulty systems and processes. Health care in the past has used process-based initiatives such as Total Quality Management, Continuous Quality Improvement, and Six Sigma to reduce errors. These initiatives to redesign health care, reduce errors, and improve overall efficiency and customer satisfaction have had moderate success. The current trend is to apply the successful Toyota Production System (TPS) to health care, since its organizing principles have led to tremendous improvement in productivity and quality for Toyota and other businesses that have adapted them. This article presents insights on the effectiveness of TPS principles in health care and the challenges that lie ahead in successfully integrating this approach with other quality initiatives.

  11. Experimental magic state distillation for fault-tolerant quantum computing.

    PubMed

    Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond

    2011-01-25

    Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.

  12. Reducing epistemic errors in water quality modelling through high-frequency data and stakeholder collaboration: the case of an industrial spill

    NASA Astrophysics Data System (ADS)

    Krueger, Tobias; Inman, Alex; Paling, Nick

    2014-05-01

    Catchment management, as driven by legislation such as the EU WFD or grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state-of-the-art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error, an industrial spill ignored in a water quality model, demonstrate the bias of the resulting model simulations, and show how the error was discovered somewhat incidentally by auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new approach to managing water resources in a more decentralised and collaborative way. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and Faecal Coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices. In the process of updating the model for the hydrological years 2008-2012, an over-prediction of the annual average P concentration by the model was found at one sub-catchment outlet compared to high-frequency measurements at this point that had become available through another UK government initiative, the Demonstration Test Catchments. This discrepancy had gone unnoticed when calibrating the model in a probabilistic framework against the statutory monitoring data due to the high uncertainties associated with their low-frequency monitoring regime. According to these data, what turned out to be an over-prediction seemed possible, albeit with low probability. It was only through the well-established contacts with the local stakeholders that this anomaly could be connected to an industrial spill elsewhere in the catchment, and the model eventually corrected for this additional source. Failing to account for this source would have resulted in drastic over-estimation of the contributions of other sources, in particular agriculture, and eventually wrong targeting of catchment restoration funds and collateral damage to stakeholder relations. The paper will conclude with a discussion of the following general points: the pretence of uncertainty frameworks in the light of epistemic errors; the value of high-frequency data; the value of stakeholder collaboration, particularly in the light of sharing sensitive information; the (somewhat incidental) synergies of various pieces of information and policy initiatives.
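
    The abstract turns on how an omitted point source biases export-coefficient source apportionment. A minimal, hypothetical Python sketch of that effect is shown below; the coefficients, areas, and loads are made up, and this is a generic export-coefficient calculation, not the ECM+ model itself.

        # Generic export-coefficient-style apportionment (illustrative only; not ECM+).
        # Annual P load = sum over diffuse sources of coefficient * extent, plus point inputs.
        diffuse = {                        # hypothetical coefficients (kg P / ha / yr) and areas (ha)
            "arable":    (0.8, 5000),
            "grassland": (0.4, 8000),
            "woodland":  (0.05, 2000),
        }
        sewage_kg = 1200.0                 # known point source
        spill_kg = 900.0                   # the accidental industrial spill

        loads = {name: coeff * area for name, (coeff, area) in diffuse.items()}
        loads["sewage"] = sewage_kg

        def shares(d):
            total = sum(d.values())
            return {k: round(v / total, 3) for k, v in d.items()}

        print("spill ignored :", shares(loads))                        # diffuse shares inflated
        print("spill included:", shares({**loads, "spill": spill_kg}))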

  13. An improved semi-implicit method for structural dynamics analysis

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1982-01-01

    A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.
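
    For readers unfamiliar with semi-implicit time stepping, the Python sketch below shows a generic semi-implicit (symplectic) Euler step for a single damped oscillator. It is only an assumed, minimal illustration of the idea and is not the augmented-matrix algorithm described in the record.

        # Generic semi-implicit (symplectic) Euler step for m*x'' + c*x' + k*x = 0.
        # Illustrative only; NOT the algorithm of the record.
        m, c, k = 1.0, 0.05, 4.0
        dt, n_steps = 0.01, 2000
        x, v = 1.0, 0.0
        for _ in range(n_steps):
            v += dt * (-(c * v + k * x) / m)   # update velocity from the current state
            x += dt * v                        # advance position with the updated velocity
        print(f"x(t={n_steps * dt:.1f}) = {x:.4f}")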

  14. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    NASA Astrophysics Data System (ADS)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  15. Mitigation of multipath effect in GNSS short baseline positioning by the multipath hemispherical map

    NASA Astrophysics Data System (ADS)

    Dong, D.; Wang, M.; Chen, W.; Zeng, Z.; Song, L.; Zhang, Q.; Cai, M.; Cheng, Y.; Lv, J.

    2016-03-01

    Multipath is one of the major error sources in high-accuracy GNSS positioning. Various hardware and software approaches have been developed to mitigate the multipath effect. Among them, the MHM (multipath hemispherical map) and sidereal filtering (SF)/advanced SF (ASF) approaches exploit the spatiotemporal repeatability of the multipath effect in a static environment, so they can be used to generate a multipath correction model for real-time GNSS data processing. We focus on the repeatability-based MHM and SF/ASF approaches and compare their performance for multipath reduction. The comparisons indicate that both the MHM and ASF approaches perform well, reducing the residual variance by about 50% over the short span (next 5 days) and maintaining roughly a 45% reduction over the longer span (next 6-25 days). The ASF model is more suitable for high-frequency multipath reduction, such as high-rate GNSS applications. The MHM model is easier to implement for real-time multipath mitigation when the overall multipath regime is of medium to low frequency.
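
    A minimal sketch of the MHM idea, under assumed grid resolution and synthetic residuals: bin post-fit residuals from previous (geometry-repeating) days on an azimuth/elevation grid and look up the bin mean as a correction. This is illustrative Python for the concept, not the authors' implementation.

        import numpy as np

        # Build an MHM-style correction grid (5 deg x 5 deg cells assumed) from residuals.
        def build_mhm(az, el, res, cell=5.0):
            na, ne = int(360 // cell), int(90 // cell)
            grid, count = np.zeros((na, ne)), np.zeros((na, ne))
            ia = (az // cell).astype(int) % na
            ie = np.clip((el // cell).astype(int), 0, ne - 1)
            np.add.at(grid, (ia, ie), res)
            np.add.at(count, (ia, ie), 1)
            return np.divide(grid, count, out=np.zeros_like(grid), where=count > 0)

        def mhm_correction(mhm, az, el, cell=5.0):
            ia = int(az // cell) % mhm.shape[0]
            ie = min(int(el // cell), mhm.shape[1] - 1)
            return mhm[ia, ie]

        # Usage with synthetic residuals standing in for previous days' data:
        rng = np.random.default_rng(1)
        az, el = rng.uniform(0, 360, 10000), rng.uniform(5, 90, 10000)
        res = 0.01 * np.sin(np.deg2rad(3 * az)) + 0.002 * rng.standard_normal(10000)
        mhm = build_mhm(az, el, res)
        print(f"correction near az=40, el=30: {mhm_correction(mhm, 40.0, 30.0) * 1000:.2f} mm")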

  16. Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers

    NASA Astrophysics Data System (ADS)

    Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.

    2018-04-01

    Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
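
    The core operation that SP-SAP builds on is ordinary subspace projection applied to an array covariance estimate. The hedged Python sketch below shows that single step for a synthetic 8-element array with one strong interferer; the steering vector, powers, and sample counts are invented, and the subarray partitioning that defines SP-SAP is not reproduced here.

        import numpy as np

        # Core subspace-projection step (not the full SP-SAP subarray algorithm):
        # estimate the array covariance, take the dominant eigenvector as the RFI
        # subspace, and apply the orthogonal projection P = I - u u^H to the data.
        rng = np.random.default_rng(2)
        n_ant, n_samp = 8, 4096
        steer = np.exp(2j * np.pi * 0.3 * np.arange(n_ant))            # hypothetical RFI steering vector
        rfi = steer[:, None] * (5.0 * rng.standard_normal(n_samp))     # strong interferer
        noise = (rng.standard_normal((n_ant, n_samp))
                 + 1j * rng.standard_normal((n_ant, n_samp))) / np.sqrt(2)
        x = rfi + noise

        R = x @ x.conj().T / n_samp
        eigvals, U = np.linalg.eigh(R)            # eigenvalues in ascending order
        u_rfi = U[:, -1:]                         # dominant (interference) subspace, rank 1 assumed
        P = np.eye(n_ant) - u_rfi @ u_rfi.conj().T
        x_clean = P @ x

        print("mean power before/after projection:",
              np.mean(np.abs(x)**2).round(3), np.mean(np.abs(x_clean)**2).round(3))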

  17. ADAPTATION AND GENERALIZATION TO OPPOSING PERTURBATIONS IN WALKING

    PubMed Central

    Bhatt, T.; Wang, T.-Y.; Yang, F.; Pai, Y.-C.

    2013-01-01

    Little is known on how the CNS would select its movement options when a person faces a novel or recurring perturbation of two opposing types (slip or trip) while walking. The purposes of this study were (1) to determine whether young adults’ adaptation to repeated slips would interfere with their recovery from a novel trip, and (2) to investigate the generalized strategies after they were exposed to a mixed training with both types of perturbation. Thirty-two young adults were assigned to either the training group, which first underwent repeated-slip training before encountering a novel, unannounced trip while walking, or to the control group, which only experienced the same novel, unannounced trip. The former group would then experience a mix of repeated trips and slips. The results indicated that prior adaptation to slips had only limited interference during the initial phase of trip recovery. In fact, the prior repeated-slip exposure had primed their reaction, which mitigated any error resulting from early interference. As a result, they did not have to take a longer compensatory step for trip recovery than did the controls. After the mixed training, subjects were able to converge effectively the motion state of their center of mass (in its position and velocity space) to a stable and generalized “middle ground” steady-state. Such movement strategies not only further strengthened their robust reactive control of stability, but also reduced the CNS’ overall reliance on accurate context prediction and on feedback correction of perturbation-induced movement error. PMID:23603517

  18. Threat and error management for anesthesiologists: a predictive risk taxonomy

    PubMed Central

    Ruskin, Keith J.; Stiegler, Marjorie P.; Park, Kellie; Guffey, Patrick; Kurup, Viji; Chidester, Thomas

    2015-01-01

    Purpose of review Patient care in the operating room is a dynamic interaction that requires cooperation among team members and reliance upon sophisticated technology. Most human factors research in medicine has been focused on analyzing errors and implementing system-wide changes to prevent them from recurring. We describe a set of techniques that has been used successfully by the aviation industry to analyze errors and adverse events and explain how these techniques can be applied to patient care. Recent findings Threat and error management (TEM) describes adverse events in terms of risks or challenges that are present in an operational environment (threats) and the actions of specific personnel that potentiate or exacerbate those threats (errors). TEM is a technique widely used in aviation, and can be adapted for use in a medical setting to predict high-risk situations and prevent errors in the perioperative period. A threat taxonomy is a novel way of classifying and predicting the hazards that can occur in the operating room. TEM can be used to identify error-producing situations, analyze adverse events, and design training scenarios. Summary TEM offers a multifaceted strategy for identifying hazards, reducing errors, and training physicians. A threat taxonomy may improve analysis of critical events with subsequent development of specific interventions, and may also serve as a framework for training programs in risk mitigation. PMID:24113268

  19. American Bar Association Supplementary Guidelines for the Mitigation Function of Defense Teams in Death Penalty Cases: Implications for Social Work

    ERIC Educational Resources Information Center

    Andrews, Arlene Bowers

    2012-01-01

    When a client faces a penalty of death, defense attorneys may call on social workers in many capacities: mitigation specialist, expert witness, consulting specialist, direct witness, or defense-initiated victim outreach worker. The American Bar Association set forth standards for capital defense attorneys, which led an interdisciplinary team to…

  20. A high-accuracy two-position alignment inertial navigation system for lunar rovers aided by a star sensor with a calibration and positioning function

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming

    2016-12-01

    An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers as it provides accurate and autonomous navigational information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter, in which the difference between the computed star vector and the measured star vector is used as the measurement. With the aid of a star sensor and the two positions, the attitudinal and positional errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. The semi-physical simulation results show that the attitudinal and positional errors converge to within 0.07″ and 0.1 m, respectively, when the given initial positional error is 1 km and the attitudinal error is 10°. These good results show that the proposed method can accomplish alignment, positioning and calibration functions simultaneously. Thus the proposed two-position initialization method has the potential for application in lunar rover navigation.
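
    The filter's measurement is the difference between the star vector computed from the INS attitude and the one measured by the star sensor. A minimal, generic Kalman measurement update using such an innovation is sketched below in Python; the state, matrices, and numbers are placeholders, not the paper's actual filter design.

        import numpy as np

        # Generic Kalman measurement update (placeholder matrices; not the paper's filter).
        # The innovation is the predicted-minus-measured star vector difference.
        def kalman_update(x, P, z_meas, z_pred, H, R):
            y = z_meas - z_pred                      # innovation (star-vector difference)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            x_new = x + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P
            return x_new, P_new

        # Toy usage: three misalignment states observed through an identity-like H.
        x, P = np.zeros(3), np.diag([1e-4] * 3)
        H, R = np.eye(3), np.diag([1e-8] * 3)
        z_pred = np.array([0.10, 0.20, 0.97])        # star vector computed from INS attitude
        z_meas = np.array([0.1001, 0.1999, 0.9700])  # star vector from the sensor
        x, P = kalman_update(x, P, z_meas, z_pred, H, R)
        print("estimated misalignment:", x)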

  1. Direct MSTID mitigation in precise GPS processing

    NASA Astrophysics Data System (ADS)

    Hernández-Pajares, Manuel; Wielgosz, Pawel; Paziewski, Jacek; Krypiak-Gregorczyk, Anna; Krukowska, Marta; Stepniak, Katarzyna; Kaplon, Jan; Hadas, Tomasz; Sosnica, Krzysztof; Bosy, Jaroslaw; Orus-Perez, Raul; Monte-Moreno, Enric; Yang, Heng; Garcia-Rigo, Alberto; Olivares-Pulido, Germán.

    2017-03-01

    In this paper, the authors summarize a simple and efficient approach developed to mitigate the problem in precise Global Navigation Satellite Systems (GNSS) positioning caused by the most frequent ionospheric wave signatures: medium-scale traveling ionospheric disturbances (MSTIDs). The direct GNSS Ionospheric Interferometry technique (hereinafter dGII), presented in this paper, is applied to correct MSTID effects in precise Real Time Kinematic (RTK) positioning and tropospheric determination. It is an evolution of the former climatic Differential Delay Mitigation Model for MSTIDs (DMTID), adapted to real-time conditions and using ionospheric data from a single permanent receiver only. The performance is demonstrated with networks of GNSS receivers in Poland, treated as users under real-time conditions, during two representative days in the winter and summer seasons (days 353 and 168 of year 2013). In the range domain, dGII typically reduces the ionospheric delay error by 10-90% of the value obtained when the MSTID mitigation model is not applied. The main impact of dGII on precise positioning is that a reliable RTK position can be obtained faster. In particular, the ambiguity success rate increases from 74% to 83% with respect to the original uncorrected observations, and the average time to first fix is shortened from 30 s to 13 s. The improvement in troposphere estimation, due to any potential impact of the MSTID mitigation model, was the most difficult to demonstrate.

  2. Risk mitigation of shared electronic records system in campus institutions: medical social work practice in Singapore.

    PubMed

    Ow Yong, Lai Meng; Tan, Amanda Wei Li; Loo, Cecilia Lay Keng; Lim, Esther Li Ping

    2014-10-01

    In 2013, the Singapore General Hospital (SGH) Campus initiated a shared electronic system in which patient records and documentation were standardized and shared across institutions within the Campus. The project was initiated to enhance the quality of health care, improve accessibility, and ensure integrated (as opposed to fragmented) care for the best outcomes in our patients. In mitigating the risks of ICT, familiarity with guiding ethical principles and adherence to regulatory and technical competencies in medical social work were found to be important, as was the ability to negotiate and maneuver within the large Campus environment to ensure a proactive, integrative process.

  3. On-line Education Initiatives to Galvanize Climate Mitigation in the Great Lakes Region

    NASA Astrophysics Data System (ADS)

    Mooney, M. E.; Ackerman, S. A.

    2014-12-01

    The Cooperative Institute for Meteorological Satellite Studies (CIMSS) is supporting two different on-line education initiatives that teach about climate change while emphasizing informed and effective responses. The first is an on-line introductory level course for undergraduate students (http://c3.ssec.wisc.edu/) offered through the University of Wisconsin-Madison Atmospheric and Oceanic Sciences (AOS) department. Along with a lighter carbon footprint and the convenience of web-based access, students interact via Drupal forums, Google hangouts and twitter. Activities include several pedagogical tools with sustainability-related content and a final project requiring a discussion of regionally relevant mitigation responses to achieve low emission scenarios for assigned locations. The other initiative is a MOOC (massive open online course) focusing on the changing weather and climate in the Great Lakes Region. This 4-week course is set to launch February 23 2015. One of the primary goals of this MOOC will be having participants change four habits, one per week. Each behavior change will provide a personal benefit to participating individuals while also helping to mitigate the collective impacts of climate change. This presentation will share strategies and insights from both projects.

  4. Scenario analysis of fertilizer management practices for N2O mitigation from corn systems in Canada.

    PubMed

    Abalos, Diego; Smith, Ward N; Grant, Brian B; Drury, Craig F; MacKell, Sarah; Wagner-Riddle, Claudia

    2016-12-15

    Effective management of nitrogen (N) fertilizer application by farmers provides great potential for reducing emissions of the potent greenhouse gas nitrous oxide (N2O). However, such potential is rarely achieved because our understanding of what practices (or combination of practices) lead to N2O reductions without compromising crop yields remains far from complete. Using scenario analysis with the process-based model DNDC, this study explored the effects of nine fertilizer practices on N2O emissions and crop yields from two corn production systems in Canada. The scenarios differed in: timing of fertilizer application, fertilizer rate, number of applications, fertilizer type, method of application and use of nitrification/urease inhibitors. Statistical analysis showed that during the initial calibration and validation stages the simulated results had no significant total error or bias compared to measured values, yet grain yield estimations warrant further model improvement. Sidedress fertilizer applications reduced yield-scaled N2O emissions by c. 60% compared to fall fertilization. Nitrification inhibitors further reduced yield-scaled N2O emissions by c. 10%; urease inhibitors had no effect on either N2O emissions or crop productivity. The combined adoption of split fertilizer application with inhibitors at a rate 10% lower than the conventional application rate (i.e. 150 kg N ha-1) was successful, but the benefits were lower than those achieved with single fertilization at sidedress. Our study provides a comprehensive assessment of fertilizer management practices that enables policy development regarding N2O mitigation from agricultural soils in Canada. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Simulations and Visualizations of Hurricane Sandy (2012) as Revealed by the NASA CAMVis

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen

    2013-01-01

    Storm Sandy first appeared as a tropical storm in the southern Caribbean Sea on Oct. 22, 2012, moved northeastward, turned northwestward, and made landfall near Brigantine, New Jersey in late October. Sandy devastated surrounding areas, caused an estimated $50 billion in damage, and became the second costliest tropical cyclone (TC) in U.S. history, surpassed only by Hurricane Katrina (2005). To save lives and mitigate economic damage, a central question to be addressed is to what extent the lead time of predictions of severe storms such as Sandy can be extended (e.g., Emanuel 2012; Kerr 2012). In this study, we present 10 numerical experiments initialized at 0000 and 1200 UTC Oct. 22-26, 2012, with the NASA coupled advanced global modeling and visualization systems (CAMVis). All of the predictions realistically capture Sandy's movement with the northwestward turn prior to its landfall. However, three experiments (initialized at 0000 UTC Oct. 22 and 24 and 1200 UTC Oct. 22) produce larger errors. Among the 10 experiments, the control run initialized at 0000 UTC Oct. 23 produces a remarkable 7-day forecast. To illustrate the impact of environmental flows on the predictability of Sandy, we produce and discuss four-dimensional (4-D) visualizations with the control run. The 4-D visualizations clearly demonstrate the following multiscale processes that led to the sinuous track of Sandy: the initial steering impact of an upper-level trough (appearing over the northwestern Caribbean Sea and Gulf of Mexico), the blocking impact of systems to the northeast of Sandy, and the binary interaction with a mid-latitude, upper-level trough that appeared at 130 degrees west longitude on Oct. 23, moved to the East Coast and intensified during the period of Oct. 29-30 prior to Sandy's landfall.

  6. Debris mitigation measures by satellite design and operational methods - Findings from the DLR space debris End-to-End Service

    NASA Astrophysics Data System (ADS)

    Sdunnus, H.; Beltrami, P.; Janovsky, R.; Koppenwallner, G.; Krag, H.; Reimerdes, H.; Schäfer, F.

    Debris mitigation has been recognised as an issue to be addressed by the space-faring nations around the world. Currently, various activities are under way aiming at the establishment of debris mitigation guidelines at various levels, reaching from the UN down to national space agencies. Though guidelines established at the national level already provide concrete information on how things should be done (rather than specifying what should be done or providing fundamental principles), potential users of the guidelines will still need to explore the technical, management, and financial implications of the guidelines for their projects. Those questions are addressed by the so-called "Space Debris End-to-End Service" project, which has been initiated as a national initiative of the German Aerospace Centre (DLR). Based on a review of existing mitigation guidelines and guidelines under development, and following an identification of needs from a circle of industrial users, the "End-to-End Service Guidelines" have been established for designers and operators of spacecraft. The End-to-End Service Guidelines are based on requirements addressed by the mitigation guidelines and provide recommendations on how and when the technical consideration of the mitigation guidelines should take place. By referencing requirements from the mitigation guidelines, the End-to-End Service Guidelines address the consideration of debris mitigation measures through spacecraft design and operational measures. This paper will give an introduction to the End-to-End Service Guidelines. It will focus on the proposals made for mitigation measures in the S/C system design, i.e. on protective design measures inside the spacecraft and on design measures such as innovative protective (shielding) systems. Furthermore, approaches to the analytical optimisation of protective systems will be presented, aiming at the minimisation of shield mass while conserving the protective effect. On the operational side, the possibility of supporting mitigation measures through radar observation will be addressed, as well as measures to minimise the risk during the satellite re-entry phase through the choice of proper re-entry parameters, spacecraft materials and design options.

  7. Possible sources of forecast errors generated by the global/regional assimilation and prediction system for landfalling tropical cyclones. Part I: Initial uncertainties

    NASA Astrophysics Data System (ADS)

    Zhou, Feifan; Yamaguchi, Munehiko; Qin, Xiaohao

    2016-07-01

    This paper investigates the possible sources of errors associated with tropical cyclone (TC) tracks forecasted using the Global/Regional Assimilation and Prediction System (GRAPES). The GRAPES forecasts were made for 16 landfalling TCs in the western North Pacific basin during the 2008 and 2009 seasons, with a forecast length of 72 hours, and using the default initial conditions ("initials", hereafter), which are from the NCEP-FNL dataset, as well as ECMWF initials. The forecasts are compared with ECMWF forecasts. The results show that in most TCs, the GRAPES forecasts are improved when using the ECMWF initials compared with the default initials. Compared with the ECMWF initials, the default initials produce lower intensity TCs and a lower intensity subtropical high, but a higher intensity South Asia high and monsoon trough, as well as a higher temperature but lower specific humidity at the TC center. Replacement of the geopotential height and wind fields with the ECMWF initials in and around the TC center at the initial time was found to be the most efficient way to improve the forecasts. In addition, TCs that showed the greatest improvement in forecast accuracy usually had the largest initial uncertainties in TC intensity and were usually in the intensifying phase. The results demonstrate the importance of the initial intensity for TC track forecasts made using GRAPES, and indicate the model is better in describing the intensifying phase than the decaying phase of TCs. Finally, the limit of the improvement indicates that the model error associated with GRAPES forecasts may be the main cause of poor forecasts of landfalling TCs. Thus, further examinations of the model errors are required.
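
    Track forecast errors of the kind discussed here are conventionally scored as the great-circle distance between the forecast and observed storm centers. A small Python helper (with made-up positions) is shown below for concreteness.

        import numpy as np

        # Track error as the great-circle distance between forecast and observed TC centers
        # (standard haversine formula; the positions below are hypothetical).
        def track_error_km(lat_f, lon_f, lat_o, lon_o, r_earth=6371.0):
            p1, p2 = np.deg2rad(lat_f), np.deg2rad(lat_o)
            dphi = p2 - p1
            dlmb = np.deg2rad(lon_o - lon_f)
            a = np.sin(dphi / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2)**2
            return 2 * r_earth * np.arcsin(np.sqrt(a))

        print(f"track error: {track_error_km(21.3, 128.5, 22.0, 127.9):.1f} km")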

  8. The Relationship between Occurrence Timing of Dispensing Errors and Subsequent Danger to Patients under the Situation According to the Classification of Drugs by Efficacy.

    PubMed

    Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro

    2016-01-01

    There are many reports regarding various medical institutions' attempts at the prevention of dispensing errors. However, the relationship between the occurrence timing of dispensing errors and the subsequent danger to patients has not been studied with drugs classified by efficacy. Therefore, we analyzed the relationship between position and time in the occurrence of dispensing errors, and investigated the relationship between their occurrence timing and the danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group), and into three classes in terms of the occurrence timing of dispensing errors (initial phase errors, middle phase errors, final phase errors). The rates of damage, i.e. of "dispensing errors" becoming "damage to patients", were then compared as an index of danger between the two groups and among the three classes. Consequently, the rate of damage in the efficacy similarity (-) group was significantly higher than that in the efficacy similarity (+) group. Furthermore, the rate of damage was highest for initial phase errors and lowest for final phase errors among the three classes. From the results of this study, it became clear that the earlier dispensing errors occur, the more severe the damage to patients becomes.

  9. Evolution of errors in the altimetric bathymetry model used by Google Earth and GEBCO

    NASA Astrophysics Data System (ADS)

    Marks, K. M.; Smith, W. H. F.; Sandwell, D. T.

    2010-09-01

    We analyze errors in the global bathymetry models of Smith and Sandwell that combine satellite altimetry with acoustic soundings and shorelines to estimate depths. Versions of these models have been incorporated into Google Earth and the General Bathymetric Chart of the Oceans (GEBCO). We use Japan Agency for Marine-Earth Science and Technology (JAMSTEC) multibeam surveys not previously incorporated into the models as "ground truth" to compare against model versions 7.2 through 12.1, defining vertical differences as "errors." Overall error statistics improve over time: 50th percentile errors declined from 57 to 55 to 49 m, and 90th percentile errors declined from 257 to 235 to 219 m, in versions 8.2, 11.1 and 12.1. This improvement is partly due to an increasing number of soundings incorporated into successive models, and partly to improvements in the satellite gravity model. Inspection of specific sites reveals that changes in the algorithms used to interpolate across survey gaps with altimetry have affected some errors. Versions 9.1 through 11.1 show a bias in the scaling from gravity in milliGals to topography in meters that affected the 15-160 km wavelength band. Regionally averaged (>160 km wavelength) depths have accumulated error over successive versions 9 through 11. These problems have been mitigated in version 12.1, which shows no systematic variation of errors with depth. Even so, version 12.1 is in some respects not as good as version 8.2, which employed a different algorithm.
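
    The percentile error statistics quoted above come from absolute model-minus-survey depth differences. A short Python sketch of that computation on synthetic depths is given below; the numbers are illustrative only.

        import numpy as np

        # Percentile errors as absolute model-minus-multibeam depth differences
        # (depths here are synthetic stand-ins for model and survey values).
        rng = np.random.default_rng(3)
        survey_depth = rng.uniform(200.0, 6000.0, 100000)                       # "ground truth" soundings
        model_depth = survey_depth + rng.normal(0.0, 90.0, survey_depth.size)   # model with error
        err = np.abs(model_depth - survey_depth)
        p50, p90 = np.percentile(err, [50, 90])
        print(f"50th percentile error: {p50:.0f} m, 90th percentile error: {p90:.0f} m")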

  10. Simulation and the future of military medicine.

    PubMed

    Leitch, Robert A; Moses, Gerald R; Magee, Harvey

    2002-04-01

    The U.S. military currently faces serious difficulties in training medical personnel in peacetime for the tasks of war. The military beneficiary population comprises fit young service men and women, their dependents, and retirees. Their peacetime care, although vital, does little to prepare military medical personnel for war. Medical commanders have instituted an array of training programs to compensate for this shortfall, but there remains a large gap between operational medical needs and training opportunities in peacetime. The military has begun to examine whether simulation can fill this gap. An array of commercial, off-the-shelf technologies are already being used with varying degrees of success, and major initiatives are under way in both academia and industry, supported by the military, to develop virtual reality products for combat medical training. Even as the military exploits emerging technology and begins to articulate a simulation strategy, there is a growing interest in civilian medicine in the potential for simulation to affect patient safety--how medical simulation might mitigate the injuries and deaths caused by medical errors--and how it might also improve the quality of medical education and training.

  11. The Greenwich Photo-heliographic Results (1874 - 1976): Initial Corrections to the Printed Publications

    NASA Astrophysics Data System (ADS)

    Erwin, E. H.; Coffey, H. E.; Denig, W. F.; Willis, D. M.; Henwood, R.; Wild, M. N.

    2013-11-01

    A new sunspot and faculae digital dataset for the interval 1874 - 1955 has been prepared under the auspices of the NOAA National Geophysical Data Center (NGDC). This digital dataset contains measurements of the positions and areas of both sunspots and faculae published initially by the Royal Observatory, Greenwich, and subsequently by the Royal Greenwich Observatory (RGO), under the title Greenwich Photo-heliographic Results (GPR), 1874 - 1976. Quality control (QC) procedures based on logical consistency have been used to identify the more obvious errors in the RGO publications. Typical examples of identifiable errors are North versus South errors in specifying heliographic latitude, errors in specifying heliographic (Carrington) longitude, errors in the dates and times, errors in sunspot group numbers, arithmetic errors in the summation process, and the occasional omission of solar ephemerides. Although the number of errors in the RGO publications is remarkably small, an initial table of necessary corrections is provided for the interval 1874 - 1917. Moreover, as noted in the preceding companion papers, the existence of two independently prepared digital datasets, which both contain information on sunspot positions and areas, makes it possible to outline a preliminary strategy for the development of an even more accurate digital dataset. Further work is in progress to generate an extremely reliable sunspot digital dataset, based on the long programme of solar observations supported first by the Royal Observatory, Greenwich, and then by the Royal Greenwich Observatory.

  12. Mitigating IASCC of Reactor Core Internals by Post-Irradiation Annealing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Was, Gary

    This final report summarizes research performed during the period between September 2012 and December 2016, with the objective of establishing the effectiveness of post-irradiation annealing (PIA) as an advanced mitigation strategy for irradiation-assisted stress corrosion cracking (IASCC). This was completed by using irradiated 304SS control blade material to conduct crack initiation and crack growth rate (CGR) experiments in simulated BWR environment. The mechanism by which PIA affects IASCC susceptibility will also be verified. The success of this project will provide a foundation for the use of PIA as a mitigation strategy for core internal components in commercial reactors.

  13. The Sources of Error in Spanish Writing.

    ERIC Educational Resources Information Center

    Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.

    1999-01-01

    Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish, and that substitution is the most frequent type of error. (RS)

  14. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    DOE PAGES

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; ...

    2018-02-12

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. Here, we use a superconducting-qubit-based processor to apply the QSE approach to the H 2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.
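
    The classical post-processing step of a quantum subspace expansion reduces to a small generalized eigenvalue problem H c = E S c, where H and S contain Hamiltonian and overlap matrix elements measured on the device. The Python sketch below solves such a problem for made-up 2x2 matrices; the values are illustrative and are not the H2 experiment's data.

        import numpy as np
        from scipy.linalg import eigh

        # QSE post-processing: solve H c = E S c for subspace energies (values invented).
        H = np.array([[-1.10, 0.15],
                      [ 0.15, -0.45]])       # measured Hamiltonian matrix elements
        S = np.array([[1.00, 0.05],
                      [0.05, 1.00]])          # measured overlap matrix elements
        energies, _ = eigh(H, S)              # generalized symmetric eigenproblem
        print("subspace energies (ground, excited):", energies.round(4))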

  15. Single-Frequency GPS Relative Navigation in a High Ionosphere Orbital Environment

    NASA Technical Reports Server (NTRS)

    Conrad, Patrick R.; Naasz, Bo J.

    2007-01-01

    The Global Positioning System (GPS) provides a convenient source for space vehicle relative navigation measurements, especially for low Earth orbit formation flying and autonomous rendezvous mission concepts. For single-frequency GPS receivers, ionospheric path delay can be a significant error source if not properly mitigated. In particular, ionospheric effects are known to cause significant radial position error bias and add dramatically to relative state estimation error if the onboard navigation software does not force the use of measurements from common or shared GPS space vehicles. Results from GPS navigation simulations are presented for a pair of space vehicles flying in formation and using GPS pseudorange measurements to perform absolute and relative orbit determination. With careful measurement selection techniques relative state estimation accuracy to less than 20 cm with standard GPS pseudorange processing and less than 10 cm with single-differenced pseudorange processing is shown.
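
    Forcing the use of measurements from common GPS space vehicles lets the relative navigation filter difference out errors shared by both receivers. The Python sketch below shows the arithmetic of a between-receiver single difference for one common satellite; the ranges and error magnitudes are invented.

        import numpy as np

        # Between-receiver single difference for a common satellite: errors shared by both
        # paths (satellite clock and, for short baselines, most of the ionospheric delay)
        # cancel, leaving the geometric range difference plus receiver-dependent terms.
        rho_a, rho_b = 21456321.7, 21455992.3     # true geometric ranges, receivers A and B (m)
        iono = 4.8                                # common ionospheric delay (m)
        sat_clk = -12.4                           # satellite clock error (m)
        noise = np.random.default_rng(4).normal(0.0, 0.5, 2)

        pr_a = rho_a + iono + sat_clk + noise[0]  # measured pseudoranges
        pr_b = rho_b + iono + sat_clk + noise[1]

        sd = pr_a - pr_b                          # single difference: iono and sat clock cancel
        print(f"single difference {sd:.2f} m vs true range difference {rho_a - rho_b:.2f} m")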

  16. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    NASA Astrophysics Data System (ADS)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; Blok, M. S.; Kimchi-Schwartz, M. E.; McClean, J. R.; Carter, J.; de Jong, W. A.; Siddiqi, I.

    2018-02-01

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  17. Performance evaluation of FSO system using wavelength and time diversity over Malaga turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Balaji, K. A.; Prabu, K.

    2018-03-01

    There is an immense demand for high-bandwidth, high-data-rate systems, which is fulfilled by wireless optical communication, or free space optics (FSO). FSO has therefore gained a pivotal role in research, with the added advantages of being cost-effective and offering huge, licence-free bandwidth. Unfortunately, the optical signal in free space suffers from irradiance and phase fluctuations due to atmospheric turbulence and pointing errors, which deteriorate the signal and degrade the performance of the communication system over longer distances. In this paper, we consider a polarization shift keying (POLSK) system combined with wavelength and time diversity techniques over the Malaga (M) distribution to mitigate turbulence-induced fading. We derive closed-form mathematical expressions for estimating the system's outage probability and average bit error rate (BER). From the results we can infer that the wavelength and time diversity schemes enhance the performance of these systems.

  18. Psychophysical measurements in children: challenges, pitfalls, and considerations.

    PubMed

    Witton, Caroline; Talcott, Joel B; Henning, G Bruce

    2017-01-01

    Measuring sensory sensitivity is important in studying development and developmental disorders. However, with children, there is a need to balance reliable but lengthy sensory tasks with the child's ability to maintain motivation and vigilance. We used simulations to explore the problems associated with shortening adaptive psychophysical procedures, and suggest how these problems might be addressed. We quantify how adaptive procedures with too few reversals can over-estimate thresholds, introduce substantial measurement error, and make estimates of individual thresholds less reliable. The associated measurement error also obscures group differences. Adaptive procedures with children should therefore use as many reversals as possible, to reduce the effects of both Type 1 and Type 2 errors. Differences in response consistency, resulting from lapses in attention, further increase the over-estimation of threshold. Comparisons between data from individuals who may differ in lapse rate are therefore problematic, but measures to estimate and account for lapse rates in analyses may mitigate this problem.
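
    To make the reversal-count issue concrete, the Python simulation below runs a toy 2-down/1-up staircase against a simulated observer and compares threshold estimates from few versus many reversals. The psychometric function, step size, and starting level are assumptions, not the authors' simulation settings.

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy 2-down/1-up staircase: two consecutive correct responses step the level down,
        # any error steps it up; the threshold estimate is the mean of the reversal levels.
        def run_staircase(true_thresh=10.0, slope=1.0, start=20.0, step=2.0, n_reversals=12):
            level, last_dir, correct_run = start, 0, 0
            reversals = []
            while len(reversals) < n_reversals:
                p = 1 / (1 + np.exp(-(level - true_thresh) / slope))  # simulated observer
                if rng.random() < p:                  # correct response
                    correct_run += 1
                    if correct_run < 2:
                        continue                      # 2-down rule: wait for the second correct
                    correct_run, move = 0, -1         # two in a row -> step down
                else:
                    correct_run, move = 0, +1         # any error -> step up
                if last_dir and move != last_dir:
                    reversals.append(level)           # direction changed: record a reversal
                last_dir = move
                level += step * move
            return np.mean(reversals)

        short = [run_staircase(n_reversals=4) for _ in range(500)]
        long_ = [run_staircase(n_reversals=12) for _ in range(500)]
        print("4 reversals : mean %.1f, sd %.1f" % (np.mean(short), np.std(short)))
        print("12 reversals: mean %.1f, sd %.1f" % (np.mean(long_), np.std(long_)))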

  19. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. Here, we use a superconducting-qubit-based processor to apply the QSE approach to the H 2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  20. Error Detection-Based Model to Assess Educational Outcomes in Crisis Resource Management Training: A Pilot Study.

    PubMed

    Bouhabel, Sarah; Kay-Rivest, Emily; Nhan, Carol; Bank, Ilana; Nugus, Peter; Fisher, Rachel; Nguyen, Lily Hp

    2017-06-01

    Otolaryngology-head and neck surgery (OTL-HNS) residents face a variety of difficult, high-stress situations, which may occur early in their training. Since these events occur infrequently, simulation-based learning has become an important part of residents' training and is already well established in fields such as anesthesia and emergency medicine. In the domain of OTL-HNS, it is gradually gaining in popularity. Crisis Resource Management (CRM), a program adapted from the aviation industry, aims to improve outcomes of crisis situations by attempting to mitigate human errors. Some examples of CRM principles include cultivating situational awareness; promoting proper use of available resources; and improving rapid decision making, particularly in high-acuity, low-frequency clinical situations. Our pilot project sought to integrate CRM principles into an airway simulation course for OTL-HNS residents, but most important, it evaluated whether learning objectives were met, through use of a novel error identification model.

  1. The Effect of Antenna Position Errors on Redundant-Baseline Calibration of HERA

    NASA Astrophysics Data System (ADS)

    Orosz, Naomi; Dillon, Joshua; Ewall-Wice, Aaron; Parsons, Aaron; HERA Collaboration

    2018-01-01

    HERA (the Hydrogen Epoch of Reionization Array) is a large, highly-redundant radio interferometer in South Africa currently being built out to 350 14-m dishes. Its mission is to probe large scale structure during and prior to the epoch of reionization using the 21 cm hyperfine transition of neutral hydrogen. The array is designed to be calibrated using redundant baselines of known lengths. However, the dishes can deviate from ideal positions, with errors on the order of a few centimeters. This potentially increases foreground contamination of the 21 cm power spectrum in the cleanest part of Fourier space. The calibration algorithm treats groups of baselines that should be redundant, but are not due to position errors, as if they actually are. Accurate, precise calibration is critical because the foreground signals are 100,000 times stronger than the reionization signal. We explain the origin of this effect and discuss weighting strategies to mitigate it.
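
    The leakage mechanism is simple to quantify for a single baseline: a baseline position error delta_b changes the interferometric phase by 2*pi/lambda times delta_b projected onto the source direction. The Python sketch below evaluates that phase error for invented, few-centimeter offsets on a nominal 14.6 m HERA-like baseline; the numbers are illustrative only.

        import numpy as np

        # Phase error on a nominally redundant baseline from antenna position offsets:
        # dphi = (2*pi/lambda) * (delta_b . s_hat). Values below are illustrative.
        lam = 2.0                                                      # ~150 MHz -> 2 m wavelength
        s_hat = np.array([0.3, 0.1, np.sqrt(1 - 0.3**2 - 0.1**2)])     # unit source direction
        offset_i = np.array([0.02, -0.01, 0.0])                        # few-cm position errors (m)
        offset_j = np.array([-0.01, 0.015, 0.0])

        delta_b = offset_j - offset_i                                  # baseline error vector
        dphi = 2 * np.pi / lam * np.dot(delta_b, s_hat)
        print(f"phase error on this baseline: {np.degrees(dphi):.2f} deg")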

  2. Neural Correlates of User-initiated Motor Success and Failure - A Brain-Computer Interface Perspective.

    PubMed

    Yazmir, Boris; Reiner, Miriam

    2018-05-15

    Any motor action is, by nature, potentially accompanied by human errors. In order to facilitate development of error-tailored Brain-Computer Interface (BCI) correction systems, we focused on internal, human-initiated errors, and investigated EEG correlates of user outcome successes and errors during a continuous 3D virtual tennis game against a computer player. We used a multisensory, 3D, highly immersive environment. Missing and repelling the tennis ball were considered, as 'error' (miss) and 'success' (repel). Unlike most previous studies, where the environment "encouraged" the participant to perform a mistake, here errors happened naturally, resulting from motor-perceptual-cognitive processes of incorrect estimation of the ball kinematics, and can be regarded as user internal, self-initiated errors. Results show distinct and well-defined Event-Related Potentials (ERPs), embedded in the ongoing EEG, that differ across conditions by waveforms, scalp signal distribution maps, source estimation results (sLORETA) and time-frequency patterns, establishing a series of typical features that allow valid discrimination between user internal outcome success and error. The significant delay in latency between positive peaks of error- and success-related ERPs, suggests a cross-talk between top-down and bottom-up processing, represented by an outcome recognition process, in the context of the game world. Success-related ERPs had a central scalp distribution, while error-related ERPs were centro-parietal. The unique characteristics and sharp differences between EEG correlates of error/success provide the crucial components for an improved BCI system. The features of the EEG waveform can be used to detect user action outcome, to be fed into the BCI correction system. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  3. The Success and Failure of Positive Action To Mitigate the Effects of an All-Male Senior Management Team in a Secondary School.

    ERIC Educational Resources Information Center

    Coldron, John; Boulton, Pam

    1998-01-01

    Considers the effectiveness of an initiative in a British secondary school taken to mitigate the effects of an all-male senior management team. Draws a partial balance sheet of its successes and failures, especially as related to effects on female school staff, and to friction in dealing with traditional structures. (DSK)

  4. Exploring the initial steps of the testing process: frequency and nature of pre-preanalytic errors.

    PubMed

    Carraro, Paolo; Zago, Tatiana; Plebani, Mario

    2012-03-01

    Few data are available on the nature of errors in the so-called pre-preanalytic phase, the initial steps of the testing process. We therefore sought to evaluate pre-preanalytic errors using a study design that enabled us to observe the initial procedures performed in the ward, from the physician's test request to the delivery of specimens in the clinical laboratory. After a 1-week direct observational phase designed to identify the operating procedures followed in 3 clinical wards, we recorded all nonconformities and errors occurring over a 6-month period. Overall, the study considered 8547 test requests, for which 15 917 blood sample tubes were collected and 52 982 tests undertaken. No significant differences in error rates were found between the observational phase and the overall study period, but underfilling of coagulation tubes was found to occur more frequently in the direct observational phase (P = 0.043). In the overall study period, the frequency of errors was found to be particularly high regarding order transmission [29 916 parts per million (ppm)] and hemolysed samples (2537 ppm). The frequency of patient misidentification was 352 ppm, and the most frequent nonconformities were test requests recorded in the diary without the patient's name and failure to check the patient's identity at the time of blood draw. The data collected in our study confirm the relative frequency of pre-preanalytic errors and underline the need to consensually prepare and adopt effective standard operating procedures in the initial steps of laboratory testing and to monitor compliance with these procedures over time.
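    For readers unfamiliar with the parts-per-million convention used above, the arithmetic is simply errors per million opportunities; in the snippet below only the 8547 test requests come from the abstract, and the error count is a hypothetical example.

```python
# Parts-per-million error rate: errors per million opportunities.
requests = 8547            # test requests observed in the study
errors = 3                 # hypothetical number of misidentified patients (not from the paper)
ppm = errors / requests * 1e6
print(f"{ppm:.0f} ppm")    # 3/8547 ~ 351 ppm, the same order as the reported 352 ppm
```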

  5. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system

    PubMed Central

    Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.

    2015-01-01

    Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702

  6. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  7. Phase stabilization of multidimensional amplification architectures for ultrashort pulses

    NASA Astrophysics Data System (ADS)

    Müller, M.; Kienel, M.; Klenke, A.; Eidam, T.; Limpert, J.; Tünnermann, A.

    2015-03-01

    The active phase stabilization of spatially and temporally combined ultrashort pulses is investigated theoretically and experimentally. In particular, for a combining scheme with 2 amplifier channels and 4 divided-pulse replicas, a bistable behavior is observed. The reason is a mutual influence of the optical error signals that is intrinsic to temporal polarization beam combining. A successful mitigation strategy is proposed and analyzed theoretically and experimentally.

  8. Command Process Modeling & Risk Analysis

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila

    2011-01-01

    Commanding errors may be caused by a variety of root causes. It is important to understand the relative significance of each of these causes for making institutional investment decisions. One of these causes is the lack of standardized processes and procedures for command and control. We mitigate this problem by building periodic tables and models corresponding to key functions within it. These models include simulation analysis and probabilistic risk assessment models.

  9. THE NONLINEAR RESPONSE OF NITRATE REPLACEMENT THAT MITIGATES SULFATE REDUCTIONS: THE GAS RATIO AS AN INDICATOR AND SENSITIVITY TO ERRORS IN TOTAL AMMONIA AND TOTAL NITRATE

    EPA Science Inventory

    The poster presents an assessment, using the CMAQ air quality model, showing the inorganic gas ratio (the ratio of free ammonia to total nitrate) can function as a screening indicator of the winter replacement of sulfate by nitrate when sulfate is reduced. It also presents an as...

  10. Method of Error Floor Mitigation in Low-Density Parity-Check Codes

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon (Inventor)

    2014-01-01

    A digital communication decoding method for low-density parity-check coded messages. The decoding method decodes the low-density parity-check coded messages within a bipartite graph having check nodes and variable nodes. Messages from check nodes are partially hard limited, so that every message which would otherwise have a magnitude at or above a certain level is re-assigned to a maximum magnitude.
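    A small numpy sketch of the partial hard-limiting idea described above, applied to check-node messages in an LLR-domain decoder; the threshold, saturation value, and example messages are illustrative assumptions, not the patented method's parameters.

```python
import numpy as np

def partial_hard_limit(msgs, level=2.0, max_mag=4.0):
    """Re-assign any check-node message with |m| >= level to +/- max_mag.

    Sketch of 'partial hard limiting': strong beliefs are saturated to a fixed
    magnitude (helping suppress the trapping-set behaviour behind error floors),
    while weak, uncertain messages pass through unchanged.
    """
    out = msgs.copy()
    strong = np.abs(out) >= level
    out[strong] = np.sign(out[strong]) * max_mag
    return out

# Example: LLR-domain messages from a check node to its variable nodes.
msgs = np.array([0.3, -1.7, 2.5, -6.0, 0.0])
print(partial_hard_limit(msgs))   # -> [ 0.3 -1.7  4.  -4.   0. ]
```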

  11. Family Interference with Work and Workplace Cognitive Failure: The Mitigating Role of Recovery Experiences

    ERIC Educational Resources Information Center

    Lapierre, Laurent M.; Hammer, Leslie B.; Truxillo, Donald M.; Murphy, Lauren A.

    2012-01-01

    The first goal of this study was to test whether family interference with work (FIW) is positively related to increased workplace cognitive failure (WCF), which is defined as errors made at work that indicate lapses in memory (e.g., failing to recall work procedures), attention (e.g., not fully listening to instruction), and motor function (e.g.,…

  12. A geometric model for initial orientation errors in pigeon navigation.

    PubMed

    Postlethwaite, Claire M; Walker, Michael M

    2011-01-21

    All mobile animals respond to gradients in signals in their environment, such as light, sound, odours and magnetic and electric fields, but it remains controversial how they might use these signals to navigate over long distances. The Earth's surface is essentially two-dimensional, so two stimuli are needed to act as coordinates for navigation. However, no environmental fields are known to be simple enough to act as perpendicular coordinates on a two-dimensional grid. Here, we propose a model for navigation in which we assume that an animal has a simplified 'cognitive map' in which environmental stimuli act as perpendicular coordinates. We then investigate how systematic deviation of the contour lines of the environmental signals from a simple orthogonal arrangement can cause errors in position determination and lead to systematic patterns of directional errors in initial homing directions taken by pigeons. The model reproduces patterns of initial orientation errors seen in previously collected data from homing pigeons, predicts that errors should increase with distance from the loft, and provides a basis for efforts to identify further sources of orientation errors made by homing pigeons. Copyright © 2010 Elsevier Ltd. All rights reserved.
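    A rough numerical illustration (not the authors' model code) of the core idea: if the two environmental gradients are not actually perpendicular but the animal's 'cognitive map' assumes they are, the initial homeward bearing acquires a systematic, release-site-dependent error. The 20° skew and release geometry below are arbitrary assumptions.

```python
import numpy as np

# Two environmental gradients; the bird's map assumes they are perpendicular,
# but the second gradient is actually tilted by 'skew'.
skew = np.deg2rad(20.0)
u1 = np.array([1.0, 0.0])                    # true direction of gradient 1
u2 = np.array([np.sin(skew), np.cos(skew)])  # true direction of gradient 2 (not orthogonal)
loft = np.array([0.0, 0.0])

def bearings(release_point):
    # Coordinates the bird reads are projections onto the true gradient directions,
    # but it interprets them as orthogonal (x, y) map coordinates.
    s = np.array([release_point @ u1, release_point @ u2])
    perceived_home = -s                       # loft sits at the coordinate origin
    true_home = loft - release_point
    to_deg = lambda v: np.degrees(np.arctan2(v[1], v[0]))
    return to_deg(perceived_home), to_deg(true_home)

for bearing in range(0, 360, 45):             # release sites around the loft
    p = 10.0 * np.array([np.cos(np.deg2rad(bearing)), np.sin(np.deg2rad(bearing))])
    perceived, true = bearings(p)
    err = (perceived - true + 180) % 360 - 180
    print(f"release bearing {bearing:3d} deg: initial orientation error {err:+6.1f} deg")
```

    The error pattern varies systematically with the direction of the release site from the loft, which is the qualitative behaviour the paper compares against homing-pigeon data.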

  13. Evaluation of potential severe accidents during low power and shutdown operations at Surry, Unit 1: Analysis of core damage frequency from internal events during mid-loop operations, Appendices A--D. Volume 2, Part 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, T.L.; Musicki, Z.; Kohut, P.

    1994-06-01

    During 1989, the Nuclear Regulatory Commission (NRC) initiated an extensive program to carefully examine the potential risks during low power and shutdown operations. The program includes two parallel projects being performed by Brookhaven National Laboratory (BNL) and Sandia National Laboratories (SNL). Two plants, Surry (pressurized water reactor) and Grand Gulf (boiling water reactor), were selected as the plants to be studied. The objectives of the program are to assess the risks of severe accidents initiated during plant operational states other than full power operation and to compare the estimated core damage frequencies, important accident sequences and other qualitative and quantitative results with those accidents initiated during full power operation as assessed in NUREG-1150. The objective of this report is to document the approach utilized in the Surry plant and discuss the results obtained. A parallel report for the Grand Gulf plant is prepared by SNL. This study shows that the core-damage frequency during mid-loop operation at the Surry plant is comparable to that of power operation. We recognize that there is very large uncertainty in the human error probabilities in this study. This study identified that only a few procedures are available for mitigating accidents that may occur during shutdown. Procedures written specifically for shutdown accidents would be useful. This document, Volume 2, Pt. 2, provides appendices A through D of this report.

  14. Effects of control parameters of three-point initiation on the formation of an explosively formed projectile with fins

    NASA Astrophysics Data System (ADS)

    Li, R.; Li, W. B.; Wang, X. M.; Li, W. B.

    2018-03-01

    The effects of the initiation diameter and synchronicity error on the formation of fins and stable-flight velocity of an explosively formed projectile (EFP) with three-point initiation are investigated. The pressure and area of the Mach wave acting on the metal liner at different initiation diameters are calculated employing the Whitham method. LS-DYNA software is used to investigate the asymmetric collision of detonation waves resulting from three-point initiation synchronicity error, the distortion characteristics of the liner resulting from the composite detonation waves, and the performance parameters of the EFP with fins. Results indicate that deviations of the Y-shaped high-pressure zone and central ultrahigh-pressure zone from the liner center can be attributed to the error of three-point initiation, which leads to the irregular formation of EFP fins. It is noted that the area of the Mach wave decreases, but the pressure of the Mach wave and the final speed and length-to-diameter (L/D) ratio of the EFP increase, benefiting the formation of the EFP fins, as the initiation diameter increases.

  15. The Sustained Influence of an Error on Future Decision-Making.

    PubMed

    Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel

    2017-01-01

    Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
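    A toy drift-diffusion simulation (assumed parameters, not the authors' fitted model) showing why raising the decision threshold after an error produces slower but more accurate responses, the adaptive component described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(drift, threshold, noise=1.0, dt=0.001, max_t=3.0):
    """Return (reaction_time, correct) for one drift-diffusion trial."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x >= threshold            # upper bound = correct response

def run(threshold, n=500, drift=1.0):
    trials = [simulate_ddm(drift, threshold) for _ in range(n)]
    rts = np.array([t for t, _ in trials])
    acc = np.mean([c for _, c in trials])
    return rts.mean(), acc

for label, a in [("baseline threshold", 0.8), ("post-error (raised) threshold", 1.2)]:
    mean_rt, acc = run(a)
    print(f"{label:30s} mean RT = {mean_rt:.3f} s, accuracy = {acc:.2f}")
```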

  16. Scattering properties of ultrafast laser-induced refractive index shaping lenticular structures in hydrogels

    NASA Astrophysics Data System (ADS)

    Wozniak, Kaitlin T.; Germer, Thomas A.; Butler, Sam C.; Brooks, Daniel R.; Huxlin, Krystel R.; Ellis, Jonathan D.

    2018-02-01

    We present measurements of light scatter induced by a new ultrafast laser technique being developed for laser refractive correction in transparent ophthalmic materials such as cornea, contact lenses, and/or intraocular lenses. In this new technique, called intra-tissue refractive index shaping (IRIS), a 405 nm femtosecond laser is focused and scanned below the corneal surface, inducing a spatially-varying refractive index change that corrects vision errors. In contrast with traditional laser correction techniques, such as laser in-situ keratomileusis (LASIK) or photorefractive keratectomy (PRK), IRIS does not operate via photoablation, but rather changes the refractive index of transparent materials such as cornea and hydrogels. A concern with any laser eye correction technique is additional scatter induced by the process, which can adversely affect vision, especially at night. The goal of this investigation is to identify sources of scatter induced by IRIS and to mitigate possible effects on visual performance in ophthalmic applications. Preliminary light scattering measurements on patterns written into hydrogel showed four sources of scatter, differentiated by distinct behaviors: (1) scattering from scanned lines; (2) scattering from stitching errors, resulting from adjacent scanning fields not being aligned to one another; (3) diffraction from Fresnel zone discontinuities; and (4) long-period variations in the scans that created distinct diffraction peaks, likely due to inconsistent line spacing in the writing instrument. By knowing the nature of these different scattering errors, it will now be possible to modify and optimize the design of IRIS structures to mitigate potential deficits in visual performance in human clinical trials.

  17. Hospital staff registered nurses' perception of horizontal violence, peer relationships, and the quality and safety of patient care.

    PubMed

    Purpora, Christina; Blegen, Mary A; Stotts, Nancy A

    2015-01-01

    To test hypotheses from a model of horizontal violence and the quality and safety of patient care: horizontal violence (negative behavior among peers) is inversely related to peer relations and quality of care, and positively related to errors and adverse events. Additionally, the association between horizontal violence, peer relations, quality of care, errors and adverse events, and nurse and work characteristics was determined. A random sample (n = 175) of hospital staff Registered Nurses working in California participated via survey. Bivariate and multivariate analyses tested the study hypotheses. Hypotheses were supported. Horizontal violence was inversely related to peer relations and quality of care, and positively related to errors and adverse events. Including peer relations in the analyses altered the relationship between horizontal violence and quality of care but not between horizontal violence, errors and adverse events. Nurse and hospital characteristics were not related to other variables. Clinical area contributed significantly in predicting the quality of care, errors and adverse events, but not peer relationships. Horizontal violence affects peer relationships and the quality and safety of patient care as perceived by participating nurses. Supportive peer relationships are important to mitigate the impact of horizontal violence on quality of care.

  18. A Starshade Petal Error Budget for Exo-Earth Detection and Characterization

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; Marchen, Luis; Lisman, P. Douglas; Cady, Eric; Martin, Stefan; Thomson, Mark; Dumont, Philip; Kasdin, N. Jeremy

    2011-01-01

    We present a starshade error budget with engineering requirements that are well within the current manufacturing and metrology capabilities. The error budget is based on an observational scenario in which the starshade spins about its axis on timescales short relative to the zodi-limited integration time, typically several hours. The scatter from localized petal errors is smoothed into annuli around the center of the image plane, resulting in a large reduction in the background flux variation while reducing thermal gradients caused by structural shadowing. Having identified the performance sensitivity to petal shape errors with spatial periods of 3-4 cycles/petal as the most challenging aspect of the design, we have adopted and modeled a manufacturing approach that mitigates these perturbations with 1-meter-long precision edge segments positioned using commercial metrology that readily meets assembly requirements. We have performed detailed thermal modeling and show that the expected thermal deformations are well within the requirements as well. We compare the requirements for four cases: a 32 meter diameter starshade with a 1.5 meter telescope, analyzed at 75 and 90 milliarcseconds, and a 40 meter diameter starshade with a 4 meter telescope, analyzed at 60 and 75 milliarcseconds.

  19. Measurement uncertainty relations: characterising optimal error bounds for qubits

    NASA Astrophysics Data System (ADS)

    Bullock, T.; Busch, P.

    2018-07-01

    In standard formulations of the uncertainty principle, two fundamental features are typically cast as impossibility statements: two noncommuting observables cannot in general both be sharply defined (for the same state), nor can they be measured jointly. The pioneers of quantum mechanics were acutely aware and puzzled by this fact, and it motivated Heisenberg to seek a mitigation, which he formulated in his seminal paper of 1927. He provided intuitive arguments to show that the values of, say, the position and momentum of a particle can at least be unsharply defined, and they can be measured together provided some approximation errors are allowed. Only now, nine decades later, a working theory of approximate joint measurements is taking shape, leading to rigorous and experimentally testable formulations of associated error tradeoff relations. Here we briefly review this new development, explaining the concepts and steps taken in the construction of optimal joint approximations of pairs of incompatible observables. As a case study, we deduce measurement uncertainty relations for qubit observables using two distinct error measures. We provide an operational interpretation of the error bounds and discuss some of the first experimental tests of such relations.

  20. Measurement error in environmental epidemiology and the shape of exposure-response curves.

    PubMed

    Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E

    2011-09-01

    Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health.
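    A short simulation (invented values) of the classical-error attenuation described above: random error in the exposure variable flattens the fitted slope by roughly the reliability ratio, the mechanism that can make a steep or threshold-bearing relationship look linear.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

true_exposure = rng.normal(0.0, 1.0, n)                      # exposure actually experienced
response = 2.0 * true_exposure + rng.normal(0.0, 1.0, n)     # linear truth, slope = 2

sigma_u = 1.0                                                # classical measurement error s.d.
measured = true_exposure + rng.normal(0.0, sigma_u, n)       # what the study records

slope_true = np.polyfit(true_exposure, response, 1)[0]
slope_meas = np.polyfit(measured, response, 1)[0]
reliability = 1.0 / (1.0 + sigma_u**2)                       # sigma_x^2 / (sigma_x^2 + sigma_u^2)

print(f"slope on true exposure:     {slope_true:.2f}")
print(f"slope on measured exposure: {slope_meas:.2f}  (~ 2 x {reliability:.2f})")
```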

  1. Mitigation of Atmospheric Errors in Differential InSAR Data Using a High-Resolution Weather Model, Mauna Loa and Kilauea volcanoes, Hawaii

    NASA Astrophysics Data System (ADS)

    Brooks, B. A.; Foster, J.; Cherubini, T.; Businger, S.; Miklius, A.; Mouginis-Mark, P.

    2004-12-01

    We investigate the utility of a mesoscale weather model for mitigating atmospheric errors in InSAR-derived displacement fields associated with volcanotectonic phenomena at Mauna Loa and Kilauea volcanoes on the Island of Hawaii. The MM5 (NCAR-Penn State Mesoscale Model Version 5) forecast is run twice daily for the island of Hawaii by the Mauna Kea Weather Center in support of Mauna Kea's astronomical observatories. MM5 has a 60-hour forecast window and the forecast fields are output in 3-hour increments. A high-resolution analysis that incorporates weather observations from National Weather Service and satellite-derived winds from the University of Wisconsin, provides the initial conditions for MM5. In turn, MM5 produces predictions of thermodynamic properties of the atmosphere, including temperature, pressure, and moisture fields at a 3 km horizontal resolution. The vertical resolution is density weighted with the greatest vertical resolution (10s of meters) near the surface. For any radar image acquisition there is a high-resolution 3D simulation of atmospheric water vapor valid within 1.5 hours of the acquisition time and predicted forward no more than 12 hours from the initial observations. Using MM5 forecast water vapor, we create line-of-sight delay maps that can be either directly removed from InSAR differential pairs or used to create synthetic radar interferograms that can be compared with the observed interferogram. We analyze Envisat ASAR radar data collected during 2003-2004 and find, qualitatively, that contours of excess path delay in MM5 model runs often closely mimic both topographic contours and fringes observed in differential interferograms which have had topographic phase removed. Over periods of days, the delays can vary in magnitude and spatial scale by 10s of mms and 10s of kms, respectively. Individual models can predict peak delays associated with moist layer inversions of order ~20 mm around the summits of Mauna Loa and Kilauea, both active volcanoes. Similar delays are also associated with the inactive Mauna Kea summit. These results are particularly pertinent for using InSAR to track accurately the history of summit-related magmatic phenomena between scene acquisitions. The path delays would introduce a significant atmosphere-related bias if included in a deformation analysis. To validate the technique, we compare total delays and precipitable water data from more than 20 continuous GPS (CGPS) sites with those estimated by the MM5 model. We investigate the technique's efficacy by inverting InSAR-derived line-of-sight displacement fields for the time-varying characteristics of a shallow magma chamber below Mauna Loa's summit caldera. Finally, we compare these results with independent inversion of summit CGPS data.
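    As a back-of-the-envelope illustration of the delay magnitudes quoted above (not the authors' MM5 processing chain), precipitable water vapor can be converted to a zenith wet delay with a commonly used scaling factor of roughly 6.2, then mapped to the radar line of sight; the factor and the incidence angle below are assumptions.

```python
import numpy as np

def los_wet_delay_mm(pwv_mm, incidence_deg=23.0, pi_factor=6.2):
    """Approximate InSAR line-of-sight wet delay from precipitable water vapor.

    zenith wet delay  ZWD ~ pi_factor * PWV   (pi_factor ~ 6.2, weakly temperature-dependent)
    LOS delay         = ZWD / cos(incidence)  (simple flat-layer mapping)
    """
    zwd = pi_factor * pwv_mm
    return zwd / np.cos(np.radians(incidence_deg))

# Example: a 3 mm difference in PWV between two acquisitions over a summit.
print(f"{los_wet_delay_mm(3.0):.1f} mm of apparent LOS change")   # ~20 mm
```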

  2. Optimal error analysis of the intraseasonal convection due to uncertainties of the sea surface temperature in a coupled model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojing; Tang, Youmin; Yao, Zhixiong

    2017-04-01

    The predictability of the convection related to the Madden-Julian Oscillation (MJO) is studied using the coupled model CESM (Community Earth System Model) and the climatically relevant singular vector (CSV) approach. The CSV approach is an ensemble-based strategy for calculating the optimal initial error on climate scales. In this study, we focus on the optimal initial error of the sea surface temperature in the Indian Ocean, where the MJO onset occurs. Six MJO events are chosen from 10 years of model simulation output. The results show that the large values of the SVs are mainly located in the Bay of Bengal and the south central Indian Ocean (around (25°S, 90°E)), forming a meridional dipole-like pattern. The fast error growth of the CSVs has important impacts on the prediction of the convection related to the MJO. Initial perturbations with the SV pattern cause the deep convection to damp more quickly in the east Pacific Ocean. Moreover, sensitivity studies of the CSVs show that different initial fields do not affect the CSVs appreciably, whereas the perturbation domain has a stronger influence on them. The rapid growth of the CSVs is found to be related to the western Bay of Bengal, where the wind stress starts to be perturbed due to the CSV initial error. These results contribute to the establishment of an ensemble prediction system, as well as an optimal observation network. In addition, the analysis of the error growth can provide some insight into the relationship between SST and the intraseasonal convection related to the MJO.
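    A bare-bones numpy illustration of the ensemble-based singular-vector idea (estimate a linear propagator from initial and final ensemble perturbations, then take its leading right singular vector as the fastest-growing initial error); the state size, ensemble size, and stand-in dynamics are invented and this is not the CESM/CSV code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ens = 200, 30          # SST grid points and ensemble size (illustrative)

# Ensemble perturbations at the initial time (columns) and after the forecast window.
X0 = rng.normal(size=(n_state, n_ens))
M_true = rng.normal(scale=0.3, size=(n_state, n_state))   # stand-in "model dynamics"
X1 = M_true @ X0

# Estimate the propagator that maps initial to final perturbations (within the
# ensemble subspace), then take its SVD: the leading right singular vector is the
# optimal (fastest-growing) initial error, and the singular value is its growth.
M_est = X1 @ np.linalg.pinv(X0)
U, s, Vt = np.linalg.svd(M_est)
optimal_initial_error = Vt[0]
print("top growth factors:", np.round(s[:3], 2))
print("optimal initial SST error pattern shape:", optimal_initial_error.shape)
```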

  3. Ensemble Data Assimilation of Photovoltaic Power Information in the Convection-permitting High-Resolution Model COSMO-DE

    NASA Astrophysics Data System (ADS)

    Declair, Stefan; Saint-Drenan, Yves-Marie; Potthast, Roland

    2017-04-01

    Determining the amount of weather-dependent renewable energy is a demanding task for transmission system operators (TSOs), and wind and photovoltaic (PV) prediction errors require the use of reserve power, which generates costs and can, in extreme cases, endanger the security of supply. In the project EWeLiNE, funded by the German government, the German Weather Service and the Fraunhofer Institute on Wind Energy and Energy System Technology develop innovative weather and power forecasting models and tools for grid integration of weather-dependent renewable energy. The key part of energy prediction process chains is the numerical weather prediction (NWP) system. Irradiation forecasts from NWP systems are, however, subject to several sources of error. For PV power prediction, weaknesses of the NWP model in correctly forecasting, e.g., low stratus, absorption of condensed water, or aerosol optical depths are the main sources of error. Inaccurate radiation schemes (e.g., the two-stream parametrization) are also a known deficit of NWP systems with regard to irradiation forecasts. To mitigate errors like these, the latest observations can be used in a pre-processing technique called data assimilation (DA). In DA, not only are the initial fields provided, but the model is also synchronized with reality - the observations - and hence forecast errors are reduced. Besides conventional observation networks like radiosondes, synoptic observations or aircraft reports of wind, pressure and humidity, the number of observations measuring meteorological information indirectly by means of remote sensing, such as satellite radiances, radar reflectivities or GPS slant delays, is increasing strongly. The numerous PV plants installed in Germany potentially represent a dense meteorological network assessing irradiation through their power measurements. Forecast accuracy may thus be enhanced by extending the observations in the assimilation with this new source of information. PV power plants can provide information on clouds, aerosol optical depth or low stratus in the sense of remote sensing: the power output depends strongly on perturbations along the slant path between the sun position and the PV panel. Since these data are not limited to the vertical column above or below the detector, they may complement satellite data and compensate for weaknesses in the radiation scheme. In this contribution, the DA technique used (Local Ensemble Transform Kalman Filter, LETKF) is briefly sketched. Furthermore, the computation of the model power equivalents is described, and first results are presented and discussed.
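    To make the assimilation step concrete, here is a generic stochastic ensemble Kalman filter update in numpy (not DWD's LETKF implementation) showing how PV power observations could nudge a model irradiance state; the observation operator, error levels, and toy numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(X, y, H, obs_err_std):
    """Stochastic EnKF analysis step.

    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector (e.g., normalized PV power)
    H : (n_obs, n_state) observation operator (e.g., irradiance -> PV power)
    """
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)                  # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    R = obs_err_std**2 * np.eye(n_obs)
    # Kalman gain from ensemble-estimated covariances.
    K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (n_ens - 1) * R)
    Y = y[:, None] + obs_err_std * rng.standard_normal((n_obs, n_ens))  # perturbed obs
    return X + K @ (Y - HX)

# Toy example: 5-point irradiance field, 20 members, 2 PV "stations".
X = 600.0 + 80.0 * rng.standard_normal((5, 20))
H = np.array([[0.001, 0, 0, 0, 0],
              [0, 0, 0.001, 0, 0]])                        # power ~ 0.001 * local irradiance
y = np.array([0.75, 0.52])                                 # observed normalized PV power
Xa = enkf_update(X, y, H, obs_err_std=0.05)
print("prior mean irradiance:    ", X.mean(axis=1).round(1))
print("posterior mean irradiance:", Xa.mean(axis=1).round(1))
```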

  4. Linking models and data on vegetation structure

    NASA Astrophysics Data System (ADS)

    Hurtt, G. C.; Fisk, J.; Thomas, R. Q.; Dubayah, R.; Moorcroft, P. R.; Shugart, H. H.

    2010-06-01

    For more than a century, scientists have recognized the importance of vegetation structure in understanding forest dynamics. Now future satellite missions such as Deformation, Ecosystem Structure, and Dynamics of Ice (DESDynI) hold the potential to provide unprecedented global data on vegetation structure needed to reduce uncertainties in terrestrial carbon dynamics. Here, we briefly review the uses of data on vegetation structure in ecosystem models, develop and analyze theoretical models to quantify model-data requirements, and describe recent progress using a mechanistic modeling approach utilizing a formal scaling method and data on vegetation structure to improve model predictions. Generally, both limited sampling and coarse resolution averaging lead to model initialization error, which in turn is propagated in subsequent model prediction uncertainty and error. In cases with representative sampling, sufficient resolution, and linear dynamics, errors in initialization tend to compensate at larger spatial scales. However, with inadequate sampling, overly coarse resolution data or models, and nonlinear dynamics, errors in initialization lead to prediction error. A robust model-data framework will require both models and data on vegetation structure sufficient to resolve important environmental gradients and tree-level heterogeneity in forest structure globally.

  5. Influence of wheelchair front caster wheel on reverse directional stability.

    PubMed

    Guo, Songfeng; Cooper, Rory A; Corfman, Tom; Ding, Dan; Grindle, Garrett

    2003-01-01

    The purpose of this research was to study directional stability during reversing of rear-wheel drive, electric powered wheelchairs (EPW) under different initial front caster orientations. Specifically, the weight distribution differences caused by certain initial caster orientations were examined as a possible mechanism for causing directional instability that could lead to accidents. Directional stability was quantified by measuring the drive direction error of the EPW by a motion analysis system. The ground reaction forces were collected to determine the load on the front casters, as well as back-emf data to attain the speed of the motors. The drive direction error was found to be different for various initial caster orientations. Drive direction error was greatest when both casters were oriented 90 degrees to the left or right, and least when both casters were oriented forward. The results show that drive direction error corresponds to the loading difference on the casters. The data indicates that loading differences may cause asymmetric drag on the casters, which in turn causes unbalanced torque load on the motors. This leads to a difference in motor speed and drive direction error.

  6. Improving the Understanding of Psychological Factors Contributing to Horse-Related Accident and Injury: Context, Loss of Focus, Cognitive Errors and Rigidity

    PubMed Central

    DeAraugo, Jodi; McLaren, Suzanne; McManus, Phil; McGreevy, Paul D.

    2016-01-01

    Simple Summary There is a high risk of injury for people involved with horses in their work or recreational pursuits. High risks are particularly evident for racing employees and veterinarians. Elevated risks of injury may be associated with misjudging how to handle situations, reduced attention caused by distractions, taking a general view, and failing to consider other strategies that may reduce risks. To improve safety for humans and horses, it is important to identify safety strategies that are flexible, focused and specific. Abstract While the role of the horse in riding hazards is well recognised, little attention has been paid to the role of specific theoretical psychological processes of humans in contributing to and mitigating risk. The injury, mortality or compensation claim rates for participants in the horse-racing industry, veterinary medicine and equestrian disciplines provide compelling evidence for improving risk mitigation models. There is a paucity of theoretical principles regarding the risk of injury and mortality associated with human–horse interactions. In this paper we introduce and apply the four psychological principles of context, loss of focus, global cognitive style and the application of self as the frame of reference as a potential approach for assessing and managing human–horse risks. When these principles produce errors that are combined with a rigid self-referenced point, it becomes clear how rapidly risk emerges and how other people and animals may repeatedly become at risk over time. Here, with a focus on the thoroughbred racing industry, veterinary practice and equestrian disciplines, we review the merits of contextually applied strategies, an evolving reappraisal of risk, flexibility, and focused specifics of situations that may serve to modify human behaviour and mitigate risk. PMID:26891333

  7. How predictable is the behaviour of torrential processes: two case studies of the summer 2012

    NASA Astrophysics Data System (ADS)

    Huebl, Johannes; Eisl, Julia; Janu, Stefan; Hanspeter, Pussnig

    2013-04-01

    Debris flow hazards play an important role in the Austrian Alps, since many villages are located on alluvial fans. Most of the mitigation measures, as well as hazard zone maps, were designed by engineers of previous generations, who knew a great deal about torrential behaviour from their experience. But speaking in terms of recurrence intervals of 100 years or more, human memory is limited. On the other hand, numerical modelling is a fast-growing field in dealing with natural hazards. Scenarios of torrential hazards can be defined, and the corresponding deposition patterns, flow depths and velocities are calculated. But of course, errors in the input data must lead to fatal errors in the results, consequently threatening human life in potentially affected areas. Careful data collection for exceptional events can therefore help to reproduce reality to a high degree, but unexpected events are still an issue and pose a challenge to engineers. In summer 2012 two debris flow events occurred in Austria with quite different behaviours, from triggering mechanism and flow behaviour through to deposition: thunderstorms or long-lasting rainfall, slope failures with subsequent channel blockage and dike breaching or linear erosion, one or more debris flows, one huge debris flow surge or a series of debris flow surges, sediments without clay or cohesive material, near-channel deposition or outspread deposits. Both debris flows were unexpected in their dimension, although mitigation measures and hazard maps exist. Both events were documented accurately, first to try to understand the torrential processes that occurred, and second to identify the most fitting mitigation measures, ranging from permanent structures to temporary warning systems.

  8. SKA weak lensing - III. Added value of multiwavelength synergies for the mitigation of systematics

    NASA Astrophysics Data System (ADS)

    Camera, Stefano; Harrison, Ian; Bonaldi, Anna; Brown, Michael L.

    2017-02-01

    In this third paper of a series on radio weak lensing for cosmology with the Square Kilometre Array, we scrutinize synergies between cosmic shear measurements in the radio and optical/near-infrared (IR) bands for mitigating systematic effects. We focus on three main classes of systematics: (i) experimental systematic errors in the observed shear; (ii) signal contamination by intrinsic alignments; and (iii) systematic effects due to an incorrect modelling of non-linear scales. First, we show that a comprehensive, multiwavelength analysis provides a self-calibration method for experimental systematic effects, implying only a <50 per cent increment on the errors on cosmological parameters. We also illustrate how the cross-correlation between radio and optical/near-IR surveys alone is able to remove residual systematics with variance as large as 10^-5, i.e. the same order of magnitude as the cosmological signal. This also opens the possibility of using such a cross-correlation as a means to detect unknown experimental systematics. Secondly, we demonstrate that, thanks to polarization information, radio weak lensing surveys will be able to mitigate contamination by intrinsic alignments, in a way similar but fully complementary to available self-calibration methods based on position-shear correlations. Lastly, we illustrate how radio weak lensing experiments, reaching higher redshifts than those accessible to optical surveys, will probe dark energy and the growth of cosmic structures in regimes less contaminated by non-linearities in the matter perturbations. For instance, the higher redshift bins of radio catalogues peak at z ≃ 0.8-1, whereas their optical/near-IR counterparts are limited to z ≲ 0.5-0.7. This translates into having a cosmological signal 2-5 times less contaminated by non-linear perturbations.

  9. Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.

    2017-07-01

    We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
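    A compressed sketch of the register-average-deconvolve pipeline described above, with two deliberate simplifications: a single global phase-correlation shift per frame stands in for block matching, and a fixed Gaussian PSF stands in for the paper's registration-aware PSF model. Everything here is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, gaussian_filter

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) translation that registers img to ref."""
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross_power /= np.abs(cross_power) + 1e-12
    cross = np.fft.ifft2(cross_power)
    dy, dx = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    dims = np.array(cross.shape)
    return ((np.array([dy, dx]) + dims // 2) % dims) - dims // 2

def wiener_deconvolve(img, psf, nsr=1e-2):
    """Frequency-domain Wiener filter with a scalar noise-to-signal ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=img.shape)
    G = np.fft.fft2(img)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

def restore(frames, psf):
    ref = frames[0]
    registered = [nd_shift(f, phase_correlation_shift(ref, f)) for f in frames]
    avg = np.mean(registered, axis=0)       # average after geometric correction
    return wiener_deconvolve(avg, psf)

# Toy usage: jittered, blurred copies of a random "scene".
rng = np.random.default_rng(0)
scene = gaussian_filter(rng.random((64, 64)), 1.0)
frames = [nd_shift(gaussian_filter(scene, 1.5), rng.integers(-3, 4, size=2)) for _ in range(8)]
psf = np.zeros((64, 64)); psf[32, 32] = 1.0
psf = gaussian_filter(psf, 1.5); psf /= psf.sum()
restored = restore(frames, psf)
print(restored.shape)
```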

  10. Radiation effects in reconfigurable FPGAs

    NASA Astrophysics Data System (ADS)

    Quinn, Heather

    2017-04-01

    Field-programmable gate arrays (FPGAs) are co-processing hardware used in image and signal processing. FPGAs are programmed with custom implementations of an algorithm. These algorithms are highly parallel hardware designs that are faster than software implementations. This flexibility and speed have made FPGAs attractive for many space programs that need in situ, high-speed signal processing for data categorization and data compression. Most commercial FPGAs are affected by the space radiation environment, though. Problems with TID (total ionizing dose) have restricted the use of flash-based FPGAs. Static random access memory based FPGAs must be mitigated to suppress errors from single-event upsets. This paper provides a review of radiation effects issues in reconfigurable FPGAs and discusses methods for mitigating these problems. With careful design it is possible to use these components effectively and resiliently.

  11. Crosstalk mitigation using pilot assisted least square algorithm in OFDM-carrying orbital angular momentum multiplexed free-space-optical communication links.

    PubMed

    Sun, Tengfen; Liu, Minwen; Li, Yingchun; Wang, Min

    2017-10-16

    In this paper, we experimentally investigate the performance of crosstalk mitigation for 16-ary quadrature amplitude modulation orthogonal frequency division multiplexing (16QAM-OFDM) signals carrying orbital angular momentum (OAM) multiplexed free-space-optical communication (FSO) links using the pilot-assisted Least Square (LS) algorithm. At the demodulating spatial light modulators (SLMs), we load distorted phase holograms that encode atmospheric turbulence obeying the modified Hill spectrum, and we verify experimentally that these holograms introduce crosstalk. The pilot-assisted LS algorithm efficiently improves system performance: the constellation points move closer to the reference points, and an improvement in bit-error rate (BER) of around two orders of magnitude is obtained.
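    A stripped-down numpy version of pilot-assisted least-squares channel estimation for one OFDM symbol (per-subcarrier H = Y/X at pilot tones, interpolation in between, one-tap equalization); the pilot spacing, QPSK stand-in for 16QAM, and toy channel are assumptions, not the experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sc = 64                                   # subcarriers in one OFDM symbol
pilot_idx = np.arange(0, n_sc, 8)           # every 8th tone carries a known pilot
pilot_val = 1 + 1j                          # known pilot symbol

# Transmit symbols: pilots at pilot tones, random QPSK elsewhere (stand-in for 16QAM).
X = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)).astype(complex)
X[pilot_idx] = pilot_val

# A smooth, frequency-selective channel (stand-in for the turbulence/crosstalk distortion).
k = np.arange(n_sc)
H_true = np.exp(1j * 2 * np.pi * k / n_sc) * (1 + 0.3 * np.cos(2 * np.pi * k / n_sc))
Y = H_true * X + 0.05 * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))

# Least-squares estimate at pilot tones, then interpolate to all subcarriers.
H_pilot = Y[pilot_idx] / X[pilot_idx]
H_est = np.interp(k, pilot_idx, H_pilot.real) + 1j * np.interp(k, pilot_idx, H_pilot.imag)

X_eq = Y / H_est                            # one-tap equalization
print("mean equalization error:", np.round(np.mean(np.abs(X_eq - X)), 3))
```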

  12. Dialysis Facility Safety: Processes and Opportunities.

    PubMed

    Garrick, Renee; Morey, Rishikesh

    2015-01-01

    Unintentional human errors are the source of most safety breaches in complex, high-risk environments. The environment of dialysis care is extremely complex. Dialysis patients have unique and changing physiology, and the processes required for their routine care involve numerous open-ended interfaces between providers and an assortment of technologically advanced equipment. Communication errors, both within the dialysis facility and during care transitions, and lapses in compliance with policies and procedures are frequent areas of safety risk. Some events, such as air emboli and needle dislodgments occur infrequently, but are serious risks. Other adverse events include medication errors, patient falls, catheter and access-related infections, access infiltrations and prolonged bleeding. A robust safety system should evaluate how multiple, sequential errors might align to cause harm. Systems of care can be improved by sharing the results of root cause analyses, and "good catches." Failure mode effects and analyses can be used to proactively identify and mitigate areas of highest risk, and methods drawn from cognitive psychology, simulation training, and human factor engineering can be used to advance facility safety. © 2015 Wiley Periodicals, Inc.

  13. Improving lidar turbulence estimates for wind energy

    NASA Astrophysics Data System (ADS)

    Newman, J. F.; Clifton, A.; Churchfield, M. J.; Klein, P.

    2016-09-01

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
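    A tiny illustration (invented numbers, not the published model) of the quantity being corrected: turbulence intensity is the ratio of the wind-speed standard deviation to the mean, and the correction scales the lidar variance toward the tower value. Here the correction factor is simply the known ratio used to generate the data, standing in for the physics-based plus machine-learning steps.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ten-minute wind speed records (1 Hz) from a "tower" and a "lidar".
u_tower = 8.0 + 1.0 * rng.standard_normal(600)
u_lidar = 8.0 + 1.25 * rng.standard_normal(600)   # lidar sees extra (contaminated) variance

def turbulence_intensity(u):
    return u.std(ddof=1) / u.mean()

ti_tower = turbulence_intensity(u_tower)
ti_lidar = turbulence_intensity(u_lidar)

# Stand-in for the model's correction: rescale the lidar variance by a factor
# obtained elsewhere (here simply the known ratio, for illustration).
variance_correction = (1.0 / 1.25) ** 2
ti_corrected = np.sqrt(variance_correction) * ti_lidar

print(f"tower TI     {ti_tower:.3f}")
print(f"lidar TI     {ti_lidar:.3f}")
print(f"corrected TI {ti_corrected:.3f}")
```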

  14. Improving Lidar Turbulence Estimates for Wind Energy: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer; Clifton, Andrew; Churchfield, Matthew

    2016-10-01

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.

  15. Factors affecting the accuracy of near-infrared spectroscopy concentration calculations for focal changes in oxygenation parameters

    NASA Technical Reports Server (NTRS)

    Strangman, Gary; Franceschini, Maria Angela; Boas, David A.; Sutton, J. P. (Principal Investigator)

    2003-01-01

    Near-infrared spectroscopy (NIRS) can be used to noninvasively measure changes in the concentrations of oxy- and deoxyhemoglobin in tissue. We have previously shown that while global changes can be reliably measured, focal changes can produce erroneous estimates of concentration changes (NeuroImage 13 (2001), 76). Here, we describe four separate sources for systematic error in the calculation of focal hemoglobin changes from NIRS data and use experimental methods and Monte Carlo simulations to examine the importance and mitigation methods of each. The sources of error are: (1). the absolute magnitudes and relative differences in pathlength factors as a function of wavelength, (2). the location and spatial extent of the absorption change with respect to the optical probe, (3). possible differences in the spatial distribution of hemoglobin species, and (4). the potential for simultaneous monitoring of multiple regions of activation. We found wavelength selection and optode placement to be important variables in minimizing such errors, and our findings indicate that appropriate experimental procedures could reduce each of these errors to a small fraction (<10%) of the observed concentration changes.
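    To make error source (1) concrete, the modified Beer-Lambert solve below shows how wavelength-dependent pathlength factors enter the two-wavelength estimate of oxy- and deoxyhemoglobin changes; the extinction coefficients, DPF values, and optical-density changes are placeholders, not recommended values.

```python
import numpy as np

# Two wavelengths (e.g., ~690 nm and ~830 nm). All numbers are placeholders.
# Rows: wavelength; columns: [HbO2, HbR] extinction coefficients (arbitrary units).
E = np.array([[0.30, 1.10],
              [1.00, 0.70]])
dpf = np.array([6.0, 5.5])        # differential pathlength factor per wavelength
d = 3.0                           # source-detector separation (cm)

def mbll(delta_od):
    """Solve delta_OD_lambda = (E @ delta_c) * d * DPF_lambda for delta_c."""
    A = E * (d * dpf)[:, None]    # effective pathlength differs per wavelength
    return np.linalg.solve(A, delta_od)

delta_od = np.array([0.010, 0.018])          # measured optical density changes
d_hbo, d_hbr = mbll(delta_od)
print(f"dHbO2 = {d_hbo:+.4f}, dHbR = {d_hbr:+.4f} (arbitrary concentration units)")
```

    Errors in the assumed DPF values scale the rows of the system differently, which is why focal changes sampled by only part of the path can bias the recovered concentrations.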

  16. Improving Lidar Turbulence Estimates for Wind Energy

    DOE PAGES

    Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.; ...

    2016-10-03

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.

  17. Bayesian network models for error detection in radiotherapy plans

    NASA Astrophysics Data System (ADS)

    Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.

    2015-04-01

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
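    A hand-rolled toy of the basic flagging idea (no external library, invented probabilities): look up the probability of a planned parameter given the clinical context from a small conditional probability table and flag the plan when that probability is low. A real system, as the abstract notes, learns these tables from historical prescription data.

```python
# Toy plan check: P(dose_per_fraction | site, intent). All probabilities invented.
cpt = {
    ("lung", "curative"):   {"2.0 Gy": 0.70, "2.5 Gy": 0.20, "8.0 Gy": 0.10},
    ("lung", "palliative"): {"2.0 Gy": 0.10, "2.5 Gy": 0.20, "8.0 Gy": 0.70},
    ("brain", "curative"):  {"2.0 Gy": 0.85, "2.5 Gy": 0.10, "8.0 Gy": 0.05},
}

def check_plan(site, intent, dose_per_fraction, threshold=0.10):
    p = cpt[(site, intent)].get(dose_per_fraction, 0.0)
    verdict = "FLAG for review" if p < threshold else "ok"
    return p, verdict

for plan in [("lung", "curative", "2.0 Gy"),
             ("lung", "curative", "8.0 Gy"),
             ("brain", "curative", "8.0 Gy")]:
    p, verdict = check_plan(*plan)
    print(f"{plan}: P = {p:.2f} -> {verdict}")
```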

  18. Experimental Validation of the Butyl-Rubber Finite Element (FE) Material Model for the Blast-Mitigating Floor Mat

    DTIC Science & Technology

    2015-08-01

    Finite Element Analysis (FEA) results of each FE material model are compared, and the errors in each material model are discussed on various metrics.

  19. Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1

    DTIC Science & Technology

    2016-03-01

    The experiment comprises an open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double data rate type three and secure digital card memories, an interface to the main satellite bus, and Xilinx's soft error mitigation softcore.

  20. Optimizing Processes to Minimize Risk

    NASA Technical Reports Server (NTRS)

    Loyd, David

    2017-01-01

    NASA, like the other hazardous industries, has suffered very catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When you can't eliminate risk, focus on mitigating the worst consequences and recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.

  1. (PRESENTED AT CMAS) THE NONLINEAR RESPONSE OF NITRATE REPLACEMENT THAT MITIGATES SULFATE REDUCTION: THE GAS RATIO AS AN INDICATOR AND SENSITIVITY TO ERRORS IN TOTAL AMMONIA AND TOTAL NITRATE

    EPA Science Inventory

    The poster presents an assessment, using the CMAQ air quality model, showing the inorganic gas ratio (the ratio of free ammonia to total nitrate) can function as a screening indicator of the winter replacement of sulfate by nitrate when sulfate is reduced. It also presents an as...

  2. Integrating GPS with GLONASS for high-rate seismogeodesy

    NASA Astrophysics Data System (ADS)

    Geng, Jianghui; Jiang, Peng; Liu, Jingnan

    2017-04-01

    High-rate GPS is a valuable seismogeodetic tool for capturing coseismic displacements unambiguously, and it is usually improved by sidereal filtering to mitigate multipath effects, which dominate periods of tens of seconds to minutes. We further introduced GLONASS (Globalnaya navigatsionnaya sputnikovaya sistema) data into high-rate GPS to deliver over 2000 24 h displacements at 99 stations in Europe. We find that the major displacement errors induced by orbits and the atmosphere in the low-frequency band, which are not characterized by sidereal repeatabilities, can be amplified markedly, by up to 40%, after GPS sidereal filtering. In contrast, integration with GLONASS can reduce the noise of high-rate GPS significantly and nearly uniformly over the entire frequency band, especially for the north components, by up to 40%, suggesting that this integration is able to mitigate more errors than only multipath within high-rate GPS. Integrating GPS with GLONASS outperforms GPS sidereal filtering substantially in ameliorating displacement noise, by up to 60%, over a wide frequency band (e.g., 2 s-0.5 days), except for a minor portion between 100 and 1000 s. High-rate multi-GNSS (Global Navigation Satellite System) can be enhanced further by sidereal filtering, which should, however, be carefully implemented to avoid adverse complications in the noise spectrum of displacements.

  3. One carbon cycle: Impacts of model integration, ecosystem process detail, model resolution, and initialization data, on projections of future climate mitigation strategies

    NASA Astrophysics Data System (ADS)

    Fisk, J.; Hurtt, G. C.; le page, Y.; Patel, P. L.; Chini, L. P.; Sahajpal, R.; Dubayah, R.; Thomson, A. M.; Edmonds, J.; Janetos, A. C.

    2013-12-01

    Integrated assessment models (IAMs) simulate the interactions between human and natural systems at a global scale, representing a broad suite of phenomena across the global economy, energy system, land-use, and carbon cycling. Most proposed climate mitigation strategies rely on maintaining or enhancing the terrestrial carbon sink as a substantial contribution to restrain the concentration of greenhouse gases in the atmosphere; however, most IAMs rely on simplified regional representations of terrestrial carbon dynamics. Our research aims to reduce uncertainties associated with forest modeling within integrated assessments, and to quantify the impacts of climate change on forest growth and productivity for integrated assessments of terrestrial carbon management. We developed the new Integrated Ecosystem Demography (iED) to increase terrestrial ecosystem process detail, resolution, and the utilization of remote sensing in integrated assessments. iED brings together state-of-the-art models of human society (GCAM), spatial land-use patterns (GLM) and terrestrial ecosystems (ED) in a fully coupled framework. The major innovative feature of iED is a consistent, process-based representation of ecosystem dynamics and carbon cycle throughout the human, terrestrial, land-use, and atmospheric components. One of the most challenging aspects of ecosystem modeling is to provide accurate initialization of land surface conditions to reflect non-equilibrium conditions, i.e., the actual successional state of the forest. As all plants in ED have an explicit height, it is one of the few ecosystem models that can be initialized directly with vegetation height data. Previous work has demonstrated that ecosystem model resolution and initialization data quality have a large effect on flux predictions at continental scales. Here we use a factorial modeling experiment to quantify the impacts of model integration, process detail, model resolution, and initialization data on projections of future climate mitigation strategies. We find substantial effects on key integrated assessment projections including the magnitude of emissions to mitigate, the economic value of ecosystem carbon storage, future land-use patterns, food prices and energy technology.

  4. Improving efficiency and safety in external beam radiation therapy treatment delivery using a Kaizen approach.

    PubMed

    Kapur, Ajay; Adair, Nilda; O'Brien, Mildred; Naparstek, Nikoleta; Cangelosi, Thomas; Zuvic, Petrina; Joseph, Sherin; Meier, Jason; Bloom, Beatrice; Potters, Louis

    Modern external beam radiation therapy treatment delivery processes potentially increase the number of tasks to be performed by therapists and thus opportunities for errors, yet the need to treat a large number of patients daily requires a balanced allocation of time per treatment slot. The goal of this work was to streamline the underlying workflow in such time-interval constrained processes to enhance both execution efficiency and active safety surveillance using a Kaizen approach. A Kaizen project was initiated by mapping the workflow within each treatment slot for 3 Varian TrueBeam linear accelerators. More than 90 steps were identified, and average execution times for each were measured. The time-consuming steps were stratified into a 2 × 2 matrix arranged by potential workflow improvement versus the level of corrective effort required. A work plan was created to launch initiatives with high potential for workflow improvement but modest effort to implement. Time spent on safety surveillance and average durations of treatment slots were used to assess corresponding workflow improvements. Three initiatives were implemented to mitigate unnecessary therapist motion, overprocessing of data, and wait time for data transfer defects, respectively. A fourth initiative was implemented to make the division of labor by treating therapists as well as peer review more explicit. The average duration of treatment slots reduced by 6.7% in the 9 months following implementation of the initiatives (P = .001). A reduction of 21% in duration of treatment slots was observed on 1 of the machines (P < .001). Time spent on safety reviews remained the same (20% of the allocated interval), but the peer review component increased. The Kaizen approach has the potential to improve operational efficiency and safety with quick turnaround in radiation therapy practice by addressing non-value-adding steps characteristic of individual department workflows. Higher effort opportunities are identified to guide continual downstream quality improvements. Copyright © 2017 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  5. Time skewing and amplitude nonlinearity mitigation by feedback equalization for 56 Gbps VCSEL-based PAM-4 links

    NASA Astrophysics Data System (ADS)

    You, Yue; Zhang, Wenjia; Sun, Lin; Du, Jiangbing; Liang, Chenyu; Yang, Fan; He, Zuyuan

    2018-03-01

    Vertical cavity surface emitting laser (VCSEL)-based multimode optical transceivers enabled by pulse amplitude modulation (PAM)-4 will be commercialized in the near future to meet the 400-Gbps standard for short-reach optical interconnects. It is still challenging to achieve over 56/112 Gbps with multilevel signaling, as the multimode properties of the device and link introduce nonlinear temporal responses for the different levels. In this work, we scrutinize the distortions that relate to the multilevel nature of PAM-4 modulation and propose an effective feedback equalization scheme for a 56-Gbps VCSEL-based PAM-4 optical interconnect system to mitigate the distortions caused by eye timing skew and nonlinear power-dependent noise. Level redistribution at the Tx side is theoretically modeled and constructed to achieve equivalent symbol error ratios (SERs) for the four levels and improved BER performance. The cause of the eye skewing and the mitigation approach are also simulated at 100 Gbps and experimentally investigated at 56 Gbps. The results indicate that a power penalty improvement of more than 2 dB is achieved by using such a distortion-aware equalizer.
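
    A simplified illustration of the level-redistribution idea (a sketch under an assumed Gaussian, level-dependent noise model, not the authors' transmitter design): the four PAM-4 levels are spaced so that each eye opening is proportional to the sum of the noise standard deviations of its two bounding levels, which tends to equalize the per-eye symbol error contributions.

        import numpy as np

        def redistribute_pam4_levels(sigmas, span=(0.0, 3.0)):
            """Place four PAM-4 levels so that the gap between adjacent levels is
            proportional to the sum of their (assumed) noise standard deviations.
            sigmas : per-level noise sigmas, lowest level first; span : amplitude range."""
            sigmas = np.asarray(sigmas, dtype=float)
            gaps = sigmas[:-1] + sigmas[1:]                  # three eye openings
            gaps = gaps / gaps.sum() * (span[1] - span[0])
            return span[0] + np.concatenate(([0.0], np.cumsum(gaps)))

        # Noise that grows with level power, as is typical for VCSEL links
        print(redistribute_pam4_levels([0.5, 0.7, 0.9, 1.1]))
        # -> unequally spaced levels, with wider eyes where the noise is larger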

  6. Turbulence mitigation scheme based on spatial diversity in orbital-angular-momentum multiplexed system

    NASA Astrophysics Data System (ADS)

    Zou, Li; Wang, Le; Zhao, Shengmei

    2017-10-01

    Atmospheric turbulence (AT) induced crosstalk can significantly impair the performance of a free-space optical (FSO) communication link using orbital angular momentum (OAM) multiplexing. In this paper, we propose a spatial diversity (SD) turbulence mitigation scheme for an OAM-multiplexed FSO communication link. First, we present an SD mitigation model for the OAM-multiplexed FSO communication link under AT. Then we present an SD combining technique based on equal gain to enhance the AT tolerance of the OAM-multiplexed FSO communication link. The numerical results show that the performance of the OAM-multiplexed communication link is greatly improved by the proposed scheme. When the turbulence strength Cn2 is 5 × 10-15 m-2/3, the transmission distance is 1000 m, and the channel signal-to-noise ratio (SNR) is 20 dB, the bit-error-rate (BER) performance of the four spatially multiplexed OAM modes lm = +1, +2, +3, +4 improves roughly threefold in comparison with the results without the proposed scheme. The proposed scheme is a promising direction for compensating the interference caused by AT in OAM-multiplexed FSO communication links.
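
    A toy sketch of the equal-gain combining step referenced above (hypothetical variable names; the OAM demultiplexing and channel-estimation details of the paper are omitted): each receive branch is co-phased using its channel-phase estimate and summed with unit gain before detection.

        import numpy as np

        def equal_gain_combine(received, channel):
            """Equal-gain combining of per-branch symbol observations.
            received, channel : complex arrays of shape (n_branches, n_symbols);
            each branch is rotated by the conjugate phase of its channel estimate
            and summed with unit-magnitude weight."""
            received = np.asarray(received, dtype=complex)
            channel = np.asarray(channel, dtype=complex)
            weights = np.exp(-1j * np.angle(channel))
            return (weights * received).mean(axis=0)

        # Usage: BPSK symbols seen by two spatially separated apertures
        symbols = np.array([1, -1, 1, 1, -1], dtype=complex)
        h = np.exp(1j * np.random.uniform(0, 2 * np.pi, size=(2, 1)))   # branch phases
        rx = h * symbols + 0.1 * (np.random.randn(2, 5) + 1j * np.random.randn(2, 5))
        detected = np.sign(equal_gain_combine(rx, h).real)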

  7. Mitigation of laser damage on National Ignition Facility optics in volume production

    NASA Astrophysics Data System (ADS)

    Folta, James; Nostrand, Mike; Honig, John; Wong, Nan; Ravizza, Frank; Geraghty, Paul; Taranowski, Mike; Johnson, Gary; Larkin, Glenn; Ravizza, Doug; Peterson, John; Welday, Brian; Wegner, Paul

    2013-12-01

    The National Ignition Facility has recently achieved the milestone of delivering over 1.8 MJ and 500 TW of 351 nm laser energy and power on target, which required average fluences up to 9 J/cm2 (3 ns equivalent) in the final optics system. Commercial fused silica laser-grade UV optics typically have a maximum operating threshold of 5 J/cm2. We have developed an optics recycling process which enables NIF to operate above the laser damage initiation and growth thresholds. We previously reported a method to mitigate laser damage with laser ablation of the damage site to leave benign cone shaped pits. We have since developed a production facility with four mitigation systems capable of performing the mitigation protocols on full-sized (430 mm) optics in volume production. We have successfully repaired over 700 NIF optics (unique serial numbers), some of which have been recycled as many as 11 times. We describe the mitigation systems, the optics recycle loop process, and optics recycle production data.

  8. Dwell time method based on Richardson-Lucy algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Ma, Zhen

    2017-10-01

    When the noise in the surface error data given by the interferometer has no effect on the iterative convergence of the RL algorithm, the RL algorithm for deconvolution in image restoration can be applied to the CCOS model to solve for the dwell time. Extending the initial error function at the edges and denoising the surface error data given by the interferometer make the result more reliable. The simulation results show a final residual error of 10.7912 nm in PV and 0.4305 nm in RMS, when the initial surface error is 107.2414 nm in PV and 15.1331 nm in RMS. The convergence rates of the PV and RMS values reach 89.9% and 96.0%, respectively. The algorithm satisfies the requirements of fabrication very well.
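
    A minimal sketch of a Richardson-Lucy update applied to a 1-D dwell-time problem (the variable names, the 1-D simplification, and the initialization are assumptions; the paper's edge extension and denoising steps are not reproduced). The desired removal map plays the role of the image and the tool influence function (TIF) plays the role of the point-spread function.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy_dwell(removal, tif, n_iter=50, eps=1e-12):
            """Estimate a dwell-time profile t such that removal ~ conv(t, tif),
            using multiplicative Richardson-Lucy updates."""
            removal = np.asarray(removal, dtype=float)
            tif = np.asarray(tif, dtype=float)
            tif_mirror = tif[::-1]                          # correlation kernel
            dwell = np.full_like(removal, removal.mean() / max(tif.sum(), eps))
            for _ in range(n_iter):
                predicted = fftconvolve(dwell, tif, mode="same")
                ratio = removal / np.maximum(predicted, eps)
                dwell *= fftconvolve(ratio, tif_mirror, mode="same")
            return dwell

        # Usage: Gaussian TIF and a smooth target removal profile
        x = np.linspace(-1.0, 1.0, 401)
        tif = np.exp(-0.5 * (x / 0.05) ** 2)
        removal = 1.0 + 0.3 * np.cos(np.pi * x)
        dwell = richardson_lucy_dwell(removal, tif)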

  9. Sloan Digital Sky Survey III photometric quasar clustering: probing the initial conditions of the Universe

    NASA Astrophysics Data System (ADS)

    Ho, Shirley; Agarwal, Nishant; Myers, Adam D.; Lyons, Richard; Disbrow, Ashley; Seo, Hee-Jong; Ross, Ashley; Hirata, Christopher; Padmanabhan, Nikhil; O'Connell, Ross; Huff, Eric; Schlegel, David; Slosar, Anže; Weinberg, David; Strauss, Michael; Ross, Nicholas P.; Schneider, Donald P.; Bahcall, Neta; Brinkmann, J.; Palanque-Delabrouille, Nathalie; Yèche, Christophe

    2015-05-01

    The Sloan Digital Sky Survey has surveyed 14,555 square degrees of the sky and delivered over a trillion pixels of imaging data. We present the large-scale clustering of 1.6 million quasars between z = 0.5 and z = 2.5 that have been classified from this imaging, representing the highest density of quasars ever studied for clustering measurements. This data set spans ~11,000 square degrees and probes a volume of 80 h-3 Gpc3. In principle, such a large volume and medium density of tracers should facilitate high-precision cosmological constraints. We measure the angular clustering of photometrically classified quasars using an optimal quadratic estimator in four redshift slices, with an accuracy of ~25% over a bin width of δl ~ 10-15, on scales corresponding to matter-radiation equality and larger (l ~ 2-30). Observational systematics can strongly bias clustering measurements on large scales, which can mimic cosmologically relevant signals such as deviations from Gaussianity in the spectrum of primordial perturbations. We account for systematics by applying a new method recently proposed by Agarwal et al. (2014) to the clustering of photometrically classified quasars. We carefully apply our methodology to mitigate known observational systematics and further remove angular bins that are contaminated by unknown systematics. Combining quasar data with the photometric luminous red galaxy (LRG) sample of Ross et al. (2011) and Ho et al. (2012), and marginalizing over all bias and shot-noise-like parameters, we obtain a constraint on local primordial non-Gaussianity of fNL = -113 ± 154 (1σ error). We next assume that the bias of quasar and galaxy distributions can be obtained independently from quasar/galaxy-CMB lensing cross-correlation measurements (such as those in Sherwin et al. (2013)). This can be facilitated by spectroscopic observations of the sources, enabling the redshift distribution to be completely determined and allowing precise estimates of the bias parameters. In this paper, if the bias and shot noise parameters are fixed to their known values (which we model by fixing them to their best-fit Gaussian values), we find that the error bar reduces to 1σ ≃ 65. We expect this error bar to reduce further by at least another factor of five if the data are free of any observational systematics. We therefore emphasize that in order to make the best use of large-scale structure data, we need accurate modeling of known systematics, a method to mitigate unknown systematics, and additionally independent theoretical models or observations to probe the bias of dark matter halos.

  10. Tropospheric Correction for InSAR Using Interpolated ECMWF Data and GPS Zenith Total Delay

    NASA Technical Reports Server (NTRS)

    Webb, Frank H.; Fishbein, Evan F.; Moore, Angelyn W.; Owen, Susan E.; Fielding, Eric J.; Granger, Stephanie L.; Bjorndahl, Fredrik; Lofgren Johan

    2011-01-01

    To mitigate atmospheric errors caused by the troposphere, which is a limiting error source for spaceborne interferometric synthetic aperture radar (InSAR) imaging, a tropospheric correction method has been developed using data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Global Positioning System (GPS). The ECMWF data were interpolated using a Stretched Boundary Layer Model (SBLM), and ground-based GPS estimates of the tropospheric delay from the Southern California Integrated GPS Network were interpolated using modified Gaussian and inverse distance weighted interpolations. The resulting Zenith Total Delay (ZTD) correction maps have been evaluated, both separately and using a combination of the two data sets, for three short-interval InSAR pairs from Envisat during 2006 over an area stretching northeast from the Los Angeles basin towards Death Valley. Results show that the root mean square (rms) in the InSAR images was greatly reduced, corresponding to a significant reduction in atmospheric noise of up to 32 percent. However, for some of the images, the rms increased and large errors remained after applying the tropospheric correction. The residuals showed a constant gradient over the area, suggesting that a remaining orbit error from Envisat was present. The orbit reprocessing in ROI_pac and the plane fitting both require that the only remaining error in the InSAR image be the orbit error. If this is not fulfilled, the correction can still be made, but it will then treat all remaining errors as if they were orbit errors. By correcting for tropospheric noise, the biggest error source is removed, and the orbit error becomes apparent and can be corrected for.
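
    A small sketch of inverse-distance-weighted interpolation of station zenith delays onto a correction grid (illustrative; the study's modified Gaussian interpolation and SBLM processing are not reproduced here).

        import numpy as np

        def idw_interpolate(station_xy, station_ztd, grid_xy, power=2.0, eps=1e-9):
            """Inverse-distance-weighted interpolation of zenith total delay (ZTD).
            station_xy : (n_sta, 2) station coordinates (e.g., km east/north).
            station_ztd: (n_sta,) ZTD values at the stations (m).
            grid_xy    : (n_pts, 2) points of the correction map."""
            d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
            w = 1.0 / (d ** power + eps)          # nearby stations dominate
            return (w * station_ztd).sum(axis=1) / w.sum(axis=1)

        # Usage: three stations interpolated onto two map points
        stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
        ztd = np.array([2.35, 2.41, 2.38])                     # metres
        grid = np.array([[5.0, 5.0], [2.0, 1.0]])
        print(idw_interpolate(stations, ztd, grid))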

  11. Final Environmental Assessment for the Military Housing Privatization Initiative (MHPI), Moody Air Force Base, Georgia

    DTIC Science & Technology

    2014-03-01

    the number of appropriate private sector housing units available to military families within 20 miles, or a 60-minute commute during peak driving...likely be accomplished by purchasing wetland mitigation credits at a USACE-approved mitigation bank in the service area where Moody AFB is located...authorized the Department of Defense (DoD) to engage private sector businesses through a process of housing privatization, wherein private sector housing

  12. Under-Body Blast Mitigation: Stand-Alone Seat Safety Activation System

    DTIC Science & Technology

    2014-04-01

    Restraints, Airbags or other protection systems) to mitigate injury to the occupant during the onset of the event. Injuries may occur as soon as 2.5 ms...cost-effective sensor to accurately and consistently deploy airbags and other pyrotechnic restraint systems based on accelerometers and other...Circuitry are critically important in order to activate the initiators of air bags and pyrotechnic restraint system such as airbags at the very first

  13. Turbine Aeration Design Software for Mitigating Adverse Environmental Impacts Resulting From Conventional Hydropower Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulliver, John S.

    2015-03-01

    This project developed a conventional hydropower turbine aeration test-bed for computational routines and software tools aimed at improving environmental mitigation technologies for conventional hydropower systems. In achieving this goal, we partnered with Alstom, a global leader in energy technology development and United States power generation, with additional funding from the Initiative for Renewable Energy and the Environment (IREE) and the College of Science and Engineering (CSE) at the UMN.

  14. Combining Traditional Cyber Security Audit Data with Psychosocial Data: Towards Predictive Modeling for Insider Threat Mitigation

    NASA Astrophysics Data System (ADS)

    Greitzer, Frank L.; Frincke, Deborah A.

    The purpose of this chapter is to motivate the combination of traditional cyber security audit data with psychosocial data, to support a move from an insider threat detection stance to one that enables prediction of potential insider presence. Two distinctive aspects of the approach are the objective of predicting or anticipating potential risks and the use of organizational data in addition to cyber data to support the analysis. The chapter describes the challenges of this endeavor and reports on progress in defining a usable set of predictive indicators, developing a framework for integrating the analysis of organizational and cyber security data to yield predictions about possible insider exploits, and developing the knowledge base and reasoning capability of the system. We also outline the types of errors that one expects in a predictive system versus a detection system and discuss how those errors can affect the usefulness of the results.

  15. An Approach to Addressing Selection Bias in Survival Analysis

    PubMed Central

    Carlin, Caroline S.; Solid, Craig A.

    2014-01-01

    This work proposes a frailty model that accounts for non-random treatment assignment in survival analysis. Using Monte Carlo simulation, we found that estimated treatment parameters from our proposed endogenous selection survival model (esSurv) closely parallel the consistent two-stage residual inclusion (2SRI) results, while offering computational and interpretive advantages. The esSurv method greatly enhances computational speed relative to 2SRI by eliminating the need for bootstrapped standard errors, and generally results in smaller standard errors than those estimated by 2SRI. In addition, esSurv explicitly estimates the correlation of unobservable factors contributing to both treatment assignment and the outcome of interest, providing an interpretive advantage over the residual parameter estimate in the 2SRI method. Comparisons with commonly used propensity score methods and with a model that does not account for non-random treatment assignment show clear bias in these methods that is not mitigated by increased sample size. We illustrate using actual dialysis patient data comparing mortality of patients with mature arteriovenous grafts for venous access to mortality of patients with grafts placed but not yet ready for use at the initiation of dialysis. We find strong evidence of endogeneity (with estimate of correlation in unobserved factors ρ̂ = 0.55), and estimate a mature-graft hazard ratio of 0.197 in our proposed method, with a similar 0.173 hazard ratio using 2SRI. The 0.630 hazard ratio from a frailty model without a correction for the non-random nature of treatment assignment illustrates the importance of accounting for endogeneity. PMID:24845211

  16. A new method for determining the optimal lagged ensemble

    PubMed Central

    DelSole, T.; Tippett, M. K.; Pegion, K.

    2017-01-01

    We propose a general methodology for determining the lagged ensemble that minimizes the mean square forecast error. The MSE of a lagged ensemble is shown to depend only on a quantity called the cross-lead error covariance matrix, which can be estimated from a short hindcast data set and parameterized in terms of analytic functions of time. The resulting parameterization allows the skill of forecasts to be evaluated for an arbitrary ensemble size and initialization frequency. Remarkably, the parameterization can also estimate the MSE of a burst ensemble simply by taking the limit of an infinitely small interval between initialization times. This methodology is applied to forecasts of the Madden-Julian Oscillation (MJO) from version 2 of the Climate Forecast System (CFSv2). For leads greater than a week, little improvement is found in MJO forecast skill when ensembles larger than 5 days are used or when initialization occurs more than 4 times per day. We find that if initialization is too infrequent, important structures of the lagged error covariance matrix are lost. Lastly, we demonstrate that the forecast error at leads ≥10 days can be reduced by optimally weighting the lagged ensemble members. The weights are shown to depend only on the cross-lead error covariance matrix. While the methodology developed here is applied to CFSv2, the technique can be easily adapted to other forecast systems. PMID:28580050
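
    A compact sketch of the two quantities highlighted above (hypothetical variable names; it assumes the cross-lead error covariance matrix of the lagged members has already been estimated): the MSE of an equally weighted lagged mean, and the weights that minimize the MSE subject to summing to one.

        import numpy as np

        def equal_weight_mse(C):
            """MSE of the equally weighted lagged-ensemble mean, given the
            cross-lead error covariance matrix C (n_members x n_members)."""
            n = C.shape[0]
            w = np.full(n, 1.0 / n)
            return w @ C @ w

        def optimal_lagged_weights(C):
            """Weights minimizing w^T C w subject to sum(w) = 1:
            w = C^{-1} 1 / (1^T C^{-1} 1)."""
            ones = np.ones(C.shape[0])
            x = np.linalg.solve(C, ones)
            return x / (ones @ x)

        # Hypothetical 3-member cross-lead covariance (older members noisier)
        C = np.array([[1.0, 0.6, 0.4],
                      [0.6, 1.3, 0.7],
                      [0.4, 0.7, 1.8]])
        w = optimal_lagged_weights(C)
        print(equal_weight_mse(C), w @ C @ w)   # optimal weighting never does worse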

  17. Effects of Heterogeneity and Uncertainties in Sources and Initial and Boundary Conditions on Spatiotemporal Variations of Groundwater Levels

    NASA Astrophysics Data System (ADS)

    Zhang, Y. K.; Liang, X.

    2014-12-01

    Effects of aquifer heterogeneity and uncertainties in source/sink and in initial and boundary conditions in a groundwater flow model on the spatiotemporal variations of groundwater level, h(x,t), were investigated. Analytical solutions for the variance and covariance of h(x,t) in an unconfined aquifer described by a linearized Boussinesq equation with a white noise source/sink and a random transmissivity field were derived. It was found that in a typical aquifer the error in h(x,t) at early time is mainly caused by the random initial condition, and that this error decreases with time, approaching a constant level at later times. The period during which the effect of the random initial condition is significant may last a few hundred days in most aquifers. The constant error in groundwater level at later times is due to the combined effects of the uncertain source/sink and flux boundary: the closer to the flux boundary, the larger the error. The error caused by the uncertain head boundary is limited to a narrow zone near the boundary but remains more or less constant over time. The effect of heterogeneity is to increase the variation of the groundwater level, and the maximum effect occurs close to the constant head boundary because of the linear mean hydraulic gradient. The correlation of groundwater level decreases with temporal interval and spatial distance. In addition, heterogeneity enhances the correlation of groundwater level, especially at larger time intervals and small spatial distances.

  18. The Second Victim Phenomenon After a Clinical Error: The Design and Evaluation of a Website to Reduce Caregivers' Emotional Responses After a Clinical Error.

    PubMed

    Mira, José Joaquín; Carrillo, Irene; Guilabert, Mercedes; Lorenzo, Susana; Pérez-Pérez, Pastora; Silvestre, Carmen; Ferrús, Lena

    2017-06-08

    Adverse events (incidents that harm a patient) can also produce emotional hardship for the professionals involved (second victims). Although a few international pioneering programs exist that aim to facilitate the recovery of the second victim, there are no known initiatives that aim to raise awareness in the professional community about this issue and prevent the situation from worsening. The aim of this study was to design and evaluate an online program directed at frontline hospital and primary care health professionals that raises awareness and provides information about the second victim phenomenon. The design of the Mitigating Impact in Second Victims (MISE) online program was based on a literature review, and its contents were selected by a group of 15 experts on patient safety with experience in both clinical and academic settings. The website hosting MISE was subjected to an accreditation process by an external quality agency that specializes in evaluating health websites. The MISE structure and content were evaluated by 26 patient safety managers at hospitals and within primary care in addition to 266 frontline health care professionals who followed the program, taking into account its comprehension, usefulness of the information, and general adequacy. Finally, the amount of knowledge gained from the program was assessed with three objective measures (pre- and posttest design). The website earned Advanced Accreditation for health websites after fulfilling required standards. The comprehension and practical value of the MISE content were positively assessed by 88% (23/26) and 92% (24/26) of patient safety managers, respectively. MISE was positively evaluated by health care professionals, who awarded it 8.8 points out of a maximum 10. Users who finished MISE improved their knowledge on patient safety terminology, prevalence and impact of adverse events and clinical errors, second victim support models, and recommended actions following a severe adverse event (P<.001). The MISE program differs from existing intervention initiatives by its preventive nature in relation to the second victim phenomenon. Its online nature makes it an easily accessible tool for the professional community. The program has been shown to increase users' knowledge of this issue and helps them correct their approach. Furthermore, it is one of the first initiatives to attempt to bring the second victim phenomenon closer to primary care. ©José Joaquín Mira, Irene Carrillo, Mercedes Guilabert, Susana Lorenzo, Pastora Pérez-Pérez, Carmen Silvestre, Lena Ferrús, Spanish Second Victim Research Team. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 08.06.2017.

  19. Dealing with the white death: avalanche risk management for traffic routes.

    PubMed

    Rheinberger, Christoph M; Bründl, Michael; Rhyner, Jakob

    2009-01-01

    This article discusses mitigation strategies to protect traffic routes from snow avalanches. Up to now, mitigation of snow avalanches on many roads and railways in the Alps has relied on avalanche sheds, which require large initial investments resulting in high opportunity costs. Therefore, avalanche risk managers have increasingly adopted organizational mitigation measures such as warning systems and closure policies instead. The effectiveness of these measures is, however, greatly dependent on human decisions. In this article, we present a method for optimizing avalanche mitigation for traffic routes in terms of both their risk reduction impact and their net benefit to society. First, we introduce a generic framework for assessing avalanche risk and for quantifying the impact of mitigation. This allows for sound cost-benefit comparisons between alternative mitigation strategies. Second, we illustrate the framework with a case study from Switzerland. Our findings suggest that site-specific characteristics of avalanche paths, as well as the economic importance of a traffic route, are decisive for the choice of optimal mitigation strategies. On routes endangered by few avalanche paths with frequent avalanche occurrences, structural measures are most efficient, whereas reliance on organizational mitigation is often the most appropriate strategy on routes endangered by many paths with infrequent or fuzzy avalanche risk. Finally, keeping a traffic route open may be very important for tourism or the transport industry. Hence, local economic value may promote the use of a hybrid strategy that combines organizational and structural measures to optimize the resource allocation of avalanche risk mitigation.

  20. HIV-1 subtype C unproductively infects human cardiomyocytes in vitro and induces apoptosis mitigated by an anti-Gp120 aptamer.

    PubMed

    Lopes de Campos, Walter R; Chirwa, Nthato; London, Grace; Rotherham, Lia S; Morris, Lynn; Mayosi, Bongani M; Khati, Makobetsa

    2014-01-01

    HIV-associated cardiomyopathy (HIVCM) is of clinical concern in developing countries because of a high HIV-1 prevalence, especially subtype C, and limited access to highly active antiretroviral therapy (HAART). For these reasons, we investigated the direct and indirect effects of HIV-1 subtype C infection of cultured human cardiomyocytes and the mechanisms leading to cardiomyocyte damage, as well as a way to mitigate the damage. We evaluated a novel approach to mitigate HIVCM using a previously reported gp120 binding and HIV-1 neutralizing aptamer called UCLA1. We established a cell-based model of HIVCM by infecting human cardiomyocytes with cell-free HIV-1 or co-culturing human cardiomyocytes with HIV-infected monocyte derived macrophages (MDM). We discovered that HIV-1 subtype C unproductively (i.e. its life cycle is arrested after reverse transcription) infects cardiomyocytes. Furthermore, we found that HIV-1 initiates apoptosis of cardiomyocytes through caspase-9 activation, preferentially via the intrinsic or mitochondrial initiated pathway. CXCR4 receptor-using viruses were stronger inducers of apoptosis than CCR5 utilizing variants. Importantly, we discovered that HIV-1 induced apoptosis of cardiomyocytes was mitigated by UCLA1. However, UCLA1 had no protective effect on cardiomyocytes when apoptosis was triggered by HIV-infected MDM. When HIV-1 was treated with UCLA1 prior to infection of MDM, it failed to induce apoptosis of cardiomyocytes. These data suggest that HIV-1 causes a mitochondrial initiated apoptotic cascade, which signals through caspase-9, whereas HIV-1 infected MDM causes apoptosis predominantly via the death-receptor pathway, mediated by caspase-8. Furthermore, the data suggest that UCLA1 protects cardiomyocytes from caspase-mediated apoptosis, directly by binding to HIV-1 and indirectly by preventing infection of MDM.

  1. The impact on air quality of energy saving measures in the major cities signatories of the Covenant of Mayors initiative.

    PubMed

    Monforti-Ferrario, Fabio; Kona, Albana; Peduzzi, Emanuela; Pernigotti, Denise; Pisoni, Enrico

    2018-06-08

    This study is a first attempt to evaluate how the major efforts made by several European cities in the frame of the Covenant of Mayors (CoM) initiative can impact air pollution levels in the participating cities. CoM is by now one of the major city-level initiatives aimed at mitigating climate change, supporting local authorities in the implementation of their climate action plans. Energy saving measures reported in the CoM cities' action plans have been analysed from the air quality perspective in order to find quantitative relations between the way local authorities deal with mitigation and the consequences these practices are expected to have on air quality at the urban level, ultimately benefiting citizens' health. In the paper, 2713 energy saving measures proposed by 146 cities located in 23 countries in the frame of the CoM are selected, and their co-benefits for air quality and public health are estimated by means of SHERPA, a fast modelling tool that mimics the behaviour of a full physically-based Chemical Transport Model. Besides evaluating the overall benefits of this subset of mitigation measures for air quality, the study also investigates the relevance of factors such as the implementation sector, the city size, and the pollution levels in achieving the highest possible co-benefits. The results presented refer to the specific field covered by the study, i.e. energy saving measures, and are not automatically transferable to other types of measures. Nevertheless, they clearly show how climate mitigation and air quality policies are deeply interconnected at the urban level. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Hungry Horse Dam Fisheries Mitigation, 1992-1993 Progress Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DosSantos, Joe; Vashro, Jim; Lockard, Larry

    1994-06-01

    In February of 1990, over forty agency representatives and interested citizens began development of the 1991 Mitigation Plan. This effort culminated in the 1993 Implementation Plan for mitigation of fish losses attributable to the construction and operation of Hungry Horse Dam. The primary purpose of this biennial report is to inform the public of the status of ongoing mitigation activities resulting from those planning efforts. A habitat improvement project is underway to benefit bull trout in Big Creek in the North Fork drainage of the Flathead River, and work is planned in Hay Creek, another North Fork tributary. Bull trout redd counts have been expanded, and experimental programs involving genetic evaluation, outmigrant monitoring, and hatchery studies have been initiated. Cutthroat mitigation efforts have focused on habitat improvements in Elliott Creek and Taylor's Outflow, and improvements have been followed by imprint plants of hatchery fish and/or eyed eggs in those streams. Rogers Lake west of Kalispell and Lion Lake, near Hungry Horse, were chemically rehabilitated. Cool and warm water fish habitat has been improved in Halfmoon Lake and Echo Lake. Public education and public interest are important to the future success of mitigation activities. As part of the mitigation team's public awareness responsibility, we have worked with numerous volunteer groups, public agencies, and private landowners to stimulate interest and awareness of mitigation activities and the aquatic ecosystem. The purpose of this biennial report is to foster public awareness of, and support for, mitigation activities as we move forward in implementing the Hungry Horse Dam Fisheries Mitigation Implementation Plan.

  3. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
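
    A tiny numerical illustration of the variance decomposition discussed above (synthetic numbers, not the paper's data): with the marginal variances held fixed, the variance of the deviation (estimated minus true error) widens as the correlation between the two falls.

        import numpy as np

        def deviation_variance(var_true, var_est, corr):
            """Var(est - true) = Var(est) + Var(true) - 2*corr*sqrt(Var(est)*Var(true))."""
            return var_est + var_true - 2.0 * corr * np.sqrt(var_est * var_true)

        for corr in (0.8, 0.4, 0.0):
            print(corr, deviation_variance(var_true=0.002, var_est=0.003, corr=corr))
        # Lower correlation -> wider deviation distribution -> poorer precision of the
        # error estimate, even though the marginal variances are unchanged.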

  4. A hybrid method for synthetic aperture ladar phase-error compensation

    NASA Astrophysics Data System (ADS)

    Hua, Zhili; Li, Hongping; Gu, Yongjian

    2009-07-01

    As data from a high-resolution imaging sensor, synthetic aperture ladar (SAL) data contain phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank-one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and from them a hybrid method is built that can recover both the images and the PSFs without any a priori information on the PSF, speeding up the convergence rate through the choice of initialization. When integrated into a spotlight-mode SAL imaging model, all three methods can effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root mean square error, and CPU time are computed, from which we can see that the convergence rate of the hybrid method is improved because of a more efficient initialization of the blind deconvolution. Moreover, from a further discussion of the hybrid method, the weight distribution of ROPE and IBD is found to be an important factor that affects the final result of the whole compensation process.

  5. Time-symmetric integration in astrophysics

    NASA Astrophysics Data System (ADS)

    Hernandez, David M.; Bertschinger, Edmund

    2018-04-01

    Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus they have difficulty in accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives secular energy error increase for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are properties that are distinct for an integrator.
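
    A short sketch of the kind of numerical experiment described above (an assumed simple pendulum with unit constants, not the authors' code): the time-reversible but non-symplectic trapezoidal rule is applied with a fixed step, solved by fixed-point iteration, and the energy drift is monitored.

        import numpy as np

        def f(y):
            q, p = y
            return np.array([p, -np.sin(q)])           # simple pendulum

        def trapezoidal_step(y, h, n_fixed_point=8):
            """One step of the implicit trapezoidal rule for y' = f(y):
            y_new = y + h/2 * (f(y) + f(y_new)), solved by fixed-point iteration."""
            y_new = y + h * f(y)                        # explicit Euler predictor
            for _ in range(n_fixed_point):
                y_new = y + 0.5 * h * (f(y) + f(y_new))
            return y_new

        def energy(y):
            q, p = y
            return 0.5 * p ** 2 - np.cos(q)

        y = np.array([2.0, 0.0])                        # large-amplitude pendulum
        h, steps = 0.1, 20000
        e0 = energy(y)
        for _ in range(steps):
            y = trapezoidal_step(y, h)
        print("relative energy error:", (energy(y) - e0) / abs(e0))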

  6. Error Sources in Proccessing LIDAR Based Bridge Inspection

    NASA Astrophysics Data System (ADS)

    Bian, H.; Chen, S. E.; Liu, W.

    2017-09-01

    Bridge inspection is a critical task in infrastructure management and is facing unprecedented challenges after a series of bridge failures. The prevailing visual inspection is insufficient for providing reliable and quantitative bridge information, although a systematic quality management framework was built to ensure visual bridge inspection data quality and to minimize errors during the inspection process. LiDAR-based remote sensing is recommended as an effective tool for overcoming some of the disadvantages of visual inspection. In order to evaluate the potential of applying this technology to bridge inspection, some of the error sources in LiDAR-based bridge inspection are analysed. Scanning angle variance during field data collection and differences in algorithm design during scan data processing are found to introduce errors into inspection results. Beyond studying the error sources, further consideration should be given to improving inspection data quality, and statistical analysis might be employed in the future to evaluate an inspection operation process that contains a series of uncertain factors. Overall, the development of a reliable bridge inspection system requires not only the improvement of data processing algorithms but also systematic considerations to mitigate possible errors in the entire inspection workflow. If LiDAR or some other technology is to be accepted as a supplement to visual inspection, the current quality management framework will need to be modified or redesigned, and this is as urgent as refining the inspection techniques themselves.

  7. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning

    PubMed Central

    Deng, Zhongliang

    2018-01-01

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization. PMID:29361718

  8. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning.

    PubMed

    Deng, Zhongliang; Fu, Xiao; Wang, Hanhua

    2018-01-20

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.
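
    A schematic sketch of the kind of compensation the abstract describes (the log-distance path-loss model, the extra body-loss term, and the IMU-derived shadowing flag are all assumptions for illustration; this is not the BP-BEC algorithm itself): when the inertial data indicate that the user's body lies between tag and beacon, an assumed attenuation offset is added back to the RSS before ranging.

        import math

        def rss_to_distance(rss_dbm, rss_at_1m=-59.0, path_loss_exp=2.0,
                            body_shadowed=False, body_loss_db=8.0):
            """Log-distance path-loss ranging with a simple body-shadowing correction.
            rss_at_1m, path_loss_exp and body_loss_db are assumed calibration values;
            body_shadowed would come from an IMU-aided detector."""
            if body_shadowed:
                rss_dbm += body_loss_db        # undo the body attenuation before ranging
            return 10.0 ** ((rss_at_1m - rss_dbm) / (10.0 * path_loss_exp))

        print(rss_to_distance(-75.0))                        # unobstructed beacon
        print(rss_to_distance(-83.0, body_shadowed=True))    # same geometry, body in path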

  9. Symbolic Analysis of Concurrent Programs with Polymorphism

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam

    2010-01-01

    The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.

  10. Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, E. M. C.; Reu, P. L.

    “Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. We present that there are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. In conclusion, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.

  11. Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves

    DOE PAGES

    Jones, E. M. C.; Reu, P. L.

    2017-11-28

    “Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. We present that there are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. In conclusion, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.

  12. Governance and networks for health co-benefits of climate change mitigation: Lessons from two Indian cities.

    PubMed

    Puppim de Oliveira, Jose A; Doll, Christopher N H

    2016-12-01

    Health has been the main driver for many urban environmental interventions, particularly in cases of significant health problems linked to poor urban environmental conditions. This paper examines empirically the links between climate change mitigation and health in urban areas, when health is the main driver for improvements. The paper aims to understand how systems of urban governance can enable or prevent the creation of health outcomes via continuous improvements in the environmental conditions in a city. The research draws on cases from two Indian cities where initiatives were undertaken in different sectors: Surat (waste) and Delhi (transportation). Using the literature on network effectiveness as an analytical framework, the paper compares the cases to identify possible ways to strengthen the governance and policy-making process in the urban system, so that each intervention can intentionally realize multiple impacts for both local health and climate change mitigation in the long term, as well as the factors that may threaten long-term progress and cause a reversion to the previous situation after initial achievements. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Constellations of Next Generation Gravity Missions: Simulations regarding optimal orbits and mitigation of aliasing errors

    NASA Astrophysics Data System (ADS)

    Hauk, M.; Pail, R.; Gruber, T.; Purkhauser, A.

    2017-12-01

    The CHAMP and GRACE missions have demonstrated the tremendous potential for observing mass changes in the Earth system from space. In order to fulfil future user needs, monitoring of mass distribution and mass transport with higher spatial and temporal resolution is required. This can be achieved by a Bender-type Next Generation Gravity Mission (NGGM) consisting of a constellation of satellite pairs flying in (near-)polar and inclined orbits, respectively. For these satellite pairs, the observation concept of the GRACE Follow-on mission, with a laser-based low-low satellite-to-satellite tracking (ll-SST) system, more precise accelerometers, and state-of-the-art star trackers, is adopted. By choosing optimal orbit constellations for these satellite pairs, high-frequency mass variations will be observable and temporal aliasing errors from under-sampling will no longer be the limiting factor. As part of the European Space Agency (ESA) study "ADDCON" (ADDitional CONstellation and Scientific Analysis Studies of the Next Generation Gravity Mission), a variety of mission design parameters for such constellations are investigated by full numerical simulations. These simulations aim at investigating the impact of several orbit design choices and at mitigating aliasing errors in the gravity field retrieval by co-parametrization for various constellations of Bender-type NGGMs. Choices of orbit design parameters such as altitude profiles during mission lifetime, length of the retrieval period, value of sub-cycles, and choice of prograde versus retrograde orbits are investigated as well. Results of these simulations are presented and optimal constellations for NGGMs are identified. Finally, a short outlook towards new geophysical applications, such as a near real time service for hydrology, is given.

  14. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    PubMed

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
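
    For readers checking the arithmetic behind figures such as "1.2/1000 (95% CI: 0.6-1.8)", the sketch below shows one conventional way to express a count as a rate per 1000 with a normal-approximation confidence interval; the exact method used in the study is not stated, and the event count in the example is hypothetical.

        import math

        def rate_per_1000_with_ci(events, total, z=1.96):
            """Rate per 1000 with a normal-approximation (Wald) 95% CI."""
            p = events / total
            se = math.sqrt(p * (1.0 - p) / total)
            return 1000 * p, 1000 * (p - z * se), 1000 * (p + z * se)

        # Hypothetical: 15 reported incidents among 12 567 identified prescribing errors
        print(rate_per_1000_with_ci(15, 12567))   # roughly 1.2 (0.6-1.8) per 1000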

  15. Distance learning education for mitigation/adaptation policy: a case study

    NASA Astrophysics Data System (ADS)

    Slini, T.; Giama, E.; Papadopoulou, Ch.-O.

    2016-02-01

    The efficient training of young environmental scientists has proven to be a challenging goal over recent years, and several dynamic initiatives have been developed with the aim of providing complete and consistent education. A successful example is the e-learning course 'Development of mitigation/adaptation policy portfolios', for participants mainly coming from emerging-economy countries, organised in the frame of the project Promitheas4 (Knowledge transfer and research needs for preparing mitigation/adaptation policy portfolios), which aimed to provide knowledge transfer and to enhance new skills and competencies using modern didactic approaches and learning technologies. The present paper addresses the experience and the results of these actions, which seem promising and encouraging and were broadly welcomed by the participants.

  16. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy.

    PubMed

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-09-18

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms.
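
    A minimal single-axis sketch of the effect described above: integrating a reference angular velocity corrupted by a constant bias produces an orientation error that grows linearly with time, while white noise produces a random-walk error. The sample rate, bias and noise level are assumed values, not those of the paper's simulation framework.

```python
import numpy as np

fs = 100.0                                    # sample rate [Hz] (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)
omega_ref = np.sin(2.0 * np.pi * 0.5 * t)     # reference angular velocity [rad/s]

bias = 0.01                                   # constant bias [rad/s] (assumed)
sigma_w = 0.02                                # white-noise std per sample [rad/s] (assumed)
rng = np.random.default_rng(0)

def integrate(omega):
    """Rectangular-rule numerical integration of angular velocity to orientation."""
    return np.cumsum(omega) / fs

theta_ref = integrate(omega_ref)
err_bias = integrate(omega_ref + bias) - theta_ref
err_white = integrate(omega_ref + sigma_w * rng.standard_normal(t.size)) - theta_ref

print(f"after 60 s: bias-induced drift = {err_bias[-1]:.3f} rad "
      f"(expected {bias * t.size / fs:.3f}), white-noise error = {err_white[-1]:.3f} rad")
```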

  17. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy

    PubMed Central

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-01-01

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms. PMID:26393606

  18. Afocal optical flow sensor for reducing vertical height sensitivity in indoor robot localization and navigation.

    PubMed

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan

    2015-05-13

    This paper introduces a novel afocal optical flow sensor (OFS) system for odometry estimation in indoor robotic navigation. The OFS used in the computer optical mouse has been adopted for mobile robots because it is not affected by wheel slippage. Vertical height variance is thought to be a dominant factor in systematic error when estimating moving distances in mobile robots driving on uneven surfaces. We propose an approach to mitigate this error by using an afocal (infinite effective focal length) system. We conducted experiments on a linear guide on carpet and three other materials with varying sensor heights from 30 to 50 mm and a moving distance of 80 cm. The same experiments were repeated 10 times. For the proposed afocal OFS module, a 1 mm change in sensor height induces a 0.1% systematic error; for comparison, the error for a conventional fixed-focal-length OFS module is 14.7%. Finally, the proposed afocal OFS module was installed on a mobile robot and tested 10 times on a carpet for distances of 1 m. The average distance estimation error and standard deviation are 0.02% and 17.6%, respectively, whereas those for a conventional OFS module are 4.09% and 25.7%, respectively.

  19. A Quality Improvement Project to Decrease Human Milk Errors in the NICU.

    PubMed

    Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E

    2017-02-01

    Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk. Copyright © 2017 by the American Academy of Pediatrics.

  20. Advanced Mitigation Process (AMP) for Improving Laser Damage Threshold of Fused Silica Optics

    NASA Astrophysics Data System (ADS)

    Ye, Xin; Huang, Jin; Liu, Hongjie; Geng, Feng; Sun, Laixi; Jiang, Xiaodong; Wu, Weidong; Qiao, Liang; Zu, Xiaotao; Zheng, Wanguo

    2016-08-01

    The laser damage precursors in the subsurface of fused silica (e.g. photosensitive impurities, scratches and redeposited silica compounds) were mitigated by mineral acid leaching and by HF etching with multi-frequency ultrasonic agitation, respectively. The morphology of scratches after static etching and after high-frequency ultrasonic agitation etching was compared, and the laser-induced damage resistance of scratched and non-scratched fused silica surfaces after HF etching with high-frequency ultrasonic agitation was also investigated. The global laser-induced damage resistance increased significantly after the laser damage precursors were mitigated. Redeposition of reaction products was avoided by combining multi-frequency ultrasonic agitation with the chemical leaching process, which made the increase in laser damage threshold more consistent. In addition, no scratch-related damage initiation was found on samples treated by the Advanced Mitigation Process.

  1. Advanced Mitigation Process (AMP) for Improving Laser Damage Threshold of Fused Silica Optics

    PubMed Central

    Ye, Xin; Huang, Jin; Liu, Hongjie; Geng, Feng; Sun, Laixi; Jiang, Xiaodong; Wu, Weidong; Qiao, Liang; Zu, Xiaotao; Zheng, Wanguo

    2016-01-01

    The laser damage precursors in the subsurface of fused silica (e.g. photosensitive impurities, scratches and redeposited silica compounds) were mitigated by mineral acid leaching and by HF etching with multi-frequency ultrasonic agitation, respectively. The morphology of scratches after static etching and after high-frequency ultrasonic agitation etching was compared, and the laser-induced damage resistance of scratched and non-scratched fused silica surfaces after HF etching with high-frequency ultrasonic agitation was also investigated. The global laser-induced damage resistance increased significantly after the laser damage precursors were mitigated. Redeposition of reaction products was avoided by combining multi-frequency ultrasonic agitation with the chemical leaching process, which made the increase in laser damage threshold more consistent. In addition, no scratch-related damage initiation was found on samples treated by the Advanced Mitigation Process. PMID:27484188

  2. Validation of a RANS transition model using a high-order weighted compact nonlinear scheme

    NASA Astrophysics Data System (ADS)

    Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang

    2013-04-01

    A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational cost. The model employs local variables only, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations in order to minimize discretization errors and thereby avoid confusing numerical errors with transition model errors. The high-order package is compared with a second-order TVD method on simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence properties than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.

  3. Precise calibration of spatial phase response nonuniformity arising in liquid crystal on silicon.

    PubMed

    Xu, Jingquan; Qin, SiYi; Liu, Chen; Fu, Songnian; Liu, Deming

    2018-06-15

    In order to calibrate the spatial phase response nonuniformity of liquid crystal on silicon (LCoS), we propose to use a Twyman-Green interferometer to characterize the wavefront distortion due to the inherent curvature of the device. During the characterization, both the residual carrier frequency introduced by the Fourier transform evaluation method and the lens aberration are error sources. For the tilted phase error introduced by the residual carrier frequency, least-mean-square fitting is used to obtain the tilted phase error. Meanwhile, we use Zernike polynomial fitting based on plane mirror calibration to mitigate the lens aberration. For a typical LCoS with 1×12,288 pixels after calibration, the peak-to-valley value of the inherent wavefront distortion is approximately 0.25λ at 1550 nm, halving the wavefront distortion. Altogether, these measures suppress the root-mean-square value of the inherent wavefront distortion to approximately λ/34.
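
    A minimal sketch of one step in this calibration: removing the tilted phase introduced by the residual carrier frequency via least-squares fitting. The phase profile below is synthetic, and the Zernike-based lens-aberration removal is not reproduced here.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 12288)            # normalised pixel coordinate (1 x 12,288 LCoS)
inherent = 0.8 * x**2                        # synthetic curvature-induced distortion [rad]
measured = inherent + 5.0 * x + 0.3          # add residual-carrier tilt and piston [rad]

A = np.column_stack([x, np.ones_like(x)])    # design matrix for tilt + piston
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
detilted = measured - A @ coeffs             # subtract the fitted tilted phase

print(f"peak-to-valley before de-tilt: {np.ptp(measured):.2f} rad, "
      f"after de-tilt: {np.ptp(detilted):.2f} rad")
```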

  4. Spatial-temporal-covariance-based modeling, analysis, and simulation of aero-optics wavefront aberrations.

    PubMed

    Vogel, Curtis R; Tyler, Glenn A; Wittich, Donald J

    2014-07-01

    We introduce a framework for modeling, analysis, and simulation of aero-optics wavefront aberrations that is based on spatial-temporal covariance matrices extracted from wavefront sensor measurements. Within this framework, we present a quasi-homogeneous structure function to analyze nonhomogeneous, mildly anisotropic spatial random processes, and we use this structure function to show that phase aberrations arising in aero-optics are, for an important range of operating parameters, locally Kolmogorov. This strongly suggests that the d^{5/3} power law for adaptive optics (AO) deformable mirror fitting error, where d denotes actuator separation, holds for certain important aero-optics scenarios. This framework also allows us to compute bounds on AO servo lag error and predictive control error. In addition, it provides us with the means to accurately simulate AO systems for the mitigation of aero-effects, and it may provide insight into underlying physical processes associated with turbulent flow. The techniques introduced here are demonstrated using data obtained from the Airborne Aero-Optics Laboratory.

  5. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    PubMed

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance.
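
    A back-of-the-envelope illustration of the stereo quantization error analyzed above: with depth Z = f·B/d, a disparity quantization step Δd maps to a depth error of roughly Z²·Δd/(f·B), so the error grows quadratically with range. The camera parameters below are assumed, not those of the sensor described in the paper.

```python
f_px = 800.0        # focal length in pixels (assumed)
B = 0.30            # stereo baseline [m] (assumed)
dd = 1.0            # disparity quantization step [pixels]

for Z in (5.0, 10.0, 20.0, 40.0):           # pedestrian distances [m]
    dZ = Z**2 * dd / (f_px * B)             # first-order depth quantization error
    print(f"Z = {Z:5.1f} m -> depth quantization error ~ {dZ:.2f} m")
```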

  6. Error Analysis in a Stereo Vision-Based Pedestrian Detection Sensor for Collision Avoidance Applications

    PubMed Central

    Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance. PMID:22319323

  7. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  8. ENSO Predictions in an Intermediate Coupled Model Influenced by Removing Initial Condition Errors in Sensitive Areas: A Target Observation Perspective

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-07-01

    Previous studies indicate that ENSO predictions are particularly sensitive to the initial conditions in some key areas (so-called "sensitive areas"). And yet, few studies have quantified improvements in prediction skill in the context of an optimal observing system. In this study, the impact on prediction skill is explored using an intermediate coupled model in which the initial-condition errors used to make ENSO predictions are removed in certain areas. Based on ideal observing system simulation experiments, the importance of various observational networks for improving El Niño prediction skill is examined. The results indicate that the initial states in the central and eastern equatorial Pacific are important for improving El Niño prediction skill effectively. When the initial condition errors in the central equatorial Pacific are removed, ENSO prediction errors can be reduced by 25%. Furthermore, combinations of various subregions are considered to demonstrate their effect on ENSO prediction skill. In particular, seasonally varying observational networks are suggested to improve the prediction skill more effectively. For example, in addition to observing in the central equatorial Pacific and to its north throughout the year, increasing observations in the eastern equatorial Pacific from April to October is crucially important, which can improve the prediction accuracy by 62%. These results also demonstrate the effectiveness of the conditional nonlinear optimal perturbation approach for detecting sensitive areas for target observations.

  9. Effects of model error on control of large flexible space antenna with comparisons of decoupled and linear quadratic regulator control procedures

    NASA Technical Reports Server (NTRS)

    Hamer, H. A.; Johnson, K. G.

    1986-01-01

    An analysis was performed to determine the effects of model error on the control of a large flexible space antenna. Control was achieved by employing two three-axis control-moment gyros (CMG's) located on the antenna column. State variables were estimated by including an observer in the control loop that used attitude and attitude-rate sensors on the column. Errors were assumed to exist in the individual model parameters: modal frequency, modal damping, mode slope (control-influence coefficients), and moment of inertia. Their effects on control-system performance were analyzed either for (1) nulling initial disturbances in the rigid-body modes, or (2) nulling initial disturbances in the first three flexible modes. The study includes the effects on stability, time to null, and control requirements (defined as maximum torque and total momentum), as well as on the accuracy of obtaining initial estimates of the disturbances. The effects on the transients of the undisturbed modes are also included. The results, which are compared for decoupled and linear quadratic regulator (LQR) control procedures, are shown in tabular form, parametric plots, and as sample time histories of modal-amplitude and control responses. Results of the analysis showed that the effects of model errors on the control-system performance were generally comparable for both control procedures. The effect of mode-slope error was the most serious of all model errors.

  10. Stratospheric wind errors, initial states and forecast skill in the GLAS general circulation model

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J.

    1983-01-01

    Relations between stratospheric wind errors, initial states and 500 mb skill are investigated using the GLAS general circulation model initialized with FGGE data. Erroneous stratospheric winds are seen in all current general circulation models, appearing also as weak shear above the subtropical jet and as cold polar stratospheres. In this study it is shown that the more anticyclonic large-scale flows are correlated with large forecast stratospheric winds. In addition, it is found that for North America the resulting errors are correlated with initial state jet stream accelerations while for East Asia the forecast winds are correlated with initial state jet strength. Using 500 mb skill scores over Europe at day 5 to measure forecast performance, it is found that both poor forecast skill and excessive stratospheric winds are correlated with more anticyclonic large-scale flows over North America. It is hypothesized that the resulting erroneous kinetic energy contributes to the poor forecast skill, and that the problem is caused by a failure in the modeling of the stratospheric energy cycle in current general circulation models independent of vertical resolution.

  11. Smart-Divert Powered Descent Guidance to Avoid the Backshell Landing Dispersion Ellipse

    NASA Technical Reports Server (NTRS)

    Carson, John M.; Acikmese, Behcet

    2013-01-01

    A smart-divert capability has been added into the Powered Descent Guidance (PDG) software originally developed for Mars pinpoint and precision landing. The smart-divert algorithm accounts for the landing dispersions of the entry backshell, which separates from the lander vehicle at the end of the parachute descent phase and prior to powered descent. The smart-divert PDG algorithm utilizes the onboard fuel and vehicle thrust vectoring to mitigate landing error in an intelligent way: ensuring that the lander touches down with minimum- fuel usage at the minimum distance from the desired landing location that also avoids impact by the descending backshell. The smart-divert PDG software implements a computationally efficient, convex formulation of the powered-descent guidance problem to provide pinpoint or precision-landing guidance solutions that are fuel-optimal and satisfy physical thrust bound and pointing constraints, as well as position and speed constraints. The initial smart-divert implementation enforced a lateral-divert corridor parallel to the ground velocity vector; this was based on guidance requirements for MSL (Mars Science Laboratory) landings. This initial method was overly conservative since the divert corridor was infinite in the down-range direction despite the backshell landing inside a calculable dispersion ellipse. Basing the divert constraint instead on a local tangent to the backshell dispersion ellipse in the direction of the desired landing site provides a far less conservative constraint. The resulting enhanced smart-divert PDG algorithm avoids impact with the descending backshell and has reduced conservatism.
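
    A small geometry sketch of the constraint described above: a halfspace tangent to the backshell dispersion ellipse on the side facing the desired landing site, so that admissible landing positions stay clear of the ellipse. The ellipse shape, center, and target coordinates are assumed values for illustration, not the MSL/PDG implementation.

```python
import numpy as np

c = np.array([0.0, 0.0])                          # dispersion ellipse center [m] (assumed)
E = np.diag([1.0 / 300.0**2, 1.0 / 150.0**2])     # shape matrix: (x - c)^T E (x - c) <= 1
target = np.array([500.0, 100.0])                 # desired landing site [m] (assumed)

d = target - c
p = c + d / np.sqrt(d @ E @ d)                    # ellipse boundary point toward the target
n = E @ (p - c)                                   # outward normal at p (up to scale)

def satisfies_divert_constraint(x):
    """Halfspace constraint n . (x - c) >= 1 tangent to the ellipse at p."""
    return n @ (x - c) >= 1.0

print(satisfies_divert_constraint(target))        # True: target lies outside the ellipse
print(satisfies_divert_constraint(c))             # False: the dispersion ellipse is excluded
```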

  12. A Track Initiation Method for the Underwater Target Tracking Environment

    NASA Astrophysics Data System (ADS)

    Li, Dong-dong; Lin, Yang; Zhang, Yao

    2018-04-01

    A novel efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method should determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors certainly pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. There are three primary shortcomings of current track initiation methods when initializing a target: (a) they cannot effectively eliminate the disturbances caused by clutter; (b) they may yield a high false alarm probability and a low track detection probability; (c) they cannot correctly estimate the initial state of a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, which include the true track originating from the target, in order to increase the track detection probability. In order to decrease the false alarm probability, track pruning and track merging, based on the evaluation mechanism, are proposed to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determine the target's existence and estimate its initial state with the least squares method. Moreover, our method is fully automatic and does not require any kind of manual input for initializing or tuning any parameter. Simulation results indicate that our new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.
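
    A minimal sketch of the least-squares initial-state estimate used once a track is confirmed, assuming constant-velocity motion and synthetic measurements; the full TSEPM splitting, evaluating, pruning and merging logic is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0
times = np.arange(5) * dt
x0_true, v_true = 100.0, -3.0                     # true initial position and velocity (assumed)
z = x0_true + v_true * times + 5.0 * rng.standard_normal(times.size)  # noisy position measurements

# Measurement model z_k = x0 + v * t_k  ->  linear least squares for [x0, v]
H = np.column_stack([np.ones_like(times), times])
state, *_ = np.linalg.lstsq(H, z, rcond=None)
print(f"estimated initial state: position {state[0]:.1f}, velocity {state[1]:.2f}")
```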

  13. Improved Sensor Fault Detection, Isolation, and Mitigation Using Multiple Observers Approach

    PubMed Central

    Wang, Zheng; Anand, D. M.; Moyne, J.; Tilbury, D. M.

    2017-01-01

    Traditional Fault Detection and Isolation (FDI) methods analyze a residual signal to detect and isolate sensor faults. The residual signal is the difference between the sensor measurements and the estimated outputs of the system based on an observer. The traditional residual-based FDI methods, however, have some limitations. First, they require that the observer has reached its steady state. In addition, residual-based methods may not detect some sensor faults, such as faults on critical sensors that result in an unobservable system. Furthermore, the system may be in jeopardy if actions required for mitigating the impact of the faulty sensors are not taken before the faulty sensors are identified. The contribution of this paper is to propose three new methods to address these limitations. Faults that occur during the observers' transient state can be detected by analyzing the convergence rate of the estimation error. Open-loop observers, which do not rely on sensor information, are used to detect faults on critical sensors. By switching among different observers, we can potentially mitigate the impact of the faulty sensor during the FDI process. These three methods are systematically integrated with a previously developed residual-based method to provide an improved FDI and mitigation capability framework. The overall approach is validated mathematically, and the effectiveness of the overall approach is demonstrated through simulation on a 5-state suspension system. PMID:28924303
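
    A toy sketch of the traditional residual-based method that the paper builds on: a Luenberger observer is run alongside the plant and a sensor fault is flagged when the residual (measurement minus estimated output) exceeds a threshold. The system below is a 2-state toy model, not the paper's 5-state suspension system, and the gains and threshold are assumed.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.95]])     # discrete-time plant (assumed)
C = np.array([[1.0, 0.0]])                  # one position sensor
L = np.array([[0.5], [0.3]])                # observer gain (assumed, stabilising)
threshold = 0.5                             # residual threshold (assumed)

x = np.array([[0.0], [1.0]])                # true state
x_hat = np.zeros((2, 1))                    # observer state
for k in range(100):
    y = C @ x
    if k >= 50:                             # inject a sensor bias fault at step 50
        y = y + 2.0
    r = y - C @ x_hat                       # residual signal
    if abs(r[0, 0]) > threshold:
        print(f"sensor fault flagged at step {k}")
        break
    x_hat = A @ x_hat + L @ r               # observer update
    x = A @ x                               # plant update (no input, no process noise)
```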

  14. Heat Islands

    EPA Pesticide Factsheets

    EPA's Heat Island Effect Site provides information on heat islands, their impacts, mitigation strategies, related research, a directory of heat island reduction initiatives in U.S. communities, and EPA's Heat Island Reduction Program.

  15. Authenticating concealed private data while maintaining concealment

    DOEpatents

    Thomas, Edward V [Albuquerque, NM; Draelos, Timothy J [Albuquerque, NM

    2007-06-26

    A method of and system for authenticating concealed and statistically varying multi-dimensional data comprising: acquiring an initial measurement of an item, wherein the initial measurement is subject to measurement error; applying a transformation to the initial measurement to generate reference template data; acquiring a subsequent measurement of an item, wherein the subsequent measurement is subject to measurement error; applying the transformation to the subsequent measurement; and calculating a Euclidean distance metric between the transformed measurements; wherein the calculated Euclidean distance metric is identical to the Euclidean distance metric between the measurements prior to transformation.
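
    An illustrative sketch of the distance-preservation property (not the patented transformation itself): any orthogonal transform leaves Euclidean distances unchanged, so a template can be stored in a concealed domain while authentication compares the same distance metric.

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))      # random orthogonal (distance-preserving) matrix

reference = rng.standard_normal(8)                    # initial (enrollment) measurement
attempt = reference + 0.05 * rng.standard_normal(8)   # later measurement with measurement error

d_plain = np.linalg.norm(reference - attempt)
d_concealed = np.linalg.norm(Q @ reference - Q @ attempt)
print(f"distance in the clear: {d_plain:.6f}, distance after transform: {d_concealed:.6f}")
```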

  16. Sensitivity of Forecast Skill to Different Objective Analysis Schemes

    NASA Technical Reports Server (NTRS)

    Baker, W. E.

    1979-01-01

    Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.

  17. Initializing a Mesoscale Boundary-Layer Model with Radiosonde Observations

    NASA Astrophysics Data System (ADS)

    Berri, Guillermo J.; Bertossa, Germán

    2018-01-01

    A mesoscale boundary-layer model is used to simulate low-level regional wind fields over the La Plata River of South America, a region characterized by a strong daily cycle of land-river surface-temperature contrast and low-level circulations of sea-land breeze type. The initial and boundary conditions are defined from a limited number of local observations and the upper boundary condition is taken from the only radiosonde observations available in the region. The study considers 14 different upper boundary conditions defined from the radiosonde data at standard levels, significant levels, the level of the inversion base and interpolated levels at fixed heights, all of them within the first 1500 m. The period of analysis is 1994-2008, during which eight daily observations from 13 weather stations of the region are used to validate the 24-h surface-wind forecast. The model errors are defined as the root-mean-square of relative error in wind-direction frequency distribution and mean wind speed per wind sector. Wind-direction errors are greater than wind-speed errors and show significant dispersion among the different upper boundary conditions, not present in wind speed, revealing a sensitivity to the initialization method. The wind-direction errors show a well-defined daily cycle, not evident in wind speed, with the minimum at noon and the maximum at dusk, but no systematic deterioration with time. The errors grow with the height of the upper boundary condition level, in particular for wind direction, and are double those obtained when the upper boundary condition is defined from the lower levels. The conclusion is that defining the model upper boundary condition from radiosonde data closer to the ground minimizes the low-level wind-field errors throughout the region.

  18. The Protective Action Encoding of Serotonin Transients in the Human Brain.

    PubMed

    Moran, Rosalyn J; Kishida, Kenneth T; Lohrenz, Terry; Saez, Ignacio; Laxton, Adrian W; Witcher, Mark R; Tatter, Stephen B; Ellis, Thomas L; Phillips, Paul Em; Dayan, Peter; Montague, P Read

    2018-05-01

    The role of serotonin in human brain function remains elusive due, at least in part, to our inability to measure rapidly the local concentration of this neurotransmitter. We used fast-scan cyclic voltammetry to infer serotonergic signaling from the striatum of 14 brains of human patients with Parkinson's disease. Here we report these novel measurements and show that they correlate with outcomes and decisions in a sequential investment game. We find that serotonergic concentrations transiently increase as a whole following negative reward prediction errors, while reversing when counterfactual losses predominate. This provides initial evidence that the serotonergic system acts as an opponent to dopamine signaling, as anticipated by theoretical models. Serotonin transients on one trial were also associated with actions on the next trial in a manner that correlated with decreased exposure to poor outcomes. Thus, the fluctuations observed for serotonin appear to correlate with the inhibition of over-reactions and promote persistence of ongoing strategies in the face of short-term environmental changes. Together these findings elucidate a role for serotonin in the striatum, suggesting it encodes a protective action strategy that mitigates risk and modulates choice selection particularly following negative environmental events.

  19. Postural adjustment errors during lateral step initiation in older and younger adults

    PubMed Central

    Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.

    2016-01-01

    The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25595953

  20. Postural adjustment errors during lateral step initiation in older and younger adults

    PubMed Central

    Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.

    2014-01-01

    The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25183162

  1. REIMR: A Process for Utilizing Propulsion-Oriented 'Lessons-Learned' to Mitigate Development Risk

    NASA Technical Reports Server (NTRS)

    Ballard, Richard O.; Brown, Kendall K.

    2005-01-01

    This paper is a summary overview of a study conducted at the NASA Marshall Space Flight Center (MSFC) during the initial phases of the Space Launch Initiative (SLI) program to evaluate a large number of technical problems associated with the design, development, test, evaluation and operation of several major liquid propellant rocket engine systems (i.e., SSME, Fastrac, J-2, F-1). The result of this study was the identification of the "Fundamental Root Causes" that enabled the technical problems to manifest, and practices that can be implemented to prevent them from recurring in future engine development efforts. This paper will discuss the Fundamental Root Causes, cite some examples of how the technical problems arose from them, and provide a discussion of how they can be mitigated or avoided.

  2. Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation

    PubMed Central

    De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan

    2017-01-01

    In this paper, we propose a model to characterize Infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted signal Multipaths (MP), which are the main cause of error in IR-LPS, and makes several contributions to mitigation methods. Current methods are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements. PMID:28406436
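
    A hedged sketch of a generic two-component reflection model (a Lambertian diffuse term plus a narrow specular lobe), included only to illustrate what a two-component structure looks like. The paper's exact parameterisation and its 12-measurement fitting procedure are not reproduced, and k_d, k_s and m below are assumed values.

```python
import numpy as np

def reflected_power(theta_in, theta_out, k_d=0.6, k_s=0.3, m=20):
    """theta_in / theta_out: incidence / observation angles from the surface normal [rad]."""
    diffuse = k_d * np.cos(theta_out)                                # Lambertian component
    lobe = np.clip(np.cos(theta_out - theta_in), 0.0, None) ** m    # peaks near the mirror direction
    return diffuse + k_s * lobe

for deg in (0, 15, 30, 45, 60):
    p = reflected_power(np.radians(30.0), np.radians(deg))
    print(f"observation angle {deg:2d} deg -> relative reflected power {p:.3f}")
```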

  3. Endogenous mitigation of H2S inside of the landfills.

    PubMed

    Fang, Yuan; Zhong, Zhong; Shen, Dongsheng; Du, Yao; Xu, Jing; Long, Yuyang

    2016-02-01

    Vast quantities of hydrogen sulfide (H2S) emitted from landfill sites require urgent disposal. The current study focused on source control and examined the migration and conversion behavior of sulfur compounds in two lab-scale simulated landfills with different operation modes. It aimed to explore the possible strategies and mechanisms for H2S endogenous mitigation inside of landfills during decomposition. It was found that the strength of H2S emissions from the landfill sites was dependent on the municipal solid waste (MSW) degradation speed and vertical distribution of sulfide. Leachate recirculation can shorten both the H2S influence period and pollution risk to the surrounding environment. H2S endogenous mitigation may be achieved by chemical oxidation, biological oxidation, adsorption, and/or precipitation in different stages. Migration and conversion mainly affected H2S release behavior during the initial stabilization phase in the landfill. Microbial activities related to sulfur, nitrogen, and iron can further promote H2S endogenous mitigation during the high reducing phase. Thus, H2S endogenous mitigation can be effectively enhanced via control of the aforementioned processes.

  4. Early benefits of mitigation in risk of regional climate extremes

    NASA Astrophysics Data System (ADS)

    Ciavarella, Andrew; Stott, Peter; Lowe, Jason

    2017-04-01

    Large differences in climate outcomes are projected by the end of this century depending on whether greenhouse gas emissions continue to increase or are reduced sufficiently to limit total warming to below 2 °C. However, it is generally thought that benefits of mitigation are hidden by internal climate variability until later in the century. Here we show that if the likelihood of extremely hot seasons is considered, the benefits of mitigation emerge more quickly than previously thought. It takes less than 20 years of emissions reductions in many regions for the likelihood of extreme seasonal warmth to reduce by more than half following initiation of mitigation. Additionally we show that the latest possible date at which the probability of extreme seasonal temperatures will be halved through emissions reductions consistent with the 2 °C target is in the 2040s. Exposure to climate risk is therefore reduced markedly and rapidly with substantial reductions of greenhouse gas emissions, demonstrating that the early mitigation needed to limit eventual warming below potentially dangerous levels benefits societies in the nearer term not just in the longer-term future.

  5. Transmission of 2 × 56 Gb/s PAM-4 signal over 100 km SSMF using 18 GHz DMLs.

    PubMed

    Zhou, Shiwei; Li, Xiang; Yi, Lilin; Yang, Qi; Fu, Songnian

    2016-04-15

    We experimentally demonstrate C-band 2 × 56 Gb/s pulse-amplitude modulation (PAM)-4 signal transmission over 100 km standard single-mode fiber (SSMF) using 18 GHz direct-modulated lasers (DMLs) and direct detection, without inline optical amplifier. A delay interferometer (DI) at the transmitter side is used to extend the transmission reach from 40 to 100 km. A digital Volterra filter at the receiver side is used to mitigate the nonlinear distortions. We obtain an average bit error ratio (BER) of 1.5 × 10(-3) for 2 × 56 Gb/s PAM-4 signal after 100 km SSMF transmission at the optimal input power, which is below the 7% forward error correction (FEC) threshold (3.8 × 10(-3)).
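
    A hedged sketch of nonlinear equalization in the spirit of the digital Volterra filter mentioned above, using a simplified memory-polynomial structure fit by least squares on a training block. The toy channel, memory depth and nonlinear order are assumed; the paper's actual filter is not specified here.

```python
import numpy as np

rng = np.random.default_rng(3)
symbols = rng.choice([-3.0, -1.0, 1.0, 3.0], size=4000)        # PAM-4 levels

# Toy channel: linear ISI plus a mild memoryless nonlinearity and noise (assumed)
rx = np.convolve(symbols, [0.9, 0.25], mode="same")
rx = rx + 0.05 * rx**2 + 0.02 * rng.standard_normal(rx.size)

def features(x, memory=3):
    """First- and second-order (squared) delayed taps of the received signal plus a bias term."""
    cols = []
    for d in range(memory):
        xd = np.roll(x, d)
        cols.extend([xd, xd**2])
    cols.append(np.ones_like(x))
    return np.column_stack(cols)

X = features(rx)
w, *_ = np.linalg.lstsq(X[:2000], symbols[:2000], rcond=None)   # train on the first half
eq = X[2000:] @ w                                               # equalize the remainder

levels = np.array([-3.0, -1.0, 1.0, 3.0])
decided = levels[np.argmin(np.abs(eq[:, None] - levels), axis=1)]
ser = np.mean(decided != symbols[2000:])
print(f"symbol error rate after memory-polynomial equalization: {ser:.4f}")
```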

  6. Application of a reflective microscope objective for multiphoton microscopy.

    PubMed

    Kabir, Mohammad M; Choubal, Aakash M; Toussaint, Kimani C

    2018-04-20

    Reflective objectives (ROs) mitigate chromatic aberration across a broad wavelength range. Yet, a systematic performance characterisation of ROs has not been done. In this paper, we compare the performance of a 0.5 numerical-aperture (NA) reflective objective (RO) with a 0.55 NA standard glass objective (SO), using two-photon fluorescence (TPF) and second-harmonic generation (SHG). For experiments spanning ∼1 octave in the visible and NIR wavelengths, the SO leads to defocusing errors of 25-40% for TPF images of subdiffraction fluorescent beads and 10-12% for SHG images of collagen fibres. The corresponding error for the RO is ∼4% for both imaging modalities. This work emphasises the potential utility of ROs for multimodal multiphoton microscopy applications. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.

  7. Lessons from Crew Resource Management for Cardiac Surgeons.

    PubMed

    Marvil, Patrick; Tribble, Curt

    2017-04-30

    Crew resource management (CRM) describes a system developed in the late 1970s in response to a series of deadly commercial aviation crashes. This system has been universally adopted in commercial and military aviation and is now an integral part of aviation culture. CRM is an error mitigation strategy developed to reduce human error in situations in which teams operate in complex, high-stakes environments. Over time, the principles of this system have been applied and utilized in other environments, particularly in medical areas dealing with high-stakes outcomes requiring optimal teamwork and communication. While the data from formal studies on the effectiveness of formal CRM training in medical environments have reported mixed results, it seems clear that some of these principles should have value in the practice of cardiovascular surgery.

  8. Technology-based Interventions for Preventing and Treating Substance Use Among Youth

    PubMed Central

    Marsch, Lisa A.; Borodovsky, Jacob T.

    2017-01-01

    Summary: Preventing or mitigating substance use among youth generally involves three different intervention frameworks: universal prevention, selective prevention, and treatment. Each of these levels of intervention poses unique therapeutic and implementation challenges. Technology-based interventions provide solutions to many of these problems by delivering evidence-based interventions in a consistent and cost-effective manner. This article summarizes the current state of the science of technology-based interventions for preventing substance use initiation and mitigating substance use and associated consequences among youth. PMID:27613350

  9. Characterization of Sleep During Military Operations and the Impact of Deployment-Related Factors: The Behavioral Health Needs Assessment Survey

    DTIC Science & Technology

    2014-09-11

    reported getting insufficient sleep, and this interacted with mission type. Sleep disruption was associated with number of deployments, as well as...number of months in a combat zone. Further, those with more sleep difficulty and disruption were more likely to have caused an accident or error that...if any, deployment-related factors might mitigate the myriad negative consequences of disturbed sleep. OBJECTIVES AND HYPOTHESES Previous reports

  10. Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.

    2011-01-01

    Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.

  11. Simulation and mitigation of higher-order ionospheric errors in PPP

    NASA Astrophysics Data System (ADS)

    Zus, Florian; Deng, Zhiguo; Wickert, Jens

    2017-04-01

    We developed a rapid and precise algorithm to compute ionospheric phase advances in a realistic electron density field. The electron density field is derived from a plasmaspheric extension of the International Reference Ionosphere (Gulyaeva and Bilitza, 2012) and the magnetic field stems from the International Geomagnetic Reference Field. For specific station locations, elevation and azimuth angles, the ionospheric phase advances are stored in a look-up table. The higher-order ionospheric residuals are computed by forming the standard linear combination of the ionospheric phase advances. In a simulation study we examine how the higher-order ionospheric residuals leak into estimated station coordinates, clocks, zenith delays and tropospheric gradients in precise point positioning. The simulation study includes a few hundred globally distributed stations and covers the time period 1990-2015. We take a close look at the estimated zenith delays and tropospheric gradients as they are considered a data source for meteorological and climate related research. We also show how the by-product of this simulation study, the look-up tables, can be used to mitigate higher-order ionospheric errors in practice. Gulyaeva, T.L., and Bilitza, D. Towards ISO Standard Earth Ionosphere and Plasmasphere Model. In: New Developments in the Standard Model, edited by R.J. Larsen, pp. 1-39, NOVA, Hauppauge, New York, 2012, available at https://www.novapublishers.com/catalog/product_info.php?products_id=35812
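
    For context, the standard (first-order) ionosphere-free linear combination referred to above removes the 1/f² ionospheric term from dual-frequency carrier-phase observations; higher-order terms do not cancel in this combination, which is why a correction such as the look-up table described here is needed. The GPS L1/L2 frequencies are the standard values; the phase inputs are placeholders.

```python
f1 = 1575.42e6   # GPS L1 frequency [Hz]
f2 = 1227.60e6   # GPS L2 frequency [Hz]

def ionosphere_free(phi1, phi2):
    """Combine carrier-phase observations (in metres) on L1 and L2."""
    return (f1**2 * phi1 - f2**2 * phi2) / (f1**2 - f2**2)

# A first-order slant ionospheric effect scales as 1/f^2, so it cancels exactly:
rho = 22000e3          # geometric range + non-dispersive terms [m] (placeholder)
I = 5.0                # first-order ionospheric term on L1 [m] (placeholder)
phi1 = rho - I
phi2 = rho - I * (f1 / f2) ** 2
print(ionosphere_free(phi1, phi2) - rho)   # ~0: the first-order term is removed
```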

  12. Exploring cosmic origins with CORE: Mitigation of systematic effects

    NASA Astrophysics Data System (ADS)

    Natoli, P.; Ashdown, M.; Banerji, R.; Borrill, J.; Buzzelli, A.; de Gasperis, G.; Delabrouille, J.; Hivon, E.; Molinari, D.; Patanchon, G.; Polastri, L.; Tomasi, M.; Bouchet, F. R.; Henrot-Versillé, S.; Hoang, D. T.; Keskitalo, R.; Kiiveri, K.; Kisner, T.; Lindholm, V.; McCarthy, D.; Piacentini, F.; Perdereau, O.; Polenta, G.; Tristram, M.; Achucarro, A.; Ade, P.; Allison, R.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Bartlett, J.; Bartolo, N.; Basak, S.; Baumann, D.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Boulanger, F.; Brinckmann, T.; Bucher, M.; Burigana, C.; Cai, Z.-Y.; Calvo, M.; Carvalho, C.-S.; Castellano, M. G.; Challinor, A.; Chluba, J.; Clesse, S.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; de Bernardis, P.; De Zotti, G.; Di Valentino, E.; Diego, J.-M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Finelli, F.; Forastieri, F.; Galli, S.; Genova-Santos, R.; Gerbino, M.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Gruppuso, A.; Hagstotz, S.; Hanany, S.; Handley, W.; Hernandez-Monteagudo, C.; Hervías-Caimapo, C.; Hills, M.; Keihänen, E.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lattanzi, M.; Lesgourgues, J.; Lewis, A.; Liguori, M.; López-Caniego, M.; Luzzi, G.; Maffei, B.; Mandolesi, N.; Martinez-González, E.; Martins, C. J. A. P.; Masi, S.; Matarrese, S.; Melchiorri, A.; Melin, J.-B.; Migliaccio, M.; Monfardini, A.; Negrello, M.; Notari, A.; Pagano, L.; Paiella, A.; Paoletti, D.; Piat, M.; Pisano, G.; Pollo, A.; Poulin, V.; Quartin, M.; Remazeilles, M.; Roman, M.; Rossi, G.; Rubino-Martin, J.-A.; Salvati, L.; Signorelli, G.; Tartari, A.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Valiviita, J.; Van de Weijgaert, R.; van Tent, B.; Vennin, V.; Vielva, P.; Vittorio, N.; Wallis, C.; Young, K.; Zannoni, M.

    2018-04-01

    We present an analysis of the main systematic effects that could impact the measurement of CMB polarization with the proposed CORE space mission. We employ timeline-to-map simulations to verify that the CORE instrumental set-up and scanning strategy allow us to measure sky polarization to a level of accuracy adequate to the mission science goals. We also show how the CORE observations can be processed to mitigate the level of contamination by potentially worrying systematics, including intensity-to-polarization leakage due to bandpass mismatch, asymmetric main beams, pointing errors and correlated noise. We use analysis techniques that are well validated on data from current missions such as Planck to demonstrate how the residual contamination of the measurements by these effects can be brought to a level low enough not to hamper the scientific capability of the mission, nor significantly increase the overall error budget. We also present a prototype of the CORE photometric calibration pipeline, based on that used for Planck, and discuss its robustness to systematics, showing how CORE can achieve its calibration requirements. While a fine-grained assessment of the impact of systematics requires a level of knowledge of the system that can only be achieved in a future study phase, the analysis presented here strongly suggests that the main areas of concern for the CORE mission can be addressed using existing knowledge, techniques and algorithms.

  13. Is the deleterious effect of cryotherapy on proprioception mitigated by exercise?

    PubMed

    Ribeiro, F; Moreira, S; Neto, J; Oliveira, J

    2013-05-01

    This study aimed to examine the acute effects of cryotherapy on knee position sense and to determine the time period necessary to normalize joint position sense when exercising after cryotherapy. Twelve subjects visited the laboratory twice, once for cryotherapy followed by 30 min of exercise on a cycloergometer and once for cryotherapy followed by 30 min of rest. Sessions were randomly determined and separated by 48 h. Cryotherapy was applied in the form of an ice bag, filled with 1 kg of crushed ice, for 20 min. Knee position sense was measured at baseline, after cryotherapy and every 5 min after cryotherapy removal until a total of 30 min. The main effect of cryotherapy was significant, showing an increase in absolute (F(7,154)=43.76, p<0.001) and relative (F(7,154)=7.97, p<0.001) errors after cryotherapy. The intervention after cryotherapy (rest vs. exercise) revealed a significant main effect only for absolute error (F(7,154)=4.05, p<0.001), i.e., when subjects exercised after cryotherapy, the proprioceptive acuity reached the baseline values faster (10 min vs. 15 min). Our results indicated that the deleterious effect of cryotherapy on proprioception is mitigated by low-intensity exercise, with the time necessary to normalize knee position sense reduced from 15 to 10 min. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background: Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods: We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results: We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions: Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
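
    A minimal sketch of the parallel-implementation check described above: run the same scenario through independently built model versions and flag outputs that differ by more than the +/-5% material-error threshold. The output names and numbers below are placeholders, not values from the study.

```python
MATERIAL_THRESHOLD = 0.05   # +/-5% difference defines a material error

def compare_versions(outputs_by_version, reference="named_matrices"):
    """Flag metrics in each version that differ materially from a reference version."""
    ref = outputs_by_version[reference]
    flags = {}
    for version, outputs in outputs_by_version.items():
        if version == reference:
            continue
        for metric, value in outputs.items():
            rel_diff = (value - ref[metric]) / ref[metric]
            if abs(rel_diff) > MATERIAL_THRESHOLD:
                flags.setdefault(version, []).append((metric, round(100 * rel_diff, 1)))
    return flags

outputs = {
    "named_matrices":     {"on_treatment": 1000, "virally_suppressed": 820},
    "named_single_cells": {"on_treatment": 1030, "virally_suppressed": 870},
    "column_row_refs":    {"on_treatment": 1260, "virally_suppressed": 980},
}
print(compare_versions(outputs))
```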

  15. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.

  16. MUSIC: MUlti-Scale Initial Conditions

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.

  17. Medication safety initiative in reducing medication errors.

    PubMed

    Nguyen, Elisa E; Connolly, Phyllis M; Wong, Vivian

    2010-01-01

    The purpose of the study was to evaluate whether a Medication Pass Time Out initiative was effective and sustainable in reducing medication administration errors. A retrospective descriptive method was used for this research, where a structured Medication Pass Time Out program was implemented following staff and physician education. As a result, the rate of interruptions during the medication administration process decreased from 81% to 0. From the observations at baseline, 6 months, and 1 year after implementation, the percent of doses of medication administered without interruption improved from 81% to 99%. Medication doses administered without errors at baseline, 6 months, and 1 year improved from 98% to 100%.

  18. Attitude guidance and tracking for spacecraft with two reaction wheels

    NASA Astrophysics Data System (ADS)

    Biggs, James D.; Bai, Yuliang; Henninger, Helen

    2018-04-01

    This paper addresses the guidance and tracking problem for a rigid-spacecraft using two reaction wheels (RWs). The guidance problem is formulated as an optimal control problem on the special orthogonal group SO(3). The optimal motion is solved analytically as a function of time and is used to reduce the original guidance problem to one of computing the minimum of a nonlinear function. A tracking control using two RWs is developed that extends previous singular quaternion stabilisation controls to tracking controls on the rotation group. The controller is proved to locally asymptotically track the generated reference motions using Lyapunov's direct method. Simulations of a 3U CubeSat demonstrate that this tracking control is robust to initial rotation errors and angular velocity errors in the controlled axis. For initial angular velocity errors in the uncontrolled axis and under significant disturbances the control fails to track. However, the singular tracking control is combined with a nano-magnetic torquer which simply damps the angular velocity in the uncontrolled axis and is shown to provide a practical control method for tracking in the presence of disturbances and initial condition errors.
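
    As a rough illustration of the tracking-error computation involved, the sketch below forms an attitude error on SO(3) and a PD-type torque restricted to two wheel axes. This is a generic geometric-control sketch, not the paper's singular quaternion-based controller; the gains, the choice of controlled axes, and the rotation matrices are assumptions.

```python
# Generic sketch of attitude tracking error on SO(3) with actuation limited to
# two reaction-wheel axes. Not the paper's controller; gains and axes are
# illustrative assumptions.
import numpy as np

def vee(S):
    """Inverse of the hat map: extract the vector from a skew-symmetric matrix."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def attitude_error(R, Rd):
    """Standard SO(3) attitude error e_R = 0.5 * vee(Rd^T R - R^T Rd)."""
    return 0.5 * vee(Rd.T @ R - R.T @ Rd)

def control_torque(R, Rd, w, wd, kR=0.5, kw=2.0, wheel_axes=(0, 1)):
    """PD-type torque, zeroed on the uncontrolled (wheel-less) axis."""
    e_R = attitude_error(R, Rd)
    e_w = w - wd
    tau = -kR * e_R - kw * e_w
    mask = np.zeros(3)
    mask[list(wheel_axes)] = 1.0
    return tau * mask   # no actuation about the third axis

# Example: small initial rotation error about z, zero desired rates
Rd = np.eye(3)
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
print(control_torque(R, Rd, w=np.zeros(3), wd=np.zeros(3)))
```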

  19. Workshops Increase Students' Proficiency at Identifying General and APA-Style Writing Errors

    ERIC Educational Resources Information Center

    Jorgensen, Terrence D.; Marek, Pam

    2013-01-01

    To determine the effectiveness of 20- to 30-min workshops on recognition of errors in American Psychological Association-style writing, 58 introductory psychology students attended one of the three workshops (on grammar, mechanics, or references) and completed error recognition tests (pretest, initial posttest, and three follow-up tests). As a…

  20. Effect of gyro verticality error on lateral autoland tracking performance for an inertially smoothed control law

    NASA Technical Reports Server (NTRS)

    Thibodeaux, J. J.

    1977-01-01

    The results of a simulation study performed to determine the effects of gyro verticality error on lateral autoland tracking and landing performance are presented. A first-order vertical gyro error model was used to generate the measurement of the roll attitude feedback signal normally supplied by an inertial navigation system. The lateral autoland law used was an inertially smoothed control design. The effects of initial angular gyro tilt errors (2 deg, 3 deg, 4 deg, and 5 deg), introduced prior to localizer capture, were investigated by use of a small perturbation aircraft simulation. These errors represent the deviations which could occur in the conventional attitude sensor as a result of maneuver-induced spin-axis misalignment and drift. Results showed that for a 1.05 deg per minute erection rate and a 5 deg initial tilt error, ON COURSE autoland control logic was not satisfied. Failure to attain the ON COURSE mode precluded high control loop gains and localizer beam path integration and resulted in unacceptable beam standoff at touchdown.

  1. Effects of errors and gaps in spatial data sets on assessment of conservation progress.

    PubMed

    Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C

    2013-10-01

    Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.

  2. Fault-tolerance thresholds for the surface code with fabrication errors

    NASA Astrophysics Data System (ADS)

    Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.

    2017-10-01

    The construction of topological error correction codes requires the ability to fabricate a lattice of physical qubits embedded on a manifold with a nontrivial topology such that the quantum information is encoded in the global degrees of freedom (i.e., the topology) of the manifold. However, the manufacturing of large-scale topological devices will undoubtedly suffer from fabrication errors—permanent faulty components such as missing physical qubits or failed entangling gates—introducing permanent defects into the topology of the lattice and hence significantly reducing the distance of the code and the quality of the encoded logical qubits. In this work we investigate how fabrication errors affect the performance of topological codes, using the surface code as the test bed. A known approach to mitigate defective lattices involves the use of primitive swap gates in a long sequence of syndrome extraction circuits. Instead, we show that in the presence of fabrication errors the syndrome can be determined using the supercheck operator approach and the outcome of the defective gauge stabilizer generators without any additional computational overhead or use of swap gates. We report numerical fault-tolerance thresholds in the presence of both qubit fabrication and gate fabrication errors using a circuit-based noise model and the minimum-weight perfect-matching decoder. Our numerical analysis is most applicable to two-dimensional chip-based technologies, but the techniques presented here can be readily extended to other topological architectures. We find that in the presence of 8% qubit fabrication errors, the surface code can still tolerate a computational error rate of up to 0.1%.

  3. Analysis of RFI Statistics for Aquarius RFI Detection and Mitigation Improvements

    NASA Technical Reports Server (NTRS)

    de Matthaeis, Paolo; Soldo, Yan; Le Vine, David M.

    2016-01-01

    Aquarius is an L-band active/passive sensor designed to globally map sea surface salinity from space. Two instruments, a radar scatterometer and a radiometer, observe the same surface footprint almost simultaneously. The radiometer is the primary instrument for sensing sea surface salinity (SSS), while the scatterometer is included to provide a correction for sea surface roughness, which is a primary source of error in the salinity retrieval. Although the primary objective is the measurement of SSS, the instrument combination operates continuously, acquiring data over land and sea ice as well. An important feature of the data processing includes detection and mitigation of Radio Frequency Interference (RFI) which is done separately for both active and passive instruments. Correcting for RFI is particularly critical over ocean because of the high accuracy required in the brightness temperature measurements for SSS retrieval. It is also necessary for applications of the Aquarius data over land, where man-made interference is widespread, even though less accuracy is required in this case. This paper will provide an overview of the current status of the Aquarius RFI processing and an update on the ongoing work on the improvement of the RFI detection and mitigation performance.

  4. Initiatives to address vehicle compatibility

    DOT National Transportation Integrated Search

    2003-06-01

    National Highway Traffic Safety Administration (NHTSA) is the national and international leader in collecting and analyzing motor vehicle crash data, and in developing countermeasures relevant to preventing and mitigating vehicle crashes, thereby red...

  5. Very short-term reactive forecasting of the solar ultraviolet index using an extreme learning machine integrated with the solar zenith angle.

    PubMed

    Deo, Ravinesh C; Downs, Nathan; Parisi, Alfio V; Adamowski, Jan F; Quilty, John M

    2017-05-01

    Exposure to erythemally-effective solar ultraviolet radiation (UVR) that contributes to malignant keratinocyte cancers and associated health-risk is best mitigated through innovative decision-support systems, with global solar UV index (UVI) forecasts necessary to inform real-time sun-protection behaviour recommendations. It follows that UVI forecasting models are useful tools for such decision-making. In this study, a model for computationally-efficient data-driven forecasting of diffuse and global very short-term reactive (VSTR) (10-min lead-time) UVI, enhanced by drawing on solar zenith angle (θs) data, was developed using an extreme learning machine (ELM) algorithm. An ELM algorithm typically serves to address complex and ill-defined forecasting problems. A UV spectroradiometer situated in Toowoomba, Australia, measured daily cycles (0500-1700h) of UVI over the austral summer period. After trialling activation functions based on sine, hard limit, logarithmic and tangent sigmoid and triangular and radial basis networks for best results, an optimal ELM architecture utilising a logarithmic sigmoid equation in the hidden layer, with lagged combinations of θs as the predictor data, was developed. ELM's performance was evaluated using statistical metrics: correlation coefficient (r), Willmott's Index (WI), Nash-Sutcliffe efficiency coefficient (ENS), root mean square error (RMSE), and mean absolute error (MAE) between observed and forecasted UVI. Using these metrics, the ELM model's performance was compared to that of existing methods: multivariate adaptive regression spline (MARS), M5 Model Tree, and a semi-empirical (Pro6UV) clear sky model. Based on RMSE and MAE values, the ELM model (0.255, 0.346, respectively) outperformed the MARS (0.310, 0.438) and M5 Model Tree (0.346, 0.466) models. Concurring with these metrics, the Willmott's Index for the ELM, MARS and M5 Model Tree models was 0.966, 0.942 and 0.934, respectively. About 57% of the ELM model's absolute errors were small in magnitude (±0.25), whereas the MARS and M5 Model Tree models generated 53% and 48% of such errors, respectively, indicating the latter models' errors to be distributed in a larger magnitude error range. In terms of peak global UVI forecasting, with half the level of error, the ELM model outperformed MARS and M5 Model Tree. A comparison of the magnitude of hourly-cumulated errors of 10-min lead time forecasts for diffuse and global UVI highlighted the ELM model's greater accuracy compared to the MARS, M5 Model Tree or Pro6UV models. This confirmed the versatility of an ELM model drawing on θs data for VSTR forecasting of UVI at near real-time horizon. When applied to the goal of enhancing expert systems, ELM-based accurate forecasts capable of reacting quickly to measured conditions can enhance real-time exposure advice for the public, mitigating the potential for solar UV-exposure-related disease. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
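
    For reference, the verification metrics named above can be computed as in the following sketch; the observed and forecast series are illustrative values, not the study's data.

```python
# Minimal sketch of the verification metrics named in the abstract (r, WI,
# ENS, RMSE, MAE) for observed vs. forecast UVI series; the arrays below are
# illustrative, not the study's data.
import numpy as np

def verification_metrics(obs, fc):
    obs, fc = np.asarray(obs, float), np.asarray(fc, float)
    err = fc - obs
    rmse = np.sqrt(np.mean(err**2))
    mae = np.mean(np.abs(err))
    r = np.corrcoef(obs, fc)[0, 1]
    # Nash-Sutcliffe efficiency coefficient
    ens = 1.0 - np.sum(err**2) / np.sum((obs - obs.mean())**2)
    # Willmott's index of agreement
    wi = 1.0 - np.sum(err**2) / np.sum((np.abs(fc - obs.mean()) + np.abs(obs - obs.mean()))**2)
    return {"r": r, "WI": wi, "ENS": ens, "RMSE": rmse, "MAE": mae}

# Example with made-up 10-min UVI values
obs = [0.1, 0.4, 1.2, 2.5, 4.0, 5.5, 6.1, 5.8, 4.3, 2.2]
fc  = [0.2, 0.5, 1.0, 2.8, 3.8, 5.2, 6.4, 5.5, 4.6, 2.0]
print(verification_metrics(obs, fc))
```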

  6. Some practical problems in implementing randomization.

    PubMed

    Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet

    2010-06-01

    While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
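
    A minimal sketch of recommendations (1), (4) and (5), generating a seeded, reproducible permuted-block schedule before the trial, archiving the inputs, and later verifying that the archived schedule can be regenerated exactly, is given below; the block size, seed and arm labels are hypothetical.

```python
# Illustrative sketch of recommendations (1), (4) and (5): generate a permuted-block
# randomization schedule from a fixed, documented seed before the trial, save the
# inputs, and verify later that the archived schedule can be reproduced exactly.
# Block size, seed and arm labels are hypothetical.
import csv
import random

def permuted_block_schedule(n_participants, seed, block_size=4, arms=("A", "B")):
    rng = random.Random(seed)          # seeded generator -> reproducible schedule
    schedule = []
    while len(schedule) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_participants]

def save_schedule(path, schedule, seed):
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["participant", "assignment", "seed"])
        for i, arm in enumerate(schedule, 1):
            w.writerow([i, arm, seed])

SEED = 20100601
original = permuted_block_schedule(200, SEED)
save_schedule("randomization_schedule.csv", original, SEED)

# Verification step: re-derive the schedule from the archived seed and compare
assert permuted_block_schedule(200, SEED) == original, "schedule not reproducible"
```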

  7. Preventing medical errors by designing benign failures.

    PubMed

    Grout, John R

    2003-07-01

    One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
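
    The scald example can be made concrete with a toy sketch: an AND of "water too hot" and "water flowing" produces the undesirable top event, while a scald valve (the forcing function) cuts the flow and converts the same conditions into the benign "no water" failure. The threshold and function names below are illustrative assumptions.

```python
# Toy sketch of the scald example: a fault-tree-style AND of "water too hot"
# and "water flowing" yields the undesirable top event. Adding a scald valve
# (a forcing function) cuts flow when temperature is excessive, so the same
# conditions produce the benign "no water" failure instead. Threshold is hypothetical.
SCALD_THRESHOLD_C = 49.0  # hypothetical cutoff temperature

def outcome(water_temp_c, water_flowing, scald_valve_installed):
    if scald_valve_installed and water_temp_c > SCALD_THRESHOLD_C:
        water_flowing = False                     # valve forces flow off
    if water_flowing and water_temp_c > SCALD_THRESHOLD_C:
        return "patient scalded while bathing"    # undesirable top event
    if not water_flowing:
        return "no water"                         # benign failure
    return "normal bathing"

print(outcome(60.0, True, scald_valve_installed=False))  # -> patient scalded while bathing
print(outcome(60.0, True, scald_valve_installed=True))   # -> no water
print(outcome(40.0, True, scald_valve_installed=True))   # -> normal bathing
```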

  8. A global perspective of the limits of prediction skill based on the ECMWF ensemble

    NASA Astrophysics Data System (ADS)

    Zagar, Nedjeljka

    2016-04-01

    This talk presents a new model of global forecast error growth, applied to forecast errors simulated by the ensemble prediction system (ENS) of the ECMWF. The proxy for forecast errors is the total spread of the ECMWF operational ensemble forecasts, obtained by decomposing the wind and geopotential fields into normal-mode functions. In this way, the ensemble spread can be quantified separately for the balanced and inertio-gravity (IG) modes for every forecast range. Ensemble reliability is defined for the balanced and IG modes by comparing the ensemble spread with the control analysis in each scale. The results show that initial uncertainties in the ECMWF ENS are largest in the tropical large-scale modes and their spatial distribution is similar to the distribution of the short-range forecast errors. Initially the ensemble spread grows most in the smallest scales and in the synoptic range of the IG modes, but the overall growth is dominated by the increase of spread in balanced modes in synoptic and planetary scales in the midlatitudes. During the forecasts, the distribution of spread in the balanced and IG modes grows towards the climatological spread distribution characteristic of the analyses. The ENS system is found to be somewhat under-dispersive, which is associated with the lack of tropical variability, primarily the Kelvin waves. The new model of forecast error growth has three fitting parameters to parameterize the initial fast growth and a slower exponential error growth later on. The asymptotic values of forecast errors are independent of the exponential growth rate. It is found that the errors due to unbalanced dynamics saturate in around 10 days, while the balanced and total errors saturate in 3 to 4 weeks. Reference: Žagar, N., R. Buizza, and J. Tribbia, 2015: A three-dimensional multivariate modal analysis of atmospheric predictability with application to the ECMWF ensemble. J. Atmos. Sci., 72, 4423-4444.
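
    As a hedged illustration of fitting such a three-parameter error-growth curve, the sketch below fits an assumed saturating form (an initial offset relaxing exponentially toward an asymptote) to made-up spread data; this functional form is an assumption for illustration, not necessarily the one used in the cited work.

```python
# Hedged sketch: fit a three-parameter saturating error-growth curve
# E(t) = E_inf * (1 - (1 - s0) * exp(-a * t)) to spread-vs-lead-time data.
# The functional form is an illustrative assumption, and the data points are made up.
import numpy as np
from scipy.optimize import curve_fit

def growth(t, e_inf, s0, a):
    """Error growth from an initial fraction s0 of saturation toward e_inf."""
    return e_inf * (1.0 - (1.0 - s0) * np.exp(-a * t))

lead_days = np.array([0, 1, 2, 3, 5, 7, 10, 15, 20, 30], float)
spread = np.array([0.5, 1.1, 1.7, 2.2, 3.0, 3.6, 4.2, 4.7, 4.9, 5.0])  # arbitrary units

params, _ = curve_fit(growth, lead_days, spread, p0=(5.0, 0.1, 0.3))
e_inf, s0, a = params
print(f"asymptotic error {e_inf:.2f}, initial fraction {s0:.2f}, growth rate {a:.2f}/day")
```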

  9. Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast

    NASA Astrophysics Data System (ADS)

    Chu, Tianli; Xiong, Zixiang

    2003-12-01

    This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Jeff, E-mail: jmeye3@utsouthwestern.ed; Bluett, Jaques; Amos, Richard

    Purpose: Conventional proton therapy with passively scattered beams is used to treat a number of tumor sites, including prostate cancer. Spot scanning proton therapy is a treatment delivery means that improves conformal coverage of the clinical target volume (CTV). Placement of individual spots within a target is dependent on traversed tissue density. Errors in patient alignment perturb dose distributions. Moreover, there is a need for a rational planning approach that can mitigate the dosimetric effect of random alignment errors. We propose a treatment planning approach and then analyze the consequences of various simulated alignment errors on prostate treatments. Methods and Materials: Ten control patients with localized prostate cancer underwent treatment planning for spot scanning proton therapy. After delineation of the clinical target volume, a scanning target volume (STV) was created to guide dose coverage. Errors in patient alignment in two axes (rotational and yaw) as well as translational errors in the anteroposterior direction were then simulated, and dose to the CTV and normal tissues were reanalyzed. Results: Coverage of the CTV remained high even in the setting of extreme rotational and yaw misalignments. Changes in the rectum and bladder V45 and V70 were similarly minimal, except in the case of translational errors, where, as a result of opposed lateral beam arrangements, much larger dosimetric perturbations were observed. Conclusions: The concept of the STV as applied to spot scanning radiation therapy and as presented in this report leads to robust coverage of the CTV even in the setting of extreme patient misalignments.

  11. Frogs Exploit Statistical Regularities in Noisy Acoustic Scenes to Solve Cocktail-Party-like Problems.

    PubMed

    Lee, Norman; Ward, Jessica L; Vélez, Alejandro; Micheyl, Christophe; Bee, Mark A

    2017-03-06

    Noise is a ubiquitous source of errors in all forms of communication [1]. Noise-induced errors in speech communication, for example, make it difficult for humans to converse in noisy social settings, a challenge aptly named the "cocktail party problem" [2]. Many nonhuman animals also communicate acoustically in noisy social groups and thus face biologically analogous problems [3]. However, we know little about how the perceptual systems of receivers are evolutionarily adapted to avoid the costs of noise-induced errors in communication. In this study of Cope's gray treefrog (Hyla chrysoscelis; Hylidae), we investigated whether receivers exploit a potential statistical regularity present in noisy acoustic scenes to reduce errors in signal recognition and discrimination. We developed an anatomical/physiological model of the peripheral auditory system to show that temporal correlation in amplitude fluctuations across the frequency spectrum ("comodulation") [4-6] is a feature of the noise generated by large breeding choruses of sexually advertising males. In four psychophysical experiments, we investigated whether females exploit comodulation in background noise to mitigate noise-induced errors in evolutionarily critical mate-choice decisions. Subjects experienced fewer errors in recognizing conspecific calls and in selecting the calls of high-quality mates in the presence of simulated chorus noise that was comodulated. These data show unequivocally, and for the first time, that exploiting statistical regularities present in noisy acoustic scenes is an important biological strategy for solving cocktail-party-like problems in nonhuman animal communication. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. "Bad Luck Mutations": DNA Mutations Are not the Whole Answer to Understanding Cancer Risk.

    PubMed

    Trosko, James E; Carruba, Giuseppe

    2017-01-01

    It has been proposed that many human cancers are generated by intrinsic mechanisms that produce "Bad Luck" mutations by the proliferation of organ-specific adult stem cells. There have been serious challenges to this interpretation, including multiple extrinsic factors thought to be correlated with mutations found in cancers associated with these exposures. While support for both interpretations provides some validity, both interpretations ignore several concepts of the multistage, multimechanism process of carcinogenesis, namely, (1) mutations can be generated by both "errors of DNA repair" and "errors of DNA replication," during the "initiation" process of carcinogenesis; (2) "initiated" stem cells must be clonally amplified by nonmutagenic, intrinsic or extrinsic epigenetic mechanisms; (3) organ-specific stem cell numbers can be modified during in utero development, thereby altering the risk to cancer later in life; and (4) epigenetic tumor promoters are characterized by species, individual genetic-, gender-, developmental state-specificities, and threshold levels to be active; sustained and long-term exposures; and exposures in the absence of antioxidant "antipromoters." Because of the inevitability of some of the stem cells generating "initiating" mutations by either "errors of DNA repair" or "errors of DNA replication," a tumor is formed depending on the promotion phase of carcinogenesis. While it is possible to reduce our frequencies of mutagenic "initiated" cells, one can never reduce it to zero. Because of the extended period of the promotion phase of carcinogenesis, strategies to reduce the appearance of cancers must involve the interruption of the promotion of these initiated cells.

  13. Observation of non-classical correlations in sequential measurements of photon polarization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.

    2016-10-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.

  14. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    In this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) during parameter estimation. The reduction obtained was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.
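
    The two estimation criteria can be contrasted with a small sketch; the three-parameter retention form ln k = a + b/T + c·ln T is an assumed thermodynamic-style parameterization, and the data points and starting values are illustrative.

```python
# Hedged sketch of the two estimation criteria named in the abstract: sum of
# squared errors vs. maximum absolute error, here applied to an assumed
# three-parameter retention model ln k = a + b/T + c*ln(T). Data are made up.
import numpy as np
from scipy.optimize import minimize

def ln_k(params, T):
    a, b, c = params
    return a + b / T + c * np.log(T)

T = np.array([340.0, 360.0, 380.0, 400.0, 420.0])        # column temperatures, K
ln_k_obs = np.array([2.10, 1.55, 1.08, 0.68, 0.33])       # illustrative observations

sse = lambda p: np.sum((ln_k(p, T) - ln_k_obs) ** 2)       # least-squares criterion
max_abs = lambda p: np.max(np.abs(ln_k(p, T) - ln_k_obs))  # minimax criterion

p0 = np.array([10.0, -2000.0, -1.0])                       # crude initialization
fit_sse = minimize(sse, p0, method="Nelder-Mead")
fit_minimax = minimize(max_abs, p0, method="Nelder-Mead")
print("SSE fit:    ", fit_sse.x)
print("Minimax fit:", fit_minimax.x)
```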

  15. Rio Soliette (haiti): AN International Initiative for Flood-Hazard Assessment and Mitigation

    NASA Astrophysics Data System (ADS)

    Gandolfi, S.; Castellarin, A.; Barbarella, M.; Brath, A.; Domeneghetti, A.; Brandimarte, L.; Di Baldassarre, G.

    2013-01-01

    Catastrophic natural events are among the most critical threats to health and economies around the world, and their impact can be far more dramatic in poor regions than in other countries. Isla Hispaniola (Haiti and the Dominican Republic), one of the poorest regions of the planet, has repeatedly been hit by catastrophic natural disasters that caused incalculable human and economic losses. After the catastrophic flood event that occurred in the basin of the River Soliette on May 24th, 2004, the General Direction for Development and Cooperation of the Italian Department of Foreign Affairs funded an international cooperation initiative (ICI), coordinated by the University of Bologna, that involved Haitian and Dominican institutions. The main purpose of the ICI was a hydrological and hydraulic analysis of the May 2004 flood event aimed at formulating a suitable and affordable flood-risk mitigation plan, consisting of structural and non-structural measures. In this context, a topographic survey was necessary to build the hydrological model and to improve knowledge of the areas that were candidate sites for mitigation measures. To overcome the difficulties arising from scarce funds, few surveyors and the limited time available for the survey, only GPS techniques were used, both for the framing aspects (using the PPP approach) and for the geometrical survey of the river by means of river cross-sections and detailed surveys in two areas (RTK technique). This allowed us to reconstruct both the river geometry and the DTMs of two expansion areas (useful for designing hydraulic solutions to mitigate flood hazard).

  16. Effects of Orbital Lifetime Reduction on the Long-Term Earth Satellite Population as Modeled by EVOLVE 4.0

    NASA Technical Reports Server (NTRS)

    Krisko, Paula H.; Opiela, John N.; Liou, Jer-Chyi; Anz-Meador, Phillip D.; Theall, Jeffrey R.

    1999-01-01

    The latest update of the NASA orbital debris environment model, EVOLVE 4.0, has been used to study the effect of various proposed debris mitigation measures, including the NASA 25-year guideline. EVOLVE 4.0, which includes updates of the NASA breakup, solar activity, and orbit propagator models, a GEO analysis option, and non-fragmentation debris source models, allows for the statistical modeling and predicted growth of the particle population >1 mm in characteristic length in LEO and GEO orbits. The initial implementation of this model has been to study the sensitivity of the overall LEO debris environment to mitigation measures designed to limit the lifetime of intact objects in LEO orbits. The mitigation measures test matrix for this study included several commonly accepted testing schemes, i.e., the variation of the maximum LEO lifetime from 10 to 50 years, the date of the initial implementation of this policy, the shutoff of all explosions at some specified date, and the inclusion of disposal orbits. All are timely studies in that all scenarios have been suggested by researchers and satellite operators as options for the removal of debris from LEO orbits.

  17. Climate mitigation: sustainable preferences and cumulative carbon

    NASA Astrophysics Data System (ADS)

    Buckle, Simon

    2010-05-01

    We develop a stylized AK growth model with both climate damages to ecosystem goods and services and sustainable preferences that allow trade-offs between present discounted utility and long-run climate damages. The simplicity of the model permits analytical solutions. Concern for the long-term provides a strong driver for mitigation action. One plausible specification of sustainable preferences leads to the result that, for a range of initial parameter values, an optimizing agent would choose a level of cumulative carbon dioxide (CO2) emissions independent of initial production capital endowment and CO2 levels. There is no technological change so, for economies with sufficiently high initial capital and CO2 endowments, optimal mitigation will lead to disinvestment. For lower values of initial capital and/or CO2 levels, positive investment can be optimal, but still within the same overall level of cumulative emissions. One striking aspect of the model is the complexity of possible outcomes, in addition to these optimal solutions. We also identify a resource constrained region and several regions where climate damages exceed resources available for consumption. Other specifications of sustainable preferences are discussed, as is the case of a hard constraint on long-run damages. Scientists are currently highlighting the potential importance of the cumulative carbon emissions concept as a robust yet flexible target for climate policymakers. This paper shows that it also has an ethical interpretation: it embodies an implicit trade off in global welfare between present discounted welfare and long-term climate damages. We hope that further development of the ideas presented here might contribute to the research and policy debate on the critical areas of intra- and intergenerational welfare.

  18. Lessons learnt from Dental Patient Safety Case Reports

    PubMed Central

    Obadan, Enihomo M.; Ramoni, Rachel B.; Kalenderian, Elsbeth

    2015-01-01

    Background Errors are commonplace in dentistry; it is therefore imperative that dental professionals intercept them before they lead to an adverse event, and/or mitigate their effects when an adverse event occurs. This requires a systematic approach at both the profession level, encapsulated in the Agency for Healthcare Research and Quality’s Patient Safety Initiative structure, as well as at the practice level, where Crew Resource Management is a tested paradigm. Supporting patient safety at both the dental practice and profession levels relies on understanding the types and causes of errors, an area in which little is known. Methods A retrospective review of dental adverse events reported in the literature was performed. Electronic bibliographic databases were searched and data were extracted on background characteristics, incident description, case characteristics, the clinic setting where the adverse event originated, the phase of patient care in which the adverse event was detected, proximal cause, type of patient harm, degree of harm and recovery actions. Results 182 publications (containing 270 cases) were identified through our search. Delayed and unnecessary treatment/disease progression after misdiagnosis was the largest type of harm reported. 24.4% of reviewed cases were reported to have experienced permanent harm. One of every ten case reports reviewed (11.1%) reported that the adverse event resulted in the death of the affected patient. Conclusions Published case reports provide a window into understanding the nature and extent of dental adverse events, but for as much as the findings revealed about adverse events, they also identified the need for more broad-based contributions to our collective body of knowledge about adverse events in the dental office and their causes. Practical Implications Siloed and incomplete contributions to our understanding of adverse events in the dental office are threats to dental patients’ safety. PMID:25925524

  19. Steel Primer Chamber Assemblies for Dual Initiated Pyrovalves

    NASA Technical Reports Server (NTRS)

    Guernsey, Carl S.; Mizukami, Masashi; Zenz, Zac; Pender, Adam A.

    2009-01-01

    A solution was developed to mitigate the potential risk of ignition failures and burn-through in aluminum primer chamber assemblies on pyrovalves. This was accomplished by changing the assembly material from aluminum to steel, and reconfiguration of flame channels to provide more direct paths from initiators to boosters. With the geometric configuration of the channels changed, energy is more efficiently transferred from the initiators to the boosters. With the alloy change to steel, the initiator flame channels do not erode upon firing, eliminating the possibility of burn-through. Flight qualification tests have been successfully passed.

  20. Feature Migration in Time: Reflection of Selective Attention on Speech Errors

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.

    2012-01-01

    This article describes an initial study of the effect of focused attention on phonological speech errors. In 3 experiments, participants recited 4-word tongue twisters and focused attention on 1 (or none) of the words. The attended word was singled out differently in each experiment; participants were under instructions to avoid errors on the…

  1. Characterisation of residual ionospheric errors in bending angles using GNSS RO end-to-end simulations

    NASA Astrophysics Data System (ADS)

    Liu, C. L.; Kirchengast, G.; Zhang, K. F.; Norman, R.; Li, Y.; Zhang, S. C.; Carter, B.; Fritzer, J.; Schwaerz, M.; Choy, S. L.; Wu, S. Q.; Tan, Z. X.

    2013-09-01

    Global Navigation Satellite System (GNSS) radio occultation (RO) is an innovative meteorological remote sensing technique for measuring atmospheric parameters such as refractivity, temperature, water vapour and pressure for the improvement of numerical weather prediction (NWP) and global climate monitoring (GCM). GNSS RO has many unique characteristics including global coverage, long-term stability of observations, as well as high accuracy and high vertical resolution of the derived atmospheric profiles. One of the main error sources in GNSS RO observations that significantly affect the accuracy of the derived atmospheric parameters in the stratosphere is the ionospheric error. In order to mitigate the effect of this error, the linear ionospheric correction approach for dual-frequency GNSS RO observations is commonly used. However, the residual ionospheric errors (RIEs) can be still significant, especially when large ionospheric disturbances occur and prevail such as during the periods of active space weather. In this study, the RIEs were investigated under different local time, propagation direction and solar activity conditions and their effects on RO bending angles are characterised using end-to-end simulations. A three-step simulation study was designed to investigate the characteristics of the RIEs through comparing the bending angles with and without the effects of the RIEs. This research forms an important step forward in improving the accuracy of the atmospheric profiles derived from the GNSS RO technique.
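
    For reference, the linear dual-frequency ionospheric correction mentioned above combines the L1 and L2 bending angles evaluated at a common impact parameter. The following sketch shows that standard first-order combination; the bending-angle values are illustrative, not simulation output.

```python
# Sketch of the standard first-order (linear) dual-frequency ionospheric
# correction for RO bending angles referred to in the abstract:
#   alpha_c(a) = (f1^2 * alpha1(a) - f2^2 * alpha2(a)) / (f1^2 - f2^2)
# The residual ionospheric error (RIE) is what this combination leaves behind.
# Bending-angle values below are illustrative, not simulation output.
F1 = 1575.42e6  # GPS L1 frequency, Hz
F2 = 1227.60e6  # GPS L2 frequency, Hz

def ionosphere_corrected_bending_angle(alpha_l1, alpha_l2):
    """Linear combination of L1/L2 bending angles at a common impact parameter."""
    return (F1**2 * alpha_l1 - F2**2 * alpha_l2) / (F1**2 - F2**2)

# Example at one impact parameter (radians); the small L1/L2 difference is the
# dispersive ionospheric contribution
alpha_l1, alpha_l2 = 1.000e-2, 1.002e-2
alpha_c = ionosphere_corrected_bending_angle(alpha_l1, alpha_l2)
print(f"corrected bending angle: {alpha_c:.6e} rad")
```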

  2. The Incorporation and Initialization of Cloud Water/ice in AN Operational Forecast Model

    NASA Astrophysics Data System (ADS)

    Zhao, Qingyun

    Quantitative precipitation forecasts have been one of the weakest aspects of numerical weather prediction models. Theoretical studies show that the errors in precipitation calculation can arise from three sources: errors in the large-scale forecasts of primary variables, errors in the crude treatment of condensation/evaporation and precipitation processes, and errors in the model initial conditions. A new precipitation parameterization scheme has been developed to investigate the forecast value of improved precipitation physics via the introduction of cloud water and cloud ice into a numerical prediction model. The main feature of this scheme is the explicit calculation of cloud water and cloud ice in both the convective and stratiform precipitation parameterization. This scheme has been applied to the eta model at the National Meteorological Center. Four extensive tests have been performed. The statistical results showed a significant improvement in the model precipitation forecasts. Diagnostic studies suggest that the inclusion of cloud ice is important in transferring water vapor to precipitation and in the enhancement of latent heat release; the latter subsequently affects the vertical motion field significantly. Since three-dimensional cloud data is absent from the analysis/assimilation system for most numerical models, a method has been proposed to incorporate observed precipitation and nephanalysis data into the data assimilation system to obtain the initial cloud field for the eta model. In this scheme, the initial moisture and vertical motion fields are also improved at the same time as cloud initialization. The physical initialization is performed in a dynamical initialization framework that uses the Newtonian dynamical relaxation method to nudge the model's wind and mass fields toward analyses during a 12-hour data assimilation period. Results from a case study showed that a realistic cloud field was produced by this method at the end of the data assimilation period. Precipitation forecasts have been significantly improved as a result of the improved initial cloud, moisture and vertical motion fields.
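
    The Newtonian dynamical relaxation ("nudging") used in the dynamical initialization described above can be sketched for a single scalar variable as follows; the toy tendency, relaxation timescale and window length are illustrative assumptions, not the eta-model configuration.

```python
# Minimal sketch of Newtonian dynamical relaxation ("nudging"): during the
# assimilation window an extra term (x_analysis - x)/tau relaxes the model
# state toward the analysis. The scalar "model", timescale and window length
# are illustrative, not the eta-model configuration.
def integrate_with_nudging(x0, x_analysis, hours=12, dt=0.25, tau=3.0):
    """Forward-Euler integration of dx/dt = f(x) + (x_analysis - x)/tau."""
    f = lambda x: -0.05 * x          # stand-in for the model tendency
    x = x0
    for _ in range(int(hours / dt)):
        x += dt * (f(x) + (x_analysis - x) / tau)
    return x

x_free = integrate_with_nudging(10.0, x_analysis=10.0, tau=1e9)  # effectively no nudging
x_nudged = integrate_with_nudging(10.0, x_analysis=4.0, tau=3.0)
print(f"without nudging: {x_free:.2f}, nudged toward analysis: {x_nudged:.2f}")
```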

  3. The Agatston Urban Nutrition Initiative: working to reverse the obesity epidemic through academically based community service.

    PubMed

    Johnston, Francis E

    2009-01-01

    The Agatston Urban Nutrition Initiative (AUNI) presents a fruitful partnership between faculty and students at a premier research university and members of the surrounding community aimed at addressing the problem of childhood obesity. AUNI uses a problem-solving approach to learning by focusing course activities, including service-learning, on understanding and mitigating the obesity culture.

  4. 30 CFR 250.920 - What are the BSEE requirements for assessment of fixed platforms?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Assessment categories are defined in API RP 2A-WSD, Section 17.3 (as incorporated by reference in § 250.198... of API RP 2A-WSD, incorporated by reference in § 250.198, for a description of assessment initiators). (c) You must initiate mitigation actions for platforms that do not pass the assessment process of API...

  5. 30 CFR 250.920 - What are the BSEE requirements for assessment of fixed platforms?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Assessment categories are defined in API RP 2A-WSD, Section 17.3 (as incorporated by reference in § 250.198... of API RP 2A-WSD, incorporated by reference in § 250.198, for a description of assessment initiators). (c) You must initiate mitigation actions for platforms that do not pass the assessment process of API...

  6. 30 CFR 250.920 - What are the BSEE requirements for assessment of fixed platforms?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Assessment categories are defined in API RP 2A-WSD, Section 17.3 (as incorporated by reference in § 250.198... of API RP 2A-WSD, incorporated by reference in § 250.198, for a description of assessment initiators). (c) You must initiate mitigation actions for platforms that do not pass the assessment process of API...

  7. Mitigating Evidentiary Bias in Planning and Policy-Making Comment on "Reflective Practice: How the World Bank Explored Its Own Biases?"

    PubMed

    Parkhurst, Justin

    2016-07-20

    The field of cognitive psychology has increasingly provided scientific insights to explore how humans are subject to unconscious sources of evidentiary bias, leading to errors that can affect judgement and decision-making. Increasingly these insights are being applied outside the realm of individual decision-making to the collective arena of policy-making as well. A recent editorial in this journal has particularly lauded the work of the World Bank for undertaking an open and critical reflection on sources of unconscious bias in its own expert staff that could undermine achievement of its key goals. The World Bank case indeed serves as a remarkable case of a global policy-making agency making its own critical reflections transparent for all to see. Yet the recognition that humans are prone to cognitive errors has been known for centuries, and the scientific exploration of such biases provided by cognitive psychology is now well-established. What still remains to be developed, however, is a widespread body of work that can inform efforts to institutionalise strategies to mitigate the multiple sources and forms of evidentiary bias arising within administrative and policy-making environments. Addressing this gap will require a programme of conceptual and empirical work that supports robust development and evaluation of institutional bias mitigation strategies. The cognitive sciences provides a scientific basis on which to proceed, but a critical priority will now be the application of that science to improve policy-making within those agencies taking responsibility for social welfare and development programmes. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  8. Proposals for the mitigation of the environmental impact of clinical laboratories.

    PubMed

    Lopez, Joseph B; Badrick, Tony

    2012-03-24

    Laboratories should be aware of the carbon footprint resulting from their activities and take steps to mitigate it as part of their societal responsibilities. Once committed to a mitigation programme, they should announce an environmental policy, secure the support of senior management, initiate documentation, institute a staff training programme, schedule environmental audits and appoint an environmental manager. Laboratories may aspire to be accredited to one of the standards for environmental management, such as the ISO 14000. As environmental and quality issues are linked, the improvement in the environmental management of an organisation will ultimately lead to improved quality system performance. Indeed, environmental management could conceivably come under overall quality management. Although there will be initial costs, good environmental practices can bring savings. Environmental improvement should be based on the 3R concept to reduce, reuse and recycle. Several policy initiatives may be introduced. These include a green purchasing policy for equipment, laboratory furniture and reagents as well as the management of packaging wastes. There are several ways to reduce energy, water usage and wastage. A reduction of test numbers and collection tubes should be attempted. Paper management involves all aspects of 3R. The recycling of solvents and general wastes should be practised where feasible. The construction of new laboratories or the renovation of existing ones is an opportunity to make them more environmentally friendly. The advocacy of policies to associates and the inclusion of environmentally-friendly conditions on contractors are integral parts of the programme.

  9. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  10. Demonstration of Synthetic Aperture Radar and Hyperspectral Imaging for Wide Area Assessment at Pueblo Precision Bombing Range #2, Colorado

    DTIC Science & Technology

    2008-10-01

    resolution orthophoto and LiDAR datasets, as well as for the vegetation modeling conducted for SAR FAR mitigation. 3.4.4 Navigation Systems: An Applanix A... these accuracies. By registering eight cardinal pass-direction images per tile to the orthophotography and to each other, the horizontal error in... orthophoto image, which successfully increased the HSI image resolution to 0.25 m. (Table 4: SAR Performance Data.)

  11. Upset Characterization of the PowerPC405 Hard-core Processor Embedded in Virtex-II Pro Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Swift, Gary M.; Allen, Gregory S.; Farmanesh, Farhad; George, Jeffrey; Petrick, David J.; Chayab, Fayez

    2006-01-01

    Shown in this presentation are recent results for the upset susceptibility of the various types of memory elements in the embedded PowerPC405 in the Xilinx V2P40 FPGA. For critical flight designs where configuration upsets are mitigated effectively through appropriate design triplication and configuration scrubbing, these upsets of processor elements can dominate the system error rate. Data from irradiations with both protons and heavy ions are given and compared using available models.

  12. Risk Mitigation during Human Electromuscular Incapacitation Research

    DTIC Science & Technology

    2015-04-30

    that simulated different arrest conditions. Group 1 completed a 150 meter sprint followed by a 44-inch wall hurdle... sprint plus a 44-inch hurdle, 45 seconds of striking a heavy bag, a 10-second TASER X-25 EMI exposure, sprinting from a trained K-9 unit while wearing a

  13. A study of passive safety features by utilizing intra-subassembly-equipped self-actuated shutdown mechanism for future large fast breeder reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uto, N.; Niwa, H.; Ieda, Y.

    1996-08-01

    Passive prevention of core disruptive accidents (CDAs) is desired in terms of enhancement of safety for future fast breeder reactors. In addition, mitigation of CDAs' consequences should be required because mitigation measures have a potential of applying to all accidents, while prevention measures are prepared for specific accident initiators. In this paper, the Intra-Subassembly-equipped Self-Actuated Shutdown System (IS-SASS), which is considered effective on passive prevention and mitigation of CDAs, is described. The IS-SASS is introduced in a fuel subassembly and consists of absorber materials at the top of the active core and an inner duct through which molten fuel can be excluded out of the core. The determination of the appropriate number of the IS-SASS units, their arrangement in the core and their suitable structure are found to be suited to prevention and mitigation of CDAs for liquid metal-cooled large fast breeder reactors.

  14. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    PubMed Central

    DelSole, T.; Tippett, M.K.; Pegion, K.

    2018-01-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real‐time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real‐time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8–10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities. PMID:29937973

  15. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    NASA Astrophysics Data System (ADS)

    Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.

    2018-04-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.
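
    As a rough illustration of the quantity studied above, the sketch below computes the MSE of a lagged-ensemble mean as the number of lagged members grows; the synthetic forecasts (truth plus lag-dependent noise) are illustrative rather than CFSv2 output, and the paper's parametric covariance extrapolation is not reproduced.

```python
# Sketch: MSE of a lagged-ensemble mean of a Nino 3.4 forecast as a function of
# how many lagged initializations are averaged. Synthetic forecasts (truth plus
# noise that grows with lag) are illustrative, not CFSv2 output.
import numpy as np

rng = np.random.default_rng(0)
n_cases, max_lag_members = 500, 16        # verification cases x available lagged members
truth = rng.normal(0.0, 1.0, n_cases)

# member k is initialized k quarter-days earlier; its error std grows with lag
lag_days = np.arange(max_lag_members) / 4.0
err_std = 0.4 + 0.05 * lag_days
forecasts = truth[:, None] + rng.normal(0.0, 1.0, (n_cases, max_lag_members)) * err_std

for n in (1, 2, 4, 8, 16):
    lagged_mean = forecasts[:, :n].mean(axis=1)      # average the n most recent members
    mse = np.mean((lagged_mean - truth) ** 2)
    print(f"lagged ensemble of {n:2d} members: MSE = {mse:.3f}")
```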

  16. A model for the prediction of latent errors using data obtained during the development process

    NASA Technical Reports Server (NTRS)

    Gaffney, J. E., Jr.; Martello, S. J.

    1984-01-01

    A model, implemented in a program that runs on the IBM PC, for estimating the latent (or post ship) error content of a body of software upon its initial release to the user is presented. The model employs the count of errors discovered at one or more of the error discovery processes during development, such as a design inspection, as the input data for a process which provides estimates of the total life-time (injected) error content and of the latent (or post ship) error content--the errors remaining at delivery. The model presented presumes that these activities cover all of the opportunities during the software development process for error discovery (and removal).
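
    A hedged, simplified illustration of this kind of estimate (not the specific formulation of the cited model) is sketched below: given error counts from successive discovery activities and assumed per-activity detection efficiencies, a one-parameter fit yields estimates of the total injected and latent error content; all counts and efficiencies are hypothetical.

```python
# Hedged illustration (not the paper's exact model): given error counts
# discovered at successive development activities and *assumed* per-activity
# detection efficiencies, estimate the total injected error content and the
# latent (post-ship) remainder by a simple least-squares fit over candidate
# totals. The counts and efficiencies below are hypothetical.
import numpy as np

discovered = np.array([120, 85, 40, 22])        # e.g. design insp., code insp., unit test, system test
efficiency = np.array([0.45, 0.50, 0.40, 0.35]) # assumed fraction of *remaining* errors each activity finds

def predicted_discoveries(total_injected, eff):
    remaining, out = float(total_injected), []
    for p in eff:
        found = p * remaining
        out.append(found)
        remaining -= found
    return np.array(out)

candidates = np.arange(150, 600)                 # candidate total injected error counts
sse = [np.sum((predicted_discoveries(n, efficiency) - discovered) ** 2) for n in candidates]
total_hat = int(candidates[int(np.argmin(sse))])
latent_hat = total_hat - int(discovered.sum())
print(f"estimated injected errors: {total_hat}, estimated latent errors: {latent_hat}")
```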

  17. Exploring Senior Residents' Intraoperative Error Management Strategies: A Potential Measure of Performance Improvement.

    PubMed

    Law, Katherine E; Ray, Rebecca D; D'Angelo, Anne-Lise D; Cohen, Elaine R; DiMarco, Shannon M; Linsmeier, Elyse; Wiegmann, Douglas A; Pugh, Carla M

    The study aim was to determine whether residents' error management strategies changed across 2 simulated laparoscopic ventral hernia (LVH) repair procedures after receiving feedback on their initial performance. We hypothesized that error detection and recovery strategies would improve during the second procedure without hands-on practice. Retrospective review of participant procedural performances of simulated laparoscopic ventral herniorrhaphy. A total of 3 investigators reviewed procedure videos to identify surgical errors. Errors were deconstructed. Error management events were noted, including error identification and recovery. Residents performed the simulated LVH procedures during a course on advanced laparoscopy. Participants had 30 minutes to complete an LVH procedure. After verbal and simulator feedback, residents returned 24 hours later to perform a different, more difficult simulated LVH repair. Senior (N = 7; postgraduate year 4-5) residents in attendance at the course participated in this study. In the first LVH procedure, residents committed 121 errors (M = 17.14, standard deviation = 4.38). Although the number of errors increased to 146 (M = 20.86, standard deviation = 6.15) during the second procedure, residents progressed further in the second procedure. There was no significant difference in the number of errors committed for both procedures, but errors shifted to the late stage of the second procedure. Residents changed the error types that they attempted to recover (χ²(5) = 24.96, p < 0.001). For the second procedure, recovery attempts increased for action and procedure errors, but decreased for strategy errors. Residents also recovered the most errors in the late stage of the second procedure (p < 0.001). Residents' error management strategies changed between procedures following verbal feedback on their initial performance and feedback from the simulator. Errors and recovery attempts shifted to later steps during the second procedure. This may reflect residents' error management success in the earlier stages, which allowed further progression in the second simulation. Incorporating error recognition and management opportunities into surgical training could help track residents' learning curve and provide detailed, structured feedback on technical and decision-making skills. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  18. PDC bits break ground with advanced vibration mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-10-01

    Advancements in PDC bit technology have resulted in the identification and characterization of different types of vibrational modes that historically have limited PDC bit performance. As a result, concepts have been developed that prevent the initiation of vibration and also mitigate its damaging effects once it occurs. This vibration-reducing concept ensures more efficient use of the energy available to a PDC bit, thereby improving its performance. This improved understanding of the complex forces affecting bit performance is driving bit customization for specific drilling programs.

  19. Improving receiver performance of diffusive molecular communication with enzymes.

    PubMed

    Noel, Adam; Cheung, Karen C; Schober, Robert

    2014-03-01

    This paper studies the mitigation of intersymbol interference in a diffusive molecular communication system using enzymes that freely diffuse in the propagation environment. The enzymes form reaction intermediates with information molecules and then degrade them so that they cannot interfere with future transmissions. A lower bound expression on the expected number of molecules measured at the receiver is derived. A simple binary receiver detection scheme is proposed where the number of observed molecules is sampled at the time when the maximum number of molecules is expected. Insight is also provided into the selection of an appropriate bit interval. The expected bit error probability is derived as a function of the current and all previously transmitted bits. Simulation results show the accuracy of the bit error probability expression and the improvement in communication performance by having active enzymes present.
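
    A minimal sketch of the sampling-and-threshold idea, assuming a point release observed by a small passive spherical receiver with uniform first-order degradation by the enzymes; all parameter values and the closed-form expected-count expression are illustrative assumptions rather than the paper's exact model.

    ```python
    import numpy as np

    # Illustrative parameters (assumptions for this sketch, not the paper's values)
    D = 1e-9                        # diffusion coefficient of the information molecules, m^2/s
    k = 2.0e3                       # effective first-order degradation rate due to enzymes, 1/s
    d = 5.0e-7                      # transmitter-receiver distance, m
    N_tx = 1.0e5                    # molecules released to signal a binary "1"
    V_obs = 4.0 / 3.0 * np.pi * (5.0e-8) ** 3   # passive spherical observer volume, m^3

    def expected_count(t):
        """Expected molecules inside the observer at time t after an impulsive
        release: free diffusion from a point source with uniform degradation."""
        return N_tx * V_obs / (4 * np.pi * D * t) ** 1.5 * np.exp(-d ** 2 / (4 * D * t) - k * t)

    t = np.linspace(1e-6, 5e-4, 5000)
    t_peak = t[np.argmax(expected_count(t))]     # sample each bit interval at this offset

    def detect_bit(observed_count, threshold):
        """Binary detection: decide '1' if the count sampled at t_peak exceeds the threshold."""
        return int(observed_count >= threshold)
    ```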

  20. Precision of the calibration of the AXAF engineering test article (VETA) mirrors

    NASA Technical Reports Server (NTRS)

    Schwartz, D. A.; Chartas, G.; Hughes, John P.; Kellogg, Edwin M.; Zhao, Ping

    1992-01-01

    Measurements of the VETA encircled energies have been performed at 5 energies within 16 radii ranging from 0.05 to 200 arcseconds. We report here on the analysis of the accuracy of those measurements. A common 'error tree' structure applies, and we present representative numbers for the larger terms. At 0.277, 1.5, and 2.07 keV, and for radii of 3 arcsec and larger, our measurements have estimated 1 sigma errors of 0.6 to 1.5 percent. Effects of measurement statistics and of the VETA test mount limit the accuracy at smaller angles, and modulation by the counter window support structure together with the imperfect position repeatability limit the accuracy for the 0.93 and 2.3 keV energies. We expect to mitigate these limitations when calibrating the complete AXAF flight mirror assembly.

  1. Precision of the calibration of the AXAF Engineering Test Article (VETA) mirrors

    NASA Technical Reports Server (NTRS)

    Schwartz, D. A.; Chartas, G.; Hughes, J. P.; Kellogg, E. M.; Zhao, Ping

    1993-01-01

    Measurements of the VETA encircled energies have been performed at 5 energies within 16 radii ranging from 0.05 to 200 arcseconds. We report here on the analysis of the accuracy of those measurements. A common 'error tree' structure applies, and we present representative numbers for the larger terms. At 0.277, 1.5, and 2.07 keV, and for radii of 3 arcsec and larger, our measurements have estimated 1 sigma errors of 0.6 to 1.5 percent. Effects of measurement statistics and of the VETA test mount limit the accuracy at smaller angles, and modulation by the counter window support structure together with the imperfect position repeatability limit the accuracy for the 0.93 and 2.3 keV energies. We expect to mitigate these limitations when calibrating the complete AXAF flight mirror assembly.

  2. Metameric MIMO-OOK transmission scheme using multiple RGB LEDs.

    PubMed

    Bui, Thai-Chien; Cusani, Roberto; Scarano, Gaetano; Biagi, Mauro

    2018-05-28

    In this work, we propose a novel visible light communication (VLC) scheme utilizing multiple red-green-blue LED triplets, each with a different emission spectrum, together with spatial multiplexing to mitigate the interference arising between the different colors. On-off keying modulation is considered, and its effect on light emission in terms of flickering, dimming and color rendering is discussed so as to demonstrate how metameric properties have been taken into account. At the receiver, multiple photodiodes, each with a color filter tuned to one transmitting light emitting diode (LED), are employed. Three detection mechanisms are then proposed: color zero forcing, minimum mean square error estimation and minimum mean square error equalization. The system performance of the proposed scheme is evaluated both with computer simulations and with tests on an Arduino board implementation.
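
    The two linear detectors named above can be sketched in a few lines; the 3x3 crosstalk matrix and noise level below are assumed illustration values, not measured channel data.

    ```python
    import numpy as np

    # Assumed 3x3 optical crosstalk matrix: entry (i, j) is the response of
    # photodiode i (behind its color filter) to RGB triplet j.
    H = np.array([[1.0, 0.3, 0.1],
                  [0.2, 1.0, 0.3],
                  [0.1, 0.2, 1.0]])
    sigma2 = 0.05                                   # assumed receiver noise variance

    x = np.array([1.0, 0.0, 1.0])                   # OOK symbols on the three triplets
    y = H @ x + np.sqrt(sigma2) * np.random.randn(3)

    # Zero forcing: invert the color crosstalk, at the cost of noise enhancement
    x_zf = np.linalg.solve(H, y)

    # Linear MMSE: balance residual interference against noise amplification
    x_mmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(3), H.T @ y)

    bits_zf = (x_zf > 0.5).astype(int)
    bits_mmse = (x_mmse > 0.5).astype(int)
    ```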

  3. An Overview of the NASA Aviation Safety Program (AVSP) Systemwide Accident Prevention (SWAP) Human Performance Modeling (HPM) Element

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Goodman, Allen; Hooley, Becky L.

    2003-01-01

    An overview is provided of the Human Performance Modeling (HPM) element within the NASA Aviation Safety Program (AvSP). Two separate model development tracks for performance modeling of real-world aviation environments are described: the first focuses on the advancement of cognitive modeling tools for system design, while the second centers on a prescriptive engineering model of activity tracking for error detection and analysis. A progressive implementation strategy for both tracks is discussed in which increasingly more complex, safety-relevant applications are undertaken to extend the state-of-the-art, as well as to reveal potential human-system vulnerabilities in the aviation domain. Of particular interest is the ability to predict the precursors to error and to assess potential mitigation strategies associated with the operational use of future flight deck technologies.

  4. Trainable hardware for dynamical computing using error backpropagation through physical media.

    PubMed

    Hermans, Michiel; Burm, Michaël; Van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter

    2015-03-24

    Neural networks are currently implemented on digital Von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, the error backpropagation-a crucial step for tuning such systems towards a specific task-can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.

  5. Individualized Cognitive Modeling for Close-Loop Task Mitigation

    NASA Technical Reports Server (NTRS)

    Zhang, Guangfan; Xu, Roger; Wang, Wei; Li, Jiang; Schnell, Tom; Keller, Mike

    2010-01-01

    An accurate real-time operator functional state assessment makes it possible to perform task management, minimize risks, and improve mission performance. In this paper, we discuss the development of an individualized operator functional state assessment model that identifies states likely to lead to operational errors. To address large individual variations, we use two different approaches to build a model for each individual, using that individual's data as well as data from subjects with similar responses. If a subject's response is similar to that of the individual of interest in a specific functional state, all the training data from this subject are used to build the individual model. The individualization methods have been successfully verified and validated with a driving test data set provided by the University of Iowa. With the individualized models, the mean squared error can be significantly decreased (by around 20%).
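
    A minimal sketch of the pooling idea, assuming similarity is judged by correlating the subjects' responses and using ridge regression as the individual model; both choices are illustrative, not the paper's actual similarity measure or model.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    def build_individual_model(target_X, target_y, other_subjects, similarity_threshold=0.8):
        """Fit an individualized model from the subject's own data plus data from
        subjects whose responses look similar (illustrative sketch only)."""
        X_parts, y_parts = [target_X], [target_y]
        for other_X, other_y in other_subjects:
            n = min(len(target_y), len(other_y))
            r = np.corrcoef(target_y[:n], other_y[:n])[0, 1]
            if r >= similarity_threshold:          # pool data from "similar" subjects only
                X_parts.append(other_X)
                y_parts.append(other_y)
        model = Ridge(alpha=1.0)
        model.fit(np.vstack(X_parts), np.concatenate(y_parts))
        return model
    ```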

  6. Trainable hardware for dynamical computing using error backpropagation through physical media

    NASA Astrophysics Data System (ADS)

    Hermans, Michiel; Burm, Michaël; van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter

    2015-03-01

    Neural networks are currently implemented on digital Von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, the error backpropagation—a crucial step for tuning such systems towards a specific task—can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.

  7. An Adverse Drug Event and Medication Error Reporting System for Ambulatory Care (MEADERS)

    PubMed Central

    Zafar, Atif; Hickner, John; Pace, Wilson; Tierney, William

    2008-01-01

    The Institute of Medicine (IOM) has identified the mitigation of Adverse Drug Events (ADEs) and Medication Errors (MEs) as top national priorities. Currently available reporting tools are fraught with inefficiencies that prevent widespread adoption into busy primary care practices. Using expert panel input we designed and built a new reporting tool that could be used in these settings with a variety of information technology capabilities. We pilot tested the system in four Practice Based Research Networks (PBRNs) comprising 24 practices. Over 10 weeks we recorded 507 reports, of which 370 were MEs and 137 were ADEs. Clinicians found the system easy to use, with the average time to generating a report under 4 minutes. By using streamlined interface design techniques we were successfully able to improve reporting rates of ADEs and MEs in these practices. PMID:18999053

  8. Improving the safety of vaccine delivery.

    PubMed

    Evans, Huw P; Cooper, Alison; Williams, Huw; Carson-Stevens, Andrew

    2016-05-03

    Vaccines save millions of lives per annum as an integral part of community primary care provision worldwide. Adverse events due to the vaccine delivery process outnumber those arising from the pharmacological properties of the vaccines themselves. Whilst one in three patients receiving a vaccine will encounter some form of error, little is known about their underlying causes and how to mitigate them in practice. Patient safety incident reporting systems and adverse drug event surveillance offer a rich opportunity for understanding the underlying causes of those errors. Reducing harm relies on the identification and implementation of changes to improve vaccine safety at multiple levels: from patient interventions through to organizational actions at local, national and international levels. Here we highlight the potential for maximizing learning from patient safety incident reports to improve the quality and safety of vaccine delivery.

  9. Tropical forecasting - Predictability perspective

    NASA Technical Reports Server (NTRS)

    Shukla, J.

    1989-01-01

    Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.

  10. Adaptation of catch-up saccades during the initiation of smooth pursuit eye movements.

    PubMed

    Schütz, Alexander C; Souto, David

    2011-04-01

    Reduction of retinal speed and alignment of the line of sight are believed to be the respective primary functions of smooth pursuit and saccadic eye movements. As eye muscle strength can change in the short term, continuous adjustments of motor signals are required to achieve constant accuracy. While adaptation of saccade amplitude to systematic position errors has been extensively studied, we know less about the adaptive response to position errors during smooth pursuit initiation, when target motion has to be taken into account to program saccades, and when position errors at the saccade endpoint could also be corrected by increasing pursuit velocity. To study short-term adaptation (250 adaptation trials) of tracking eye movements, we introduced a position error during the first catch-up saccade made during the initiation of smooth pursuit, in a ramp-step-ramp paradigm. The target position was either shifted in the direction of the horizontally moving target (forward step), against it (backward step) or orthogonally to it (vertical step). Results indicate adaptation of catch-up saccade amplitude to backward and forward steps. With vertical steps, saccades became oblique, by an inflexion of the early or late saccade trajectory. With a similar time course, post-saccadic pursuit velocity was increased in the step direction, adding further evidence that under some conditions pursuit and saccades can act synergistically to reduce position errors.

  11. A framework for software fault tolerance in real-time systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems are presented. The technique requires that the process structure of a system be represented by a synchronization graph, which is used by an executive as a specification of the relative times at which the processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.
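
    A minimal sketch of a history-based error handler in this spirit; the recurrence rule and the recovery actions are illustrative assumptions, not the published classification scheme.

    ```python
    from collections import Counter

    class ErrorHandler:
        """History-based error classification and recovery (illustrative sketch)."""

        def __init__(self, recurrence_limit=3):
            self.recurrence_limit = recurrence_limit
            self.history = Counter()                 # (process, error_kind) -> occurrences

        def report(self, process, error_kind):
            """Record an error, classify it from the history, pick a recovery action."""
            self.history[(process, error_kind)] += 1
            if self.history[(process, error_kind)] >= self.recurrence_limit:
                classification = "persistent"        # keeps recurring in this process
            else:
                classification = "transient"
            return self.recover(process, classification)

        def recover(self, process, classification):
            if classification == "transient":
                return f"retry the failed exchange involving {process}"
            return f"switch {process} to a degraded/backup version"

    # handler = ErrorHandler()
    # handler.report("sensor_task", "timeout")
    ```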

  12. Mitigating Diminishing Manufacturing Sources Material Shortages (DMS/MS) and Obsolescence for the T-6 Canopy Fracturing Initiation System (CFIS)

    DTIC Science & Technology

    2012-03-01

    [Fragmented DTIC record excerpt; the full abstract is not available. Recoverable details: an AFIT thesis (AFIT/GSE/ENV/12-M01DL) by Richard P. Carrano, USN, on mitigating Diminishing Manufacturing Sources/Material Shortages (DMS/MS) and obsolescence for the T-6 Canopy Fracturing Initiation System (CFIS).]

  13. Identification of factors associated with diagnostic error in primary care.

    PubMed

    Minué, Sergio; Bermúdez-Tamayo, Clara; Fernández, Alberto; Martín-Martín, José Jesús; Benítez, Vivian; Melguizo, Miguel; Caro, Araceli; Orgaz, María José; Prados, Miguel Angel; Díaz, José Enrique; Montoro, Rafael

    2014-05-12

    Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians' perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process.

  14. Identification of factors associated with diagnostic error in primary care

    PubMed Central

    2014-01-01

    Background Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason’s taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Methods Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician’s initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians’ perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. Discussion This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process. PMID:24884984

  15. Effective empirical corrections for basis set superposition error in the def2-SVPD basis: gCP and DFT-C

    NASA Astrophysics Data System (ADS)

    Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin

    2017-06-01

    With the aim of mitigating the basis set error in density functional theory (DFT) calculations employing local basis sets, we herein develop two empirical corrections for basis set superposition error (BSSE) in the def2-SVPD basis, a basis which—when stripped of BSSE—is capable of providing near-complete-basis DFT results for non-covalent interactions. Specifically, we adapt the existing pairwise geometrical counterpoise (gCP) approach to the def2-SVPD basis, and we develop a beyond-pairwise approach, DFT-C, which we parameterize across a small set of intermolecular interactions. Both gCP and DFT-C are evaluated against the traditional Boys-Bernardi counterpoise correction across a set of 3402 non-covalent binding energies and isomerization energies. We find that the DFT-C method represents a significant improvement over gCP, particularly for non-covalently-interacting molecular clusters. Moreover, DFT-C is transferable among density functionals and can be combined with existing functionals—such as B97M-V—to recover large-basis results at a fraction of the cost.

  16. Experimental Verification of Sparse Aperture Mask for Low Order Wavefront Sensing

    NASA Astrophysics Data System (ADS)

    Subedi, Hari; Kasdin, N. Jeremy

    2017-01-01

    To directly image exoplanets, future space-based missions are equipped with coronagraphs which manipulate the diffraction of starlight and create regions of high contrast called dark holes. Theoretically, coronagraphs can be designed to achieve the high level of contrast required to image exoplanets, which are billions of times dimmer than their host stars; however, aberrations caused by optical imperfections and thermal fluctuations degrade the contrast in the dark holes. Focal plane wavefront control (FPWC) algorithms using deformable mirrors (DMs) are used to mitigate the quasi-static aberrations caused by optical imperfections. Although the FPWC methods correct the quasi-static aberrations, they are blind to dynamic errors caused by telescope jitter and thermal fluctuations. At Princeton's High Contrast Imaging Lab we have developed a new technique that integrates a sparse aperture mask (SAM) with the coronagraph to estimate these low-order dynamic wavefront errors. This poster shows the effectiveness of a SAM low-order wavefront sensor in estimating and correcting these errors via simulation and experiment and compares the results to other methods, such as the Zernike Wavefront Sensor planned for WFIRST.

  17. The upside of noise: engineered dissipation as a resource in superconducting circuits

    NASA Astrophysics Data System (ADS)

    Kapit, Eliot

    2017-09-01

    Historically, noise in superconducting circuits has been considered an obstacle to be removed. A large fraction of the research effort in designing superconducting circuits has focused on noise reduction, with great success, as coherence times have increased by four orders of magnitude in the past two decades. However, noise and dissipation can never be fully eliminated, and further, a rapidly growing body of theoretical and experimental work has shown that carefully tuned noise, in the form of engineered dissipation, can be a profoundly useful tool in designing and operating quantum circuits. In this article, I review important applications of engineered dissipation, including state generation, state stabilization, and autonomous quantum error correction, where engineered dissipation can mitigate the effect of intrinsic noise, reducing logical error rates in quantum information processing. Further, I provide a pedagogical review of the basic noise processes in superconducting qubits (photon loss and phase noise), and argue that any dissipative mechanism which can correct photon loss errors is very likely to automatically suppress dephasing. I also discuss applications for quantum simulation, and possible future research directions.

  18. Effectively parameterizing dissipative particle dynamics using COSMO-SAC: A partition coefficient study

    NASA Astrophysics Data System (ADS)

    Saathoff, Jonathan

    2018-04-01

    Dissipative Particle Dynamics (DPD) provides a tool for studying phase behavior and interfacial phenomena for complex mixtures and macromolecules. Methods to quickly and automatically parameterize DPD greatly increase its effectiveness. One such method is to map predicted activity coefficients derived from COSMO-SAC onto DPD parameter sets. However, there are serious limitations to the accuracy of this mapping, including the inability of single DPD beads to reproduce asymmetric infinite dilution activity coefficients, the loss of precision when reusing parameters for different molecular fragments, and the error due to bonding beads together. This report describes these effects in quantitative detail and provides methods to mitigate much of their deleterious effects. This includes a novel approach to remove errors caused by bonding DPD beads together. Using these methods, logarithm hexane/water partition coefficients were calculated for 61 molecules. The root mean-squared error for these calculations was determined to be 0.14—a very low value—with respect to the final mapping procedure. Cognizance of the above limitations can greatly enhance the predictive power of DPD.

  19. Mitigation of Angle Tracking Errors Due to Color Dependent Centroid Shifts in SIM-Lite

    NASA Technical Reports Server (NTRS)

    Nemati, Bijan; An, Xin; Goullioud, Renaud; Shao, Michael; Shen, Tsae-Pyng; Wehmeier, Udo J.; Weilert, Mark A.; Wang, Xu; Werne, Thomas A.; Wu, Janet P.

    2010-01-01

    The SIM-Lite astrometric interferometer will search for Earth-size planets in the habitable zones of nearby stars. In this search the interferometer will monitor the astrometric position of candidate stars relative to nearby reference stars over the course of a 5 year mission. The elemental measurement is the angle between a target star and a reference star. This is a two-step process, in which the interferometer will each time need to use its controllable optics to align the starlight in the two arms with each other and with the metrology beams. The sensor for this alignment is an angle tracking CCD camera. Various constraints in the design of the camera subject it to systematic alignment errors when observing a star of one spectrum compared with a star of a different spectrum. This effect is called a Color Dependent Centroid Shift (CDCS) and has been studied extensively with SIM-Lite's SCDU testbed. Here we describe results from the simulation and testing of this error in the SCDU testbed, as well as effective ways that it can be reduced to acceptable levels.

  20. Pricing and hedging derivative securities with neural networks: Bayesian regularization, early stopping, and bagging.

    PubMed

    Gençay, R; Qi, M

    2001-01-01

    We study the effectiveness of cross validation, Bayesian regularization, early stopping, and bagging to mitigate overfitting and improve generalization for pricing and hedging derivative securities with daily S&P 500 index call options from January 1988 to December 1993. Our results indicate that Bayesian regularization can generate significantly smaller pricing and delta-hedging errors than the baseline neural-network (NN) model and the Black-Scholes model for some years. While early stopping does not affect the pricing errors, it significantly reduces the hedging error (HE) in four of the six years we investigated. Although computationally most demanding, bagging seems to provide the most accurate pricing and delta hedging. Furthermore, the standard deviation of the MSPE of bagging is far less than that of the baseline model in all six years, and the standard deviation of the average HE of bagging is far less than that of the baseline model in five out of six years. We conclude that these techniques should be used, at least in cases when no appropriate hints are available.
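
    A hedged sketch of two of the techniques compared above, bagging and early stopping, using scikit-learn; the toy features, targets, network size and bag count are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(2000, 2))                     # placeholder inputs, e.g. moneyness and maturity
    y = np.maximum(X[:, 0] - 0.5, 0.0) + 0.05 * rng.normal(size=2000)   # toy option-like target

    models = []
    for b in range(10):                                 # number of bootstrap bags (assumed)
        idx = rng.integers(0, len(X), size=len(X))      # bootstrap resample of the training set
        net = MLPRegressor(hidden_layer_sizes=(10,),
                           early_stopping=True,         # hold out a validation split, stop when it stalls
                           max_iter=1000, random_state=b)
        net.fit(X[idx], y[idx])
        models.append(net)

    def bagged_prediction(X_new):
        """Bagging: average the predictions of the bootstrap-trained networks."""
        return np.mean([net.predict(X_new) for net in models], axis=0)
    ```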

  1. Philosophy of science and the diagnostic process.

    PubMed

    Willis, Brian H; Beebee, Helen; Lasserson, Daniel S

    2013-10-01

    This is an overview of the principles that underpin philosophy of science and how they may provide a framework for the diagnostic process. Although philosophy dates back to antiquity, it is only more recently that philosophers have begun to enunciate the scientific method. Since Aristotle formulated deduction, other modes of reasoning including induction, inference to best explanation, falsificationism, theory-laden observations and Bayesian inference have emerged. Thus, rather than representing a single overriding dogma, the scientific method is a toolkit of ideas and principles of reasoning. Here we demonstrate that the diagnostic process is an example of science in action and is therefore subject to the principles encompassed by the scientific method. Although a number of the different forms of reasoning are used readily by clinicians in practice, without a clear understanding of their pitfalls and the assumptions on which they are based, it leaves doctors open to diagnostic error. We conclude by providing a case example from the medico-legal literature in which diagnostic errors were made, to illustrate how applying the scientific method may mitigate the chance for diagnostic error.

  2. The role of model errors represented by nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" within ENSO predictions

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2017-04-01

    Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.

  3. Simulation of water-table aquifers using specified saturated thickness

    USGS Publications Warehouse

    Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.

    2014-01-01

    Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.

  4. Interaction of finger enslaving and error compensation in multiple finger force production.

    PubMed

    Martin, Joel R; Latash, Mark L; Zatsiorsky, Vladimir M

    2009-01-01

    Previous studies have documented two patterns of finger interaction during multi-finger pressing tasks, enslaving and error compensation, which do not agree with each other. Enslaving is characterized by positive correlation between instructed (master) and non-instructed (slave) finger(s), while error compensation can be described as a pattern of negative correlation between master and slave fingers. We hypothesize that the pattern of finger interaction, enslaving or compensation, depends on the initial force level and the magnitude of the targeted force change. Subjects were instructed to press with four fingers (I index, M middle, R ring, and L little) from a specified initial force to target forces following a ramp target line. Force-force relations between master and each of three slave fingers were analyzed during the ramp phase of trials by calculating correlation coefficients within each master-slave pair, and then a two-factor ANOVA was performed to determine the effect of initial force and force increase on the correlation coefficients. It was found that, as initial force increased, the value of the correlation coefficient decreased and in some cases became negative, i.e. the enslaving transformed into error compensation. Force increase magnitude had a smaller effect on the correlation coefficients. The observations support the hypothesis that the pattern of inter-finger interaction--enslaving or compensation--depends on the initial force level and, to a smaller degree, on the targeted magnitude of the force increase. They suggest that the controller views tasks with higher steady-state forces and smaller force changes as implying a requirement to avoid large changes in the total force.

  5. Addressing software security risk mitigations in the life cycle

    NASA Technical Reports Server (NTRS)

    Gilliam, David; Powell, John; Haugh, Eric; Bishop, Matt

    2003-01-01

    The NASA Office of Safety and Mission Assurance (OSMA) has funded the Jet Propulsion Laboratory (JPL) with a Center Initiative, 'Reducing Software Security Risk through an Integrated Approach' (RSSR), to address this need. The Initiative is a formal approach to addressing software security in the life cycle through the instantiation of a Software Security Assessment Instrument (SSAI) for the development and maintenance life cycles.

  6. Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiala, David J; Mueller, Frank; Engelmann, Christian

    Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the best suited protocols for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
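
    A minimal sketch of the replica-comparison idea, assuming a simple majority vote over redundant copies of a message; this is not RedMPI's actual consistency protocol, only an illustration of how triple redundancy can correct what double redundancy can merely detect.

    ```python
    def vote_on_replica_messages(replica_payloads):
        """Majority vote over redundant copies of one message.

        With triple redundancy a single corrupted copy is out-voted and corrected;
        with double redundancy a mismatch can be detected but not corrected.
        """
        counts = {}
        for payload in replica_payloads:
            counts[payload] = counts.get(payload, 0) + 1
        winner, votes = max(counts.items(), key=lambda kv: kv[1])
        if votes > len(replica_payloads) // 2:
            was_corrected = votes < len(replica_payloads)   # any replica out-voted?
            return winner, was_corrected
        raise RuntimeError("silent data corruption detected but not correctable")

    # Triple redundancy: the third replica's message was silently corrupted
    payload, corrected = vote_on_replica_messages([b"\x01\x02", b"\x01\x02", b"\xff\x02"])
    ```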

  7. Multiple Intravenous Infusions Phase 2b: Laboratory Study

    PubMed Central

    Pinkney, Sonia; Fan, Mark; Chan, Katherine; Koczmara, Christine; Colvin, Christopher; Sasangohar, Farzan; Masino, Caterina; Easty, Anthony; Trbovich, Patricia

    2014-01-01

    Background: Administering multiple intravenous (IV) infusions to a single patient via infusion pump occurs routinely in health care, but there has been little empirical research examining the risks associated with this practice or ways to mitigate those risks. Objectives: To identify the risks associated with multiple IV infusions and assess the impact of interventions on nurses’ ability to safely administer them. Data Sources and Review Methods: Forty nurses completed infusion-related tasks in a simulated adult intensive care unit, with and without interventions (i.e., repeated-measures design). Results: Errors were observed in completing common tasks associated with the administration of multiple IV infusions (all values from baseline, which was current practice): setting up and programming multiple primary continuous IV infusions (e.g., 11.7% programming errors); identifying IV infusions (e.g., 7.7% line-tracing errors); managing dead volume (e.g., 96.0% flush rate errors following IV syringe dose administration); setting up a secondary intermittent IV infusion (e.g., 11.3% secondary clamp errors); and administering an IV pump bolus (e.g., 11.5% programming errors). Of 10 interventions tested, 6 (1 practice, 3 technology, and 2 educational) significantly decreased or even eliminated errors compared to baseline. Limitations: The simulation of an adult intensive care unit at 1 hospital limited the ability to generalize results. The study results were representative of nurses who received training in the interventions but had little experience using them. The longitudinal effects of the interventions were not studied. Conclusions: Administering and managing multiple IV infusions is a complex and risk-prone activity. However, when a patient requires multiple IV infusions, targeted interventions can reduce identified risks. A combination of standardized practice, technology improvements, and targeted education is required. PMID:26316919

  8. Particle Tracking on the BNL Relativistic Heavy Ion Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell, G. F.

    1986-08-07

    Tracking studies including the effects of random multipole errors alone, as well as the combined effects of random and systematic multipole errors, have been made for RHIC. Initial results for operating at an off-diagonal working point are discussed.

  9. Nonlinear grid error effects on numerical solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Dey, S. K.

    1980-01-01

    Finite difference solutions of nonlinear partial differential equations require discretizations and consequently grid errors are generated. These errors strongly affect stability and convergence properties of difference models. Previously such errors were analyzed by linearizing the difference equations for solutions. Properties of mappings of decadence were used to analyze nonlinear instabilities. Such an analysis is directly affected by initial/boundary conditions. An algorithm was developed, applied to nonlinear Burgers equations, and verified computationally. A preliminary test shows that Navier-Stokes equations may be treated similarly.

  10. Automated Classification of Phonological Errors in Aphasic Language

    PubMed Central

    Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.

    1984-01-01

    Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represent a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, it provides a prototype simulation tool for neurolinguistic research, and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.

  11. Demonstration of Nonlinearity Bias in the Measurement of the Apparent Diffusion Coefficient in Multicenter Trials

    PubMed Central

    Malyarenko, Dariya; Newitt, David; Wilmes, Lisa; Tudorica, Alina; Helmer, Karl G.; Arlinghaus, Lori R.; Jacobs, Michael A.; Jajamovich, Guido; Taouli, Bachir; Yankeelov, Thomas E.; Huang, Wei; Chenevert, Thomas L.

    2015-01-01

    Purpose: Characterize system-specific bias across common magnetic resonance imaging (MRI) platforms for quantitative diffusion measurements in multicenter trials. Methods: Diffusion weighted imaging (DWI) was performed on an ice-water phantom along the superior-inferior (SI) and right-left (RL) orientations spanning ±150 mm. The same scanning protocol was implemented on 14 MRI systems at seven imaging centers. The bias was estimated as a deviation of measured from known apparent diffusion coefficient (ADC) along individual DWI directions. The relative contributions of gradient nonlinearity, shim errors, imaging gradients and eddy currents were assessed independently. The observed bias errors were compared to numerical models. Results: The measured systematic ADC errors scaled quadratically with offset from isocenter, and ranged between −55% (SI) and 25% (RL). Nonlinearity bias was dependent on system design and diffusion gradient direction. Consistent with numerical models, minor ADC errors (±5%) due to shim, imaging and eddy currents were mitigated by double echo DWI and image co-registration of individual gradient directions. Conclusion: The analysis confirms gradient nonlinearity as a major source of spatial DW bias and variability in off-center ADC measurements across MRI platforms, with minor contributions from shim, imaging gradients and eddy currents. The developed protocol enables empiric description of systematic bias in multicenter quantitative DWI studies. PMID:25940607
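
    A minimal sketch of how such a quadratic spatial bias might be characterized along one gradient axis and divided out; the phantom values below are synthetic illustration numbers, not data from the multicenter study.

    ```python
    import numpy as np

    # Synthetic phantom ADC values at several offsets from isocenter, mimicking a
    # quadratic nonlinearity bias (made-up numbers for illustration only)
    offset_mm = np.array([-150.0, -100.0, -50.0, 0.0, 50.0, 100.0, 150.0])
    adc_true = 1.1e-3                                    # nominal ice-water ADC, mm^2/s
    adc_meas = adc_true * (1.0 - 2.0e-5 * offset_mm ** 2)

    # Characterize the relative bias with a quadratic fit along this gradient axis
    rel_bias = adc_meas / adc_true - 1.0
    coeffs = np.polyfit(offset_mm, rel_bias, deg=2)

    def correct_adc(adc_value, offset):
        """Divide out the fitted spatial bias at the voxel's offset from isocenter."""
        return adc_value / (1.0 + np.polyval(coeffs, offset))
    ```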

  12. Torus Approach in Gravity Field Determination from Simulated GOCE Gravity Gradients

    NASA Astrophysics Data System (ADS)

    Liu, Huanling; Wen, Hanjiang; Xu, Xinyu; Zhu, Guangbin

    2016-08-01

    In the Torus approach, observations are projected onto nominal orbits with constant radius and inclination, and lumped coefficients provide a linear relationship between observations and spherical harmonic coefficients. Based on this relationship, a two-dimensional FFT and block-diagonal least-squares adjustment are used to recover the Earth's gravity field model. The Earth's gravity field model complete to degree and order 200 is recovered using simulated satellite gravity gradients on a torus grid, and the degree median error is smaller than 10^-18, which shows the effectiveness of the Torus approach. EGM2008 is employed as a reference model and the gravity field model is resolved using the simulated observations without noise given on GOCE orbits of 61 days. The error from reduction and interpolation can be mitigated by iterations. Due to the polar gap, the precision of low-order coefficients is lower. Without considering these coefficients, the maximum geoid degree error and cumulative error are 0.022 mm and 0.099 mm, respectively. The Earth's gravity field model is also recovered from simulated observations with white noise of 5 mE/Hz^(1/2), which is compared to that from the direct method. In conclusion, it is demonstrated that the Torus approach is a valid method for processing the massive amount of GOCE gravity gradients.

  13. Demonstration of nonlinearity bias in the measurement of the apparent diffusion coefficient in multicenter trials.

    PubMed

    Malyarenko, Dariya I; Newitt, David; J Wilmes, Lisa; Tudorica, Alina; Helmer, Karl G; Arlinghaus, Lori R; Jacobs, Michael A; Jajamovich, Guido; Taouli, Bachir; Yankeelov, Thomas E; Huang, Wei; Chenevert, Thomas L

    2016-03-01

    Characterize system-specific bias across common magnetic resonance imaging (MRI) platforms for quantitative diffusion measurements in multicenter trials. Diffusion weighted imaging (DWI) was performed on an ice-water phantom along the superior-inferior (SI) and right-left (RL) orientations spanning ± 150 mm. The same scanning protocol was implemented on 14 MRI systems at seven imaging centers. The bias was estimated as a deviation of measured from known apparent diffusion coefficient (ADC) along individual DWI directions. The relative contributions of gradient nonlinearity, shim errors, imaging gradients, and eddy currents were assessed independently. The observed bias errors were compared with numerical models. The measured systematic ADC errors scaled quadratically with offset from isocenter, and ranged between -55% (SI) and 25% (RL). Nonlinearity bias was dependent on system design and diffusion gradient direction. Consistent with numerical models, minor ADC errors (± 5%) due to shim, imaging and eddy currents were mitigated by double echo DWI and image coregistration of individual gradient directions. The analysis confirms gradient nonlinearity as a major source of spatial DW bias and variability in off-center ADC measurements across MRI platforms, with minor contributions from shim, imaging gradients and eddy currents. The developed protocol enables empiric description of systematic bias in multicenter quantitative DWI studies. © 2015 Wiley Periodicals, Inc.

  14. Cone-Beam CT Assessment of Interfraction and Intrafraction Setup Error of Two Head-and-Neck Cancer Thermoplastic Masks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velec, Michael; Waldron, John N.; O'Sullivan, Brian

    2010-03-01

    Purpose: To prospectively compare setup error in standard thermoplastic masks and skin-sparing masks (SSMs) modified with low neck cutouts for head-and-neck intensity-modulated radiation therapy (IMRT) patients. Methods and Materials: Twenty head-and-neck IMRT patients were randomized to be treated in a standard mask (SM) or SSM. Cone-beam computed tomography (CBCT) scans, acquired daily after both initial setup and any repositioning, were used for initial and residual interfraction evaluation, respectively. Weekly, post-IMRT CBCT scans were acquired for intrafraction setup evaluation. The population random (sigma) and systematic (SIGMA) errors were compared for SMs and SSMs. Skin toxicity was recorded weekly by use of Radiation Therapy Oncology Group criteria. Results: We evaluated 762 CBCT scans in 11 patients randomized to the SM and 9 to the SSM. Initial interfraction sigma was 1.6 mm or less or 1.1 deg. or less for SM and 2.0 mm or less and 0.8 deg. for SSM. Initial interfraction SIGMA was 1.0 mm or less or 1.4 deg. or less for SM and 1.1 mm or less or 0.9 deg. or less for SSM. These errors were reduced before IMRT with CBCT image guidance, with no significant differences in residual interfraction or intrafraction uncertainties between SMs and SSMs. Intrafraction sigma and SIGMA were less than 1 mm and less than 1 deg. for both masks. Less severe skin reactions were observed in the cutout regions of the SSM compared with non-cutout regions. Conclusions: Interfraction and intrafraction setup error is not significantly different for SSMs and conventional masks in head-and-neck radiation therapy. Mask cutouts should be considered for these patients in an effort to reduce skin toxicity.

  15. AID-initiated purposeful mutations in immunoglobulin genes.

    PubMed

    Goodman, Myron F; Scharff, Matthew D; Romesberg, Floyd E

    2007-01-01

    Exposure brings risk to all living organisms. Using a remarkably effective strategy, higher vertebrates mitigate risk by mounting a complex and sophisticated immune response to counter the potentially toxic invasion by a virtually limitless army of chemical and biological antagonists. Mutations are almost always deleterious, but in the case of antibody diversification there are mutations occurring at hugely elevated rates within the variable (V) and switch regions (SR) of the immunoglobulin (Ig) genes that are responsible for binding to and neutralizing foreign antigens throughout the body. These mutations are truly purposeful. This chapter is centered on activation-induced cytidine deaminase (AID). AID is required for initiating somatic hypermutation (SHM) in the V regions and class switch recombination (CSR) in the SR portions of Ig genes. By converting C --> U, while transcription takes place, AID instigates a cascade of mutational events involving error-prone DNA polymerases, base excision and mismatch repair enzymes, and recombination pathways. Together, these processes culminate in highly mutated antibody genes and the B cells expressing antibodies that have achieved optimal antigenic binding undergo positive selection in germinal centers. We will discuss the biological role of AID in this complex process, primarily in terms of its biochemical properties in relation to SHM in vivo. The chapter also discusses recent advances in experimental methods to characterize antibody dynamics as a function of SHM to help elucidate the role that the AID-induced mutations play in tailoring molecular recognition. The emerging experimental techniques help to address long-standing conundrums concerning evolution-imposed constraints on antibody structure and function.

  16. Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions

    PubMed Central

    Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.

    2010-01-01

    Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with the simulated data and measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approaches global minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and reduces the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256

  17. Automated River Reach Definition Strategies: Applications for the Surface Water and Ocean Topography Mission

    NASA Astrophysics Data System (ADS)

    Frasson, Renato Prata de Moraes; Wei, Rui; Durand, Michael; Minear, J. Toby; Domeneghetti, Alessio; Schumann, Guy; Williams, Brent A.; Rodriguez, Ernesto; Picamilh, Christophe; Lion, Christine; Pavelsky, Tamlin; Garambois, Pierre-André

    2017-10-01

    The upcoming Surface Water and Ocean Topography (SWOT) mission will measure water surface heights and widths for rivers wider than 100 m. At its native resolution, SWOT height errors are expected to be on the order of meters, which prevent the calculation of water surface slopes and the use of slope-dependent discharge equations. To mitigate height and width errors, the high-resolution measurements will be grouped into reaches (˜5 to 15 km), where slope and discharge are estimated. We describe three automated river segmentation strategies for defining optimum reaches for discharge estimation: (1) arbitrary lengths, (2) identification of hydraulic controls, and (3) sinuosity. We test our methodologies on 9 and 14 simulated SWOT overpasses over the Sacramento and the Po Rivers, respectively, which we compare against hydraulic models of each river. Our results show that generally, height, width, and slope errors decrease with increasing reach length. However, the hydraulic controls and the sinuosity methods led to better slopes and often height errors that were either smaller or comparable to those of arbitrary reaches of compatible sizes. Estimated discharge errors caused by the propagation of height, width, and slope errors through the discharge equation were often smaller for sinuosity (on average 8.5% for the Sacramento and 6.9% for the Po) and hydraulic control (Sacramento: 7.3% and Po: 5.9%) reaches than for arbitrary reaches of comparable lengths (Sacramento: 8.6% and Po: 7.8%). This analysis suggests that reach definition methods that preserve the hydraulic properties of the river network may lead to better discharge estimates.
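
    A minimal sketch of first-order error propagation through a Manning-type relation Q ~ A^(5/3) W^(-2/3) S^(1/2), assuming independent reach-averaged errors; the specific relation and the error values below are assumptions for illustration, not the study's discharge equation or results.

    ```python
    import numpy as np

    def discharge_relative_error(rel_err_area, rel_err_width, rel_err_slope):
        """First-order error propagation through a Manning-type relation
        Q ~ A**(5/3) * W**(-2/3) * S**(1/2), assuming independent, zero-mean
        reach-averaged errors so that relative variances add."""
        return np.sqrt((5.0 / 3.0 * rel_err_area) ** 2 +
                       (2.0 / 3.0 * rel_err_width) ** 2 +
                       (0.5 * rel_err_slope) ** 2)

    # e.g. 3% area, 5% width and 10% slope uncertainty for one reach
    print(f"{100 * discharge_relative_error(0.03, 0.05, 0.10):.1f}% relative discharge error")
    ```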

  18. Implicit Monte Carlo with a linear discontinuous finite element material solution and piecewise non-constant opacity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.

    2016-02-23

    Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying both source tilting along with a source tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~1/T^3, but we formulate and test a slight extension for opacities ~1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.

  19. Influence of Forecast Accuracy of Photovoltaic Power Output on Capacity Optimization of Microgrid Composition under 30 min Power Balancing Control

    NASA Astrophysics Data System (ADS)

    Sone, Akihito; Kato, Takeyoshi; Shimakage, Toyonari; Suzuoki, Yasuo

    A microgrid (MG) is one measure for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). If a number of MGs are controlled to maintain a predetermined electricity demand, with RE-based DGs counted as negative demand, they would contribute to supply-demand balancing of the whole electric power system. To construct an MG economically, optimizing the capacity of controllable DGs relative to RE-based DGs is essential. Using a numerical simulation model developed from a demonstration study of an MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as the RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on the capacity optimization. Three forecast cases with different accuracy are compared. The main results are as follows. Even with an ideal forecast method that has no error over each 30 min interval, the required capacity of the NaS battery reaches about 40% of the PVS capacity in order to mitigate the instantaneous forecast error within 30 min. The required capacity to compensate for forecast error is doubled with the actual forecast method. The influence of forecast error can be reduced by adjusting the scheduled power output of the controllable DGs according to the weather forecast. Moreover, the required capacity can be reduced significantly if errors in the balancing control of an MG are acceptable for a few percent of time periods, because periods of large forecast error are infrequent.

  20. Climate change and eHealth: a promising strategy for health sector mitigation and adaptation

    PubMed Central

    Holmner, Åsa; Rocklöv, Joacim; Ng, Nawi; Nilsson, Maria

    2012-01-01

    Climate change is one of today's most pressing global issues. Policies to guide mitigation and adaptation are needed to avoid the devastating impacts of climate change. The health sector is a significant contributor to greenhouse gas emissions in developed countries, and its climate impact in low-income countries is growing steadily. This paper reviews and discusses the literature regarding health sector mitigation potential, known and hypothetical co-benefits, and the potential of health information technology, such as eHealth, in climate change mitigation and adaptation. The promising role of eHealth as an adaptation strategy to reduce societal vulnerability to climate change, and the links between mitigation and adaptation, are also discussed. The topic of environmental eHealth has gained little attention to date, despite its potential to contribute to more sustainable and green health care. A growing number of local and global initiatives on ‘green information and communication technology (ICT)’ are now mentioning eHealth as a promising technology with the potential to reduce emission rates from ICT use. However, uptake of eHealth has been slow because of limitations in technological infrastructure, capacity and political will. Further research on potential emissions reductions and co-benefits with green ICT, in terms of health outcomes and economic effectiveness, would be valuable to guide development and implementation of eHealth in health sector mitigation and adaptation policies. PMID:22679398

  2. A Modified MinMax k-Means Algorithm Based on PSO.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, the exponent parameter and the memory parameter, are involved in its execution. Since different parameter values yield different clustering errors, it is crucial to choose appropriate parameters. The original work provides a practical framework that extends MinMax k-means to adapt the exponent parameter to the data set automatically, and it has been assumed that, once the maximum exponent parameter is set, the procedure can reach the lowest intra-cluster errors. However, our experiments show that this is not always correct. In this paper, we modify the MinMax k-means algorithm using particle swarm optimization (PSO) to determine parameter values that allow the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several widely used data sets under different initializations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm reaches the lowest clustering errors automatically.
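
    As a rough sketch of the idea of letting PSO pick the exponent and memory parameters (the paper's exact MinMax k-means implementation and data sets are not reproduced in this record), the Python example below wraps a simplified MinMax-style weighted k-means in a minimal particle swarm search over (p, beta). The parameter bounds, swarm settings, and toy data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def minmax_kmeans(X, k, p, beta, iters=50):
    """Simplified MinMax-style k-means: clusters with larger variance get larger
    weights, so points avoid them and the worst cluster is penalised.
    p is the exponent parameter, beta the memory (smoothing) parameter.
    Returns the maximum intra-cluster SSE, the quantity MinMax k-means controls."""
    n = X.shape[0]
    centers = X[rng.choice(n, k, replace=False)]
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin((w ** p) * d2, axis=1)      # weighted assignment
        sse = np.zeros(k)
        for j in range(k):
            pts = X[labels == j]
            if len(pts) > 0:
                centers[j] = pts.mean(axis=0)
                sse[j] = ((pts - centers[j]) ** 2).sum()
        new_w = (sse + 1e-12) ** (1.0 / (1.0 - p))     # larger SSE -> larger weight
        new_w /= new_w.sum()
        w = beta * w + (1.0 - beta) * new_w            # memory smoothing
    return sse.max()

def pso(fitness, bounds, n_particles=10, iters=30):
    """Minimal particle swarm optimisation over a box-bounded parameter vector."""
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, (n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(x) for x in pos])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(x) for x in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# toy data: three Gaussian blobs
X = np.vstack([rng.normal(c, 0.5, (100, 2)) for c in ([0, 0], [4, 0], [0, 4])])
best, best_f = pso(lambda q: minmax_kmeans(X, k=3, p=q[0], beta=q[1]),
                   bounds=[(0.05, 0.5), (0.0, 0.9)])
print("best (p, beta):", best, "max intra-cluster SSE:", best_f)
```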

  3. Chasing the TIRS ghosts: calibrating the Landsat 8 thermal bands

    NASA Astrophysics Data System (ADS)

    Schott, John R.; Gerace, Aaron; Raqueno, Nina; Ientilucci, Emmett; Raqueno, Rolando; Lunsford, Allen W.

    2014-10-01

    The Thermal Infrared Sensor (TIRS) on board Landsat 8 has exhibited a number of anomalous characteristics that have made it difficult to calibrate. These anomalies include differences in the radiometric appearance across the blackbody pre- and post-launch, variations in the cross-calibration ratios between detectors that overlap on adjacent arrays (resulting in banding), and bias errors in the absolute calibration that can change spatially/temporally. Several updates to the TIRS calibration procedures were made in the months after launch to attempt to mitigate the impact of these anomalies on flat fielding (cosmetic removal of banding and striping) and mean-level bias correction. As a result, banding and striping variations have been reduced but not eliminated, and residual bias errors in band 10 should be less than 2 degrees for most targets but can be significantly larger in some cases and are often larger in band 11. These corrections have all been essentially ad hoc, without understanding or properly accounting for the source of the anomalies, which was at the time unknown. This paper addresses the procedures that have been undertaken to better characterize the nature of these anomalies, attempt to identify their source(s), quantify the phenomenon responsible for them, and develop correction procedures to more effectively remove their impact on the radiometric products. Our current understanding points to all of the anomalies being the result of internal reflections of energy from outside the target detector's field-of-view, and often outside the telescope field-of-view, onto the target detector. This paper discusses how members of the Landsat calibration team discovered the clues that led to the identification of these "ghosts", how they are now being characterized, and how their impact can hopefully be corrected eventually. This includes the use of lunar scans to generate initial maps of influence regions, the use of long-path overlap ratios to explore sources of change, and the use of variations in bias calculated from truth sites to quantify influences from the surround on absolute bias errors.

  4. Inter-track interference mitigation with two-dimensional variable equalizer for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Vijaya Kumar, B. V. K.

    2017-05-01

    The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with low-density parity-check (LDPC) decoder. Then using the estimated bit information from main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.
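
    As a rough illustration of the first step the abstract describes, obtaining fixed 2D equalizer coefficients with a 1D target from PRBS training, the Python sketch below fits a 3x5 equalizer by least squares on a toy three-track channel. The channel taps, the target [1, 2, 1], the noise level, and all variable names are invented for this example and are not from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)

# toy BPMR-like readback: 3 tracks of random (PRBS-like) +/-1 bits through a 2D ISI channel
bits = rng.integers(0, 2, size=(3, 2000)) * 2 - 1
channel = np.array([[0.1, 0.3, 0.1],     # adjacent-track interference
                    [0.2, 1.0, 0.2],     # main-track ISI
                    [0.1, 0.3, 0.1]])
readback = convolve2d(bits, channel, mode="same", boundary="wrap")
readback += 0.05 * rng.standard_normal(readback.shape)   # electronics noise

# desired output: main-track bits shaped by a short 1D target
target = np.array([1.0, 2.0, 1.0])
desired = np.convolve(bits[1], target, mode="same")

# least-squares fit of a 3x5 2D equalizer so equalized readback matches the 1D target output
half = 2
rows = [readback[:, t - half:t + half + 1].ravel()        # 3x5 window around bit t
        for t in range(half, bits.shape[1] - half)]
A = np.asarray(rows)
b = desired[half:bits.shape[1] - half]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
equalizer = coeffs.reshape(3, 2 * half + 1)

print("2D equalizer taps:\n", np.round(equalizer, 3))
print("residual RMS:", np.sqrt(np.mean((A @ coeffs - b) ** 2)))
```

    The paper's variable equalizer extends this idea by storing several such tap sets, one per estimated ITI pattern, and selecting among them after the first detection pass.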

  5. FRA/Volpe Center trespass prevention research study

    DOT National Transportation Integrated Search

    2011-11-08

    Presentation on the Trespass Prevention Research Study. The project was initiated in August 2009 with the goal of eliminating trespasser incidents and fatalities and to provide national guidance on trespass mitigation. The city of West Palm Beach Flo...

  6. INITIATIVES AND TREATMENT OF MERCURY IN ABANDONED MINES

    EPA Science Inventory

    This presentation discusses EPA's research and mitigation activities for mercury-contaminated mine sites at the International meeting on mercury and artisanal gold mining in Lima, Peru. The topics discussed included the toxicological and environmental tasks associated ...

  7. Mobile Source Mitigation Opportunities

    EPA Science Inventory

    For Frank Princiotta’s book, Global Climate Change—The Technology Challenge. The objective of this chapter is to review this history, focusing initially on the historical growth patterns and the resulting environmental consequences; then on the current control efforts around the ...

  8. Medical Errors Reduction Initiative

    DTIC Science & Technology

    2009-03-01

    enough data was collected to have any statistical significance or determine impact on latent error in the process of blood transfusion. Bedside...of adverse drug events. JAMA 1995; 274: 35-43. Leape, L.L., Brennan, T.A., & Laird, N.M. (1991) The nature of adverse events in hospitalized...Background Medical errors are a significant cause of morbidity and mortality among hospitalized patients (Kohn, Corrigan and Donaldson, 2000; Leape, Brennan

  9. Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines

    NASA Technical Reports Server (NTRS)

    Le, Martin; Zheng, Xin; Katanyoutant, Sunant

    2008-01-01

    Single-event upsets (SEUs) pose great threats to the state-machine control logic of avionic systems, which is frequently used to control sequences of events and to qualify protocols. The risks of SEUs manifest in two ways: (a) the state machine's state information is changed, causing the state machine to transition unexpectedly to another state; (b) due to the asynchronous nature of an SEU, the state machine's state registers become metastable, temporarily causing any combinational logic associated with the metastable registers to malfunction. Effect (a) can be mitigated with methods such as triple modular redundancy (TMR). However, effect (b) cannot be eliminated and can degrade the effectiveness of any mitigation of effect (a). Although there is no way to completely eliminate the risk of SEU-induced errors, the risk can be made very small by combining very fast state-machine logic with error-detection logic. Accordingly, one of the two main elements of the present method is to design the fastest state-machine logic circuitry by basing it on the fastest generic state-machine design, that of a one-hot state machine. The other main design element is to design fast error-detection logic circuitry and to optimize it for implementation in a field-programmable gate array (FPGA) architecture: in the resulting design, the one-hot state machine is fitted with a multiple-input XNOR gate for detection of illegal states. The XNOR gate is implemented with lookup tables and with pipelines for high speed. In this method, the task of designing all the logic must be performed manually because no currently available logic-synthesis software tool can produce optimal solutions for design problems of this type. However, some assistance is provided by a script, written for this purpose in the Python language (an object-oriented interpretive computer language), that automatically generates hardware description language (HDL) code from state-transition rules.
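
    The flight design implements the illegal-state check as pipelined XNOR logic in FPGA lookup tables; the record does not include that HDL, so the short Python sketch below (with a hypothetical helper name) only models the legality condition the detection logic enforces: a one-hot state register is legal when exactly one bit is set.

```python
def is_legal_one_hot(state: int, n_bits: int) -> bool:
    """Software model of the illegal-state check for a one-hot state machine:
    the register is legal only when exactly one of its n_bits flip-flops is set.
    An SEU that flips a single bit of a legal state leaves zero or two bits set,
    which this condition flags."""
    in_range = 0 <= state < (1 << n_bits)
    exactly_one_bit = state != 0 and (state & (state - 1)) == 0
    return in_range and exactly_one_bit

# e.g. a 4-state one-hot machine: 0b0001, 0b0010, 0b0100, 0b1000 are legal
assert is_legal_one_hot(0b0100, 4)
assert not is_legal_one_hot(0b0110, 4)   # SEU set an extra bit
assert not is_legal_one_hot(0b0000, 4)   # SEU cleared the active bit
```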

  10. Assessment and mitigation of errors associated with a large-scale field investigation of methane emissions from the Marcellus Shale

    NASA Astrophysics Data System (ADS)

    Caulton, D.; Golston, L.; Li, Q.; Bou-Zeid, E.; Pan, D.; Lane, H.; Lu, J.; Fitts, J. P.; Zondlo, M. A.

    2015-12-01

    Recent work suggests that the distribution of methane emissions from fracking operations is skewed, with a small percentage of emitters contributing a large proportion of the total emissions. In order to provide statistically robust distributions of emitters and determine the presence of super-emitters, errors in current techniques need to be constrained and mitigated. The Marcellus Shale, the most productive natural gas shale field in the United States, has received less intense focus for well-level emissions and is investigated here to provide the distribution of methane emissions. In July 2015, approximately 250 unique well pads were sampled using the Princeton Atmospheric Chemistry Mobile Acquisition Node (PAC-MAN). This mobile lab includes a Garmin GPS unit, a Vaisala weather station (WTX520), a LICOR 7700 CH4 open-path sensor and a LICOR 7500 CO2/H2O open-path sensor. Sampling sites were preselected based on wind direction, sampling distance and elevation grade. All sites were sampled during low boundary-layer conditions (600-1000 and 1800-2200 local time). The majority of sites were sampled 1-3 times, while selected test sites were sampled multiple times or resampled several times during the day. For selected sites, a sampling tower was constructed consisting of a Metek uSonic-3 Class A sonic anemometer and an additional LICOR 7700 and 7500. Data were recorded for at least one hour at these sites. A robust study and inter-comparison of different methodologies will be presented. The Gaussian plume model will be used to calculate fluxes for all sites and to compare results from test sites with multiple passes. Tower data are used to constrain the Gaussian plume model. Additionally, large eddy simulation (LES) modeling will be used to calculate emissions from the tower sites. Alternative techniques will also be discussed. Results from these techniques will be compared to identify best practices and provide robust error estimates.
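
    The record does not give the team's plume parameterization, so the following Python sketch only illustrates the generic Gaussian plume flux calculation mentioned above: because concentration is linear in the source rate Q, a measured downwind enhancement can be inverted for an emission estimate. The Briggs-type dispersion coefficients, measurement geometry, and wind speed here are illustrative assumptions, not values from the study.

```python
import math

def gaussian_plume_conc(Q, u, x, y, z, H, stability="D"):
    """Gaussian plume concentration (kg/m^3) at (x, y, z) downwind of a point
    source of strength Q (kg/s), wind speed u (m/s), effective release height
    H (m).  Sigmas use approximate Briggs-type power laws for neutral (D)
    stability; coefficients are illustrative."""
    a, b = {"D": (0.08, 0.06)}[stability]
    sigma_y = a * x / math.sqrt(1 + 0.0001 * x)
    sigma_z = b * x / math.sqrt(1 + 0.0015 * x)
    return (Q / (2 * math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * (math.exp(-(z - H)**2 / (2 * sigma_z**2))
               + math.exp(-(z + H)**2 / (2 * sigma_z**2))))

def invert_for_emission(c_measured, u, x, y, z, H):
    """Invert the plume model for the source rate: C is linear in Q."""
    return c_measured / gaussian_plume_conc(1.0, u, x, y, z, H)

# e.g. an enhancement of 1e-6 kg/m^3 measured 200 m downwind, on the plume axis, at 2 m height
print(invert_for_emission(1e-6, u=3.0, x=200.0, y=0.0, z=2.0, H=3.0), "kg/s")
```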

  11. Task errors by emergency physicians are associated with interruptions, multitasking, fatigue and working memory capacity: a prospective, direct observation study.

    PubMed

    Westbrook, Johanna I; Raban, Magdalena Z; Walter, Scott R; Douglas, Heather

    2018-01-09

    Interruptions and multitasking have been demonstrated in experimental studies to reduce individuals' task performance. These behaviours are frequently used by clinicians in high-workload, dynamic clinical environments, yet their effects have rarely been studied. To assess the relative contributions of interruptions and multitasking by emergency physicians to prescribing errors. 36 emergency physicians were shadowed over 120 hours. All tasks, interruptions and instances of multitasking were recorded. Physicians' working memory capacity (WMC) and preference for multitasking were assessed using the Operation Span Task (OSPAN) and Inventory of Polychronic Values. Following observation, physicians were asked about their sleep in the previous 24 hours. Prescribing errors were used as a measure of task performance. We performed multivariate analysis of prescribing error rates to determine associations with interruptions and multitasking, also considering physician seniority, age, psychometric measures, workload and sleep. Physicians experienced 7.9 interruptions/hour. 28 clinicians were observed prescribing 239 medication orders which contained 208 prescribing errors. While prescribing, clinicians were interrupted 9.4 times/hour. Error rates increased significantly if physicians were interrupted (rate ratio (RR) 2.82; 95% CI 1.23 to 6.49) or multitasked (RR 1.86; 95% CI 1.35 to 2.56) while prescribing. Having below-average sleep showed a >15-fold increase in clinical error rate (RR 16.44; 95% CI 4.84 to 55.81). WMC was protective against errors; for every 10-point increase on the 75-point OSPAN, a 19% decrease in prescribing errors was observed. There was no effect of polychronicity, workload, physician gender or above-average sleep on error rates. Interruptions, multitasking and poor sleep were associated with significantly increased rates of prescribing errors among emergency physicians. WMC mitigated the negative influence of these factors to an extent. These results confirm experimental findings in other fields and raise questions about the acceptability of the high rates of multitasking and interruption in clinical environments. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.

    PubMed

    Chen, Jing; Zhang, Yi; Xue, Wei

    2018-04-28

    In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. First, unlike conventional fingerprint-based methods, the UILoc system builds a fingerprint database automatically, without any site survey, and applies this database in its fingerprint localization algorithm. Second, since the initial position is vital to the system, UILoc provides a basic location estimate through pedestrian dead reckoning (PDR). To make this initial estimate accurate, the paper proposes an initial localization module: a weighted fusion algorithm that combines a k-nearest neighbors (KNN) algorithm with a least-squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that UILoc provides accurate positioning, with an average localization error of about 1.1 m in the steady state and a maximum error of 2.77 m.
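
    As an illustration of just the weighted KNN ingredient of the initial localization module (the full UILoc fusion with PDR, iBeacon ranging and least squares is beyond this sketch), the Python example below performs weighted k-nearest-neighbour fingerprint matching over a small hypothetical RSSI database; the function name, beacon count and coordinates are assumptions made for the example.

```python
import numpy as np

def wknn_locate(rssi_db, fingerprints, positions, k=3):
    """Weighted k-nearest-neighbour fingerprint localization: find the k
    reference points whose stored RSSI vectors are closest to the observed
    vector and average their coordinates, weighted by inverse distance."""
    d = np.linalg.norm(fingerprints - rssi_db, axis=1)   # distances in RSSI space
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                            # closer fingerprints weigh more
    w /= w.sum()
    return w @ positions[idx]

# hypothetical database: RSSI (dBm) from 4 beacons at 5 surveyed points (x, y in metres)
fingerprints = np.array([[-60, -72, -80, -75],
                         [-65, -60, -78, -80],
                         [-75, -62, -65, -82],
                         [-80, -70, -60, -70],
                         [-70, -80, -72, -60]], dtype=float)
positions = np.array([[0, 0], [2, 0], [4, 1], [4, 4], [1, 4]], dtype=float)
print(wknn_locate(np.array([-63.0, -61.0, -77.0, -79.0]), fingerprints, positions))
```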

  13. A wedge strategy for mitigation of urban warming in future climate scenarios

    NASA Astrophysics Data System (ADS)

    Zhao, L.

    2016-12-01

    Heat stress is one of the most severe climate threats to human society in a future, warmer world. The situation is further compounded in urban areas by the urban heat island (UHI). Because the majority of the world's population is projected to live in cities, there is a pressing need to find effective solutions to the high-temperature problem. It is now recognized that, in addition to the traditional emphasis on preparedness to cope with heat stress, these solutions should include active modifications of urban land form to reduce urban temperatures. Here we use an urban climate model to investigate the effectiveness of these active methods in mitigating urban heat, both individually and collectively. By adopting highly reflective roofs citywide, almost all the cities in the USA and in southern Canada are transformed into cold islands or "white oases" where the daytime surface temperatures are lower than those in the surrounding rural land. The average oasis effect is -3.4 ± 0.3 K (mean ± 1 standard error) for the period 2071-2100 under the RCP4.5 scenario. A UHI mitigation wedge strategy consisting of cool roofs, street vegetation and reflective pavement has the potential to eliminate the daytime UHI plus the greenhouse gas-induced warming.

  14. Damage-mitigating control of aircraft for high performance and life extension

    NASA Astrophysics Data System (ADS)

    Caplin, Jeffrey

    1998-12-01

    A methodology is proposed for the synthesis of a Damage-Mitigating Control System for a high-performance fighter aircraft. The design of such a controller involves consideration of damage to critical points of the structure, as well as the performance requirements of the aircraft. This research is interdisciplinary, and brings existing knowledge in the fields of unsteady aerodynamics, structural dynamics, fracture mechanics, and control theory together to formulate a new approach towards aircraft flight controller design. A flexible wing model is formulated using the Finite Element Method, and the important mode shapes and natural frequencies are identified. The Doublet Lattice Method is employed to develop an unsteady flow model for computation of the unsteady aerodynamic loads acting on the wing due to rigid-body maneuvers and structural deformation. These two models are subsequently incorporated into a pre-existing nonlinear rigid-body aircraft flight-dynamic model. A family of robust Damage-Mitigating Controllers is designed using the Hinfinity-optimization and mu-synthesis method. In addition to weighting the error between the ideal performance and the actual performance of the aircraft, weights are also placed on the strain amplitude at the root of each wing. The results show significant savings in fatigue life of the wings while retaining the dynamic performance of the aircraft.

  15. Inter-model analysis of tsunami-induced coastal currents

    NASA Astrophysics Data System (ADS)

    Lynett, Patrick J.; Gately, Kara; Wilson, Rick; Montoya, Luis; Arcas, Diego; Aytore, Betul; Bai, Yefei; Bricker, Jeremy D.; Castro, Manuel J.; Cheung, Kwok Fai; David, C. Gabriel; Dogan, Gozde Guney; Escalante, Cipriano; González-Vida, José Manuel; Grilli, Stephan T.; Heitmann, Troy W.; Horrillo, Juan; Kânoğlu, Utku; Kian, Rozita; Kirby, James T.; Li, Wenwen; Macías, Jorge; Nicolsky, Dmitry J.; Ortega, Sergio; Pampell-Manis, Alyssa; Park, Yong Sung; Roeber, Volker; Sharghivand, Naeimeh; Shelby, Michael; Shi, Fengyan; Tehranirad, Babak; Tolkova, Elena; Thio, Hong Kie; Velioğlu, Deniz; Yalçıner, Ahmet Cevdet; Yamazaki, Yoshiki; Zaytsev, Andrey; Zhang, Y. J.

    2017-06-01

    To help produce accurate and consistent maritime hazard products, the National Tsunami Hazard Mitigation Program organized a benchmarking workshop to evaluate the numerical modeling of tsunami currents. Thirteen teams of international researchers, using a set of tsunami models currently utilized for hazard mitigation studies, presented results for a series of benchmarking problems; these results are summarized in this paper. Comparisons focus on physical situations where the currents are shear and separation driven, and are thus de-coupled from the incident tsunami waveform. In general, we find that models of increasing physical complexity provide better accuracy, and that low-order three-dimensional models are superior to high-order two-dimensional models. Inside separation zones and in areas strongly affected by eddies, the magnitude of both model-data errors and inter-model differences can be the same as the magnitude of the mean flow. Thus, we make arguments for the need of an ensemble modeling approach for areas affected by large-scale turbulent eddies, where deterministic simulation may be misleading. As a result of the analyses presented herein, we expect that tsunami modelers now have a better awareness of their ability to accurately capture the physics of tsunami currents, and therefore a better understanding of how to use these simulation tools for hazard assessment and mitigation efforts.

  16. Robust Damage-Mitigating Control of Aircraft for High Performance and Structural Durability

    NASA Technical Reports Server (NTRS)

    Caplin, Jeffrey; Ray, Asok; Joshi, Suresh M.

    1999-01-01

    This paper presents the concept and a design methodology for robust damage-mitigating control (DMC) of aircraft. The goal of DMC is to simultaneously achieve high performance and structural durability. The controller design procedure involves consideration of damage at critical points of the structure, as well as the performance requirements of the aircraft. An aeroelastic model of the wings has been formulated and is incorporated into a nonlinear rigid-body model of aircraft flight-dynamics. Robust damage-mitigating controllers are then designed using the H(infinity)-based structured singular value (mu) synthesis method based on a linearized model of the aircraft. In addition to penalizing the error between the ideal performance and the actual performance of the aircraft, frequency-dependent weights are placed on the strain amplitude at the root of each wing. Using each controller in turn, the control system is put through an identical sequence of maneuvers, and the resulting (varying amplitude cyclic) stress profiles are analyzed using a fatigue crack growth model that incorporates the effects of stress overload. Comparisons are made to determine the impact of different weights on the resulting fatigue crack damage in the wings. The results of simulation experiments show significant savings in fatigue life of the wings while retaining the dynamic performance of the aircraft.

  17. Policy intervention for arsenic mitigation in drinking water in rural habitations in India: achievements and challenges.

    PubMed

    Shrivastava, Brajesh K

    2016-10-01

    This article provides an updated status of the arsenic-affected rural habitations in India, summarizes the policy initiatives of the Ministry of Drinking Water & Sanitation (Government of India), reviews the technologies for arsenic treatment and analyses the progress made by states in tackling arsenic problems in rural habitations. It also provides a list of constraints based on experience and recommends measures to tackle arsenic problems in a holistic manner. It is expected that the paper will be useful for policy formulators in states, non-government organizations, researchers at academic and scientific institutions and programme managers working in the area of arsenic mitigation in drinking water, especially in developing countries, as it provides better insights than other available information in India on mitigating arsenic problems in drinking water in rural areas.

  18. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbee, D; McCarthy, A; Galavis, P

    Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) that enables dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g., minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and the plan revision/retirement rate in the Aria database over monthly intervals. Results: The number of potential failure modes the PlanCheck script is currently capable of checking for is as follows: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed rates of plan documents with errors and of revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse Scripting API enabled plan checks to occur within the planning system, resulting in a reduction in error rates and improved efficiency. Future work includes: initiating a full FMEA for the planning workflow, extending the categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.
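
    The plugin itself is written in C# against the Eclipse Scripting API, which is not reproduced in this record; the Python sketch below only illustrates the generic rule-based pattern (run a list of checks before plan approval and report human-readable failures). The Plan/Beam fields, thresholds, and check names are made-up placeholders, not the PlanCheck categories or clinical policy.

```python
from dataclasses import dataclass, field

@dataclass
class Beam:
    mu: float
    wedge: str | None = None

@dataclass
class Plan:
    patient_orientation: str
    dose_grid_mm: float
    beams: list[Beam] = field(default_factory=list)

def check_plan(plan: Plan) -> list[str]:
    """Run simple rule checks and return human-readable failures.
    Thresholds here (20 MU for EDW fields, 3 mm grid) are illustrative only."""
    failures = []
    if plan.dose_grid_mm > 3.0:
        failures.append(f"Dose grid {plan.dose_grid_mm} mm exceeds 3 mm")
    if plan.patient_orientation not in ("HFS", "HFP", "FFS", "FFP"):
        failures.append(f"Unexpected patient orientation {plan.patient_orientation}")
    for i, beam in enumerate(plan.beams):
        if beam.wedge == "EDW" and beam.mu < 20:
            failures.append(f"Beam {i}: EDW field with only {beam.mu} MU")
    return failures

plan = Plan("HFS", dose_grid_mm=2.5, beams=[Beam(mu=12, wedge="EDW"), Beam(mu=150)])
for msg in check_plan(plan):
    print("FAIL:", msg)
```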

  20. Structural interpretation in composite systems using powder X-ray diffraction: applications of error propagation to the pair distribution function.

    PubMed

    Moore, Michael D; Shi, Zhenqi; Wildfong, Peter L D

    2010-12-01

    To develop a method for drawing statistical inferences from differences between multiple experimental pair distribution function (PDF) transforms of powder X-ray diffraction (PXRD) data. The appropriate treatment of initial PXRD error estimates using traditional error-propagation algorithms was tested using Monte Carlo simulations on amorphous ketoconazole. An amorphous felodipine:polyvinyl pyrrolidone:vinyl acetate (PVPva) physical mixture was prepared to define an error threshold. Co-solidified products of felodipine:PVPva and terfenadine:PVPva were prepared using a melt-quench method and subsequently analyzed using PXRD and PDF. Differential scanning calorimetry (DSC) was used as an additional characterization method. The appropriate manipulation of initial PXRD error estimates through the PDF transform was confirmed using the Monte Carlo simulations for amorphous ketoconazole. The felodipine:PVPva physical-mixture PDF analysis determined ±3σ to be an appropriate error threshold. Using the PDF and error-propagation principles, the felodipine:PVPva co-solidified product was determined to be completely miscible, and the terfenadine:PVPva co-solidified product, despite having the appearance of an amorphous molecular solid dispersion by DSC, was determined to be phase-separated. Statistically based inferences were successfully drawn from PDF transforms of PXRD patterns obtained from composite systems. The principles applied herein may be universally adapted to many different systems and provide a fundamentally sound basis for drawing structural conclusions from PDF studies.
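
    The abstract describes validating analytical error propagation against Monte Carlo simulations; the underlying ketoconazole data are not available here, so the Python sketch below fabricates a stand-in reduced structure function and 2% error bars purely to illustrate Monte Carlo propagation of PXRD uncertainties through a sine-transform PDF. Function names, the toy F(Q), and the Q and r grids are assumptions for this example.

```python
import numpy as np

rng = np.random.default_rng(2)

def pdf_transform(q, fq, r):
    """Discrete approximation of the PDF sine transform
    G(r) = (2/pi) * integral of Q[S(Q)-1] sin(Qr) dQ."""
    dq = q[1] - q[0]
    return (2.0 / np.pi) * (fq[None, :] * np.sin(np.outer(r, q))).sum(axis=1) * dq

def propagate_mc(q, fq, sigma_fq, r, n_draws=500):
    """Monte Carlo propagation: resample the reduced structure function within
    its error bars and report the pointwise mean and spread of G(r)."""
    draws = fq + sigma_fq * rng.standard_normal((n_draws, fq.size))
    g = np.array([pdf_transform(q, d, r) for d in draws])
    return g.mean(axis=0), g.std(axis=0)

# toy reduced structure function with 2% error bars (stand-in for Q[S(Q)-1])
q = np.linspace(0.5, 20.0, 400)            # 1/angstrom
fq = np.exp(-0.02 * q) * np.sin(3.0 * q)
sigma_fq = 0.02 * np.ones_like(fq)
r = np.linspace(1.0, 10.0, 200)            # angstrom
g_mean, g_sigma = propagate_mc(q, fq, sigma_fq, r)
print("max +/-3 sigma band half-width in G(r):", 3 * g_sigma.max())
```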
