Sample records for error protection uep

  1. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    NASA Astrophysics Data System (ADS)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
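
    The packet-level mechanism above lends itself to a small illustration. The sketch below is a simplification of the idea rather than the authors' algorithm: a single XOR parity packet recovers any one lost data packet, whereas ER-UEP generates scalable parity packets and optimally splits the rate budget between data and parity.

    # Minimal sketch (Python) of packet-level loss protection for a group of
    # equal-length data packets; one XOR parity packet recovers one loss.
    def make_parity(packets):
        """XOR equal-length data packets into one parity packet."""
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                parity[i] ^= b
        return bytes(parity)

    def recover_one_loss(received, parity, n):
        """Rebuild a single missing packet index from the parity packet."""
        missing = [i for i in range(n) if i not in received]
        if len(missing) == 1:
            rebuilt = bytearray(parity)
            for pkt in received.values():
                for i, b in enumerate(pkt):
                    rebuilt[i] ^= b
            received[missing[0]] = bytes(rebuilt)
        return received

    data = [bytes([i]) * 8 for i in range(4)]              # four toy FGS packets
    parity = make_parity(data)
    lossy = {i: p for i, p in enumerate(data) if i != 2}   # packet 2 lost
    assert recover_one_loss(lossy, parity, 4)[2] == data[2]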

  2. Adaptive UEP and Packet Size Assignment for Scalable Video Transmission over Burst-Error Channels

    NASA Astrophysics Data System (ADS)

    Lee, Chen-Wei; Yang, Chu-Sing; Su, Yih-Ching

    2006-12-01

    This work proposes an adaptive unequal error protection (UEP) and packet size assignment scheme for scalable video transmission over a burst-error channel. An analytic model is developed to evaluate the impact of channel bit error rate on the quality of streaming scalable video. A video transmission scheme, which combines the adaptive assignment of packet size with unequal error protection to increase the end-to-end video quality, is proposed. Several distinct scalable video transmission schemes over burst-error channels have been compared, and the simulation results reveal that the proposed schemes can react to varying channel conditions with milder and smoother quality degradation.

  3. On codes with multi-level error-correction capabilities

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1987-01-01

    In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.
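
    A toy simulation makes the unequal-protection idea concrete. This is an illustration only, not a construction from the paper: the significant bits receive a rate-1/3 repetition code while the remaining bits are sent uncoded, and both classes see the same binary symmetric channel.

    # Two-level protection demo (Python): repetition coding for important
    # bits, no coding for the rest, over a binary symmetric channel.
    import random

    random.seed(1)
    p = 0.05              # BSC crossover probability
    trials = 100_000

    def bsc(bit):
        return bit ^ (random.random() < p)

    err_protected = err_plain = 0
    for _ in range(trials):
        bit = random.randint(0, 1)
        votes = sum(bsc(bit) for _ in range(3))   # 3x repetition
        err_protected += (votes >= 2) != bit      # majority decode
        err_plain += bsc(bit) != bit              # uncoded bit
    print(f"protected BER   ~ {err_protected / trials:.4f}")  # about 3p^2
    print(f"unprotected BER ~ {err_plain / trials:.4f}")      # about p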

  4. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distance at least 3 are found. For those codes for which only upper bounds on the unequal error protection capabilities can be computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.

  5. Performance Analysis of Direct-Sequence Code-Division Multiple-Access Communications with Asymmetric Quadrature Phase-Shift-Keying Modulation

    NASA Technical Reports Server (NTRS)

    Wang, C.-W.; Stark, W.

    2005-01-01

    This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.

  6. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1994-01-01

    The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_{k-1}), where d_j, the jth effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the jth position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective free distance vector is dependent on the encoder realization.
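
    The definition can be checked by brute force on small encoders. In the sketch below, the rate-2/3 generator matrix is an arbitrary assumption (not an encoder from the paper), and truncating the search to short input blocks makes each d_j an estimate from above rather than the exact effective free distance.

    # Brute-force estimate (Python) of the effective free distance vector of
    # a small rate-2/3 convolutional encoder with an assumed generator matrix.
    from itertools import product

    # G[i][j] = polynomial from input i to output j, bit t = coeff of D^t
    G = [[0b101, 0b111, 0b011],
         [0b110, 0b001, 0b101]]
    K, N, MEM, L = 2, 3, 2, 6        # inputs, outputs, memory, input length

    def code_weight(u):
        """Hamming weight of the code sequence for input streams u[0], u[1]."""
        weight = 0
        for t in range(L + MEM):                 # include the encoder tail
            for j in range(N):
                bit = 0
                for i in range(K):
                    for d in range(MEM + 1):
                        if (G[i][j] >> d) & 1 and 0 <= t - d < L:
                            bit ^= u[i][t - d]
                weight += bit
        return weight

    d_eff = []
    for j in range(K):
        best = None
        for bits in product([0, 1], repeat=K * L):
            u = [list(bits[:L]), list(bits[L:])]
            if any(u[j]):                        # a '1' in the jth input stream
                w = code_weight(u)
                best = w if best is None else min(best, w)
        d_eff.append(best)
    print("effective free distance estimates:", d_eff)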

  7. A human reliability based usability evaluation method for safety-critical software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, R. L.; Tran, T. Q.; Gertman, D. I.

    2006-07-01

    Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with the human reliability analysis method of SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at the usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis for heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach to standardize the priority weighting of the UEP in an effort to improve the method's reliability.

  8. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  9. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post-processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized, and the technical papers are included in the appendices.

  10. Advancing Usability Evaluation through Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; David I. Gertman

    2005-07-01

    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
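
    The quantification step can be sketched as a simple multiplicative adjustment in the spirit of SPAR-H. The multiplier table and nominal probability below are hypothetical placeholders, not values from the paper or from SPAR-H tables.

    # Schematic UEP computation (Python): a nominal error probability is
    # scaled by multipliers attached to violated heuristics (all values are
    # illustrative assumptions).
    NOMINAL_ERROR_PROB = 0.01

    heuristic_multipliers = {        # hypothetical heuristic -> multiplier
        "visibility_of_system_status": 5.0,
        "match_system_real_world": 2.0,
        "error_prevention": 10.0,
    }

    def usability_error_probability(violated):
        uep = NOMINAL_ERROR_PROB
        for h in violated:
            uep *= heuristic_multipliers[h]
        return min(uep, 1.0)         # cap at a valid probability

    print(usability_error_probability(["error_prevention"]))   # 0.1
    print(usability_error_probability(
        ["error_prevention", "visibility_of_system_status"]))  # 0.5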

  11. Assurance of energy efficiency and data security for ECG transmission in BASNs.

    PubMed

    Ma, Tao; Shrestha, Pradhumna Lal; Hempel, Michael; Peng, Dongming; Sharif, Hamid; Chen, Hsiao-Hwa

    2012-04-01

    With the technological advancement in body area sensor networks (BASNs), low-cost, high-quality electrocardiographic (ECG) diagnosis systems have become important equipment for healthcare service providers. However, energy consumption and data security with ECG systems in BASNs are still two major challenges to tackle. In this study, we investigate the properties of compressed ECG data for energy saving in an effort to devise a selective encryption mechanism and a two-rate unequal error protection (UEP) scheme. The proposed selective encryption mechanism provides a simple and yet effective security solution for an ECG sensor-based communication platform, where only one percent of the data is encrypted without compromising ECG data security. This encrypted portion is essential to ECG data quality because of its disproportionately important contribution to distortion reduction. The two-rate UEP scheme achieves a significant additional energy saving due to its unequal investment of communication energy in the outcomes of the selective encryption, and thus it maintains high ECG data transmission quality. Our results show improvements in communication energy saving of about 40%, and demonstrate a higher transmission quality and security measured in terms of wavelet-based weighted percent root-mean-squared difference.
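
    A minimal sketch of the selective-encryption half of the scheme, under the stated one-percent split. Two assumptions are mine: the distortion-critical bytes are taken to sit at the head of the compressed stream, and a SHA-256 counter keystream stands in for whatever cipher the authors used.

    # Selective encryption sketch (Python): only the first ~1% of the
    # compressed stream is XORed with a keystream; the placeholder cipher
    # and head-of-stream assumption are illustrative, not from the paper.
    import hashlib

    def keystream(key, n):
        out = bytearray()
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:n])

    def selective_encrypt(stream, key, frac=0.01):
        cut = max(1, int(len(stream) * frac))
        ks = keystream(key, cut)
        head = bytes(b ^ k for b, k in zip(stream[:cut], ks))
        return head + stream[cut:]            # 99% left untouched

    ecg = bytes(range(256)) * 40              # stand-in for compressed ECG
    enc = selective_encrypt(ecg, b"secret")
    assert selective_encrypt(enc, b"secret") == ecg   # XOR is symmetric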

  12. Isolation and characterization of a ubiquitin extension protein gene (JcUEP) promoter from Jatropha curcas.

    PubMed

    Tao, Yan-Bin; He, Liang-Liang; Niu, Long-Jian; Xu, Zeng-Fu

    2015-04-01

    The JcUEP promoter is constitutively active in the biofuel plant Jatropha curcas, and is an alternative to the widely used CaMV35S promoter for driving constitutive overexpression of transgenes in Jatropha. Well-characterized promoters are required for transgenic breeding of Jatropha curcas, a biofuel feedstock with great potential for production of bio-diesel and bio-jet fuel. In this study, a ubiquitin extension protein gene from Jatropha, designated JcUEP, was found to be ubiquitously expressed. Thus, we isolated a 1.2 kb fragment of the 5' flanking region of JcUEP and evaluated its activity as a constitutive promoter in Arabidopsis and Jatropha using the β-glucuronidase (GUS) reporter gene. As expected, histochemical GUS assays showed that the JcUEP promoter was active in all Arabidopsis and Jatropha tissues tested. We also compared the activity of the JcUEP promoter with that of the cauliflower mosaic virus 35S (CaMV35S) promoter, a well-characterized constitutive promoter conferring strong transgene expression in dicot species, in various tissues of Jatropha. In a fluorometric GUS assay, the two promoters showed similar activities in stems, mature leaves and female flowers, while the CaMV35S promoter was more effective than the JcUEP promoter in other tissues, especially young leaves and inflorescences. In addition, the JcUEP promoter retained its activity under stress conditions in low temperature, high salt, dehydration and exogenous ABA treatments. These results suggest that the plant-derived JcUEP promoter could be an alternative to the CaMV35S promoter for driving constitutive overexpression of transgenes in Jatropha and other plants.

  13. Cross-Layer Design for Video Transmission over Wireless Rician Slow-Fading Channels Using an Adaptive Multiresolution Modulation and Coding Scheme

    NASA Astrophysics Data System (ADS)

    Pei, Yong; Modestino, James W.

    2007-12-01

    We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels where the channel conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.

  14. Source-Adaptation-Based Wireless Video Transport: A Cross-Layer Approach

    NASA Astrophysics Data System (ADS)

    Qu, Qi; Pei, Yong; Modestino, James W.; Tian, Xusheng

    2006-12-01

    Real-time packet video transmission over wireless networks is expected to experience bursty packet losses that can cause substantial degradation to the transmitted video quality. In wireless networks, channel state information is hard to obtain in a reliable and timely manner due to the rapid change of wireless environments. However, the source motion information is always available and can be obtained easily and accurately from video sequences. Therefore, in this paper, we propose a novel cross-layer framework that exploits only the motion information inherent in video sequences and efficiently combines a packetization scheme, a cross-layer forward error correction (FEC)-based unequal error protection (UEP) scheme, an intracoding rate selection scheme as well as a novel intraframe interleaving scheme. Our objective and subjective results demonstrate that the proposed approach is very effective in dealing with the bursty packet losses occurring on wireless networks without incurring any additional implementation complexity or delay. Thus, the simplicity of our proposed system has important implications for the implementation of a practical real-time video transmission system.

  15. Effect of hyperinflation on inspiratory function of the diaphragm.

    PubMed

    Minh, V D; Dolan, G F; Konopka, R F; Moser, K M

    1976-01-01

    The inspiratory efficiency of the diaphragm during unilateral and bilateral electrophrenic stimulation (UEPS and BEPS) with constant stimulus was studied in seven dogs from FRC to 120% TLC. Alveolar pressures (PAl) were recorded during relaxation, BEPS, and UEPS at each lung volume in the closed respiratory system. From the PAl-lung volume curves, tidal volume (VT) and pressure developed by the diaphragm (Pmus) were derived. Results are summarized below. a) Hyperinflation impaired the inspiratory efficiency of the diaphragm, which behaved as an expiratory muscle beyond the lung volume of 103.7% TLC (Vinef). b) The diaphragm during UEPS became expiratory at the same Vinef as during BEPS. c) The VT-lung volume relationship was linear during BEPS, allowing simple quantitation of VT loss with hyperinflation and prediction of Vinef. d) With only one phrenic nerve stimulated, the functional loss was less pronounced in VT than in Pmus, as compared to BEPS, indicating that the respiratory system was more compliant during UEPS than BEPS. This compliance difference between UEPS and BEPS diminished with severe hyperinflation.

  16. Prioritized LT Codes

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding-latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks, each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniformly at random from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach, where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusually high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode" but also "what to encode" to achieve UEP. Another advantage of the priority encoding process is that the majority of high-priority data can be decoded sooner, since only a small number of code symbols are required to reconstruct high-priority data. This approach increases the likelihood that high-priority data are decoded before low-priority data. The Prioritized LT code scheme achieves an improvement in high-priority data decoding performance, as well as overall information recovery, without penalizing the decoding of low-priority data, assuming high-priority data constitute no more than half of a message block. The cost is the additional complexity required in the encoder. If extra computational resources are available at the transmitter, image, voice, and video transmission quality in terrestrial and space communications can benefit from accurate use of redundancy in protecting data with varying priorities.
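
    The encoding procedure described above translates almost line by line into code. In the sketch below, the robust-soliton parameters and the degree threshold that routes small-degree code symbols to the high-priority prefix are my assumptions; the abstract does not fix them.

    # Prioritized LT encoder sketch (Python), following the description
    # above; soliton parameters (c, delta) and the degree<=3 routing rule
    # are assumptions for illustration.
    import math, random

    random.seed(0)

    def robust_soliton(k, c=0.1, delta=0.5):
        """Robust soliton degree distribution over degrees 1..k."""
        s = c * math.log(k / delta) * math.sqrt(k)
        rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
        tau = [s / (d * k) if d < k / s else 0.0 for d in range(1, k + 1)]
        pivot = round(k / s)
        if 1 <= pivot <= k:
            tau[pivot - 1] = s * math.log(s / delta) / k
        z = sum(rho) + sum(tau)
        return [(r + t) / z for r, t in zip(rho, tau)]

    def prioritized_lt_encode(msg, n_high, n_out):
        """Yield (neighbour indices, XOR value) code symbols."""
        k = len(msg)
        dist = robust_soliton(k)
        covered = set()
        for _ in range(n_out):
            d = random.choices(range(1, k + 1), weights=dist)[0]
            if d <= 3 and len(covered) < n_high:
                pool = range(n_high)      # favour the high-priority prefix
            else:
                pool = range(k)           # conventional LT over whole block
            idx = random.sample(list(pool), min(d, len(pool)))
            covered.update(i for i in idx if i < n_high)
            value = 0
            for i in idx:
                value ^= msg[i]
            yield idx, value

    msg = [random.randint(0, 255) for _ in range(32)]  # 32 info symbols
    symbols = list(prioritized_lt_encode(msg, n_high=8, n_out=48))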

  17. Unequal error control scheme for dimmable visible light communication systems

    NASA Astrophysics Data System (ADS)

    Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan

    2017-01-01

    Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) at different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base-layer recovery than the equal error control (EEC) scheme for different dimming target values, and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.

  18. Two-Level Scheduling for Video Transmission over Downlink OFDMA Networks

    PubMed Central

    Tham, Mau-Luen

    2016-01-01

    This paper presents a two-level scheduling scheme for video transmission over downlink orthogonal frequency-division multiple access (OFDMA) networks. It aims to maximize the aggregate quality of the video users subject to the playback delay and resource constraints, by exploiting multiuser diversity and video characteristics. The upper level schedules the transmission of video packets among multiple users based on an overall target bit-error-rate (BER), the importance level of each packet, and a resource consumption efficiency factor. The lower level, in turn, renders unequal error protection (UEP) in terms of target BER among the scheduled packets by solving a weighted sum distortion minimization problem, where each user weight reflects the total importance level of the packets that have been scheduled for that user. Frequency-selective power is then water-filled over all the assigned subcarriers in order to leverage the potential channel coding gain. Realistic simulation results demonstrate that the proposed scheme significantly outperforms the state-of-the-art scheduling scheme by up to 6.8 dB in terms of peak signal-to-noise ratio (PSNR). A further test evaluates the suitability of equal power allocation, which is the common assumption in the literature. PMID:26906398
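
    The water-filling step is standard and easy to make concrete. The bisection search on the water level below is a textbook method, not necessarily the authors' implementation.

    # Classic water-filling (Python): P_i = max(0, mu - N_i/|H_i|^2), with
    # the water level mu found by bisection so that sum(P_i) = budget.
    def water_fill(inv_gains, budget, iters=60):
        """inv_gains[i] = noise_i / |H_i|^2; returns per-subcarrier powers."""
        lo, hi = 0.0, max(inv_gains) + budget     # bracket the water level
        for _ in range(iters):
            mu = (lo + hi) / 2
            used = sum(max(0.0, mu - g) for g in inv_gains)
            if used > budget:
                hi = mu
            else:
                lo = mu
        return [max(0.0, lo - g) for g in inv_gains]

    powers = water_fill([0.5, 1.0, 2.0, 4.0], budget=4.0)
    print([round(p, 3) for p in powers])   # [2.0, 1.5, 0.5, 0.0]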

  19. Quantization of high dimensional Gaussian vector using permutation modulation with application to information reconciliation in continuous variable QKD

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional-bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over the public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
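
    The PM quantization step has a closed-form nearest-codeword rule: because every codeword is a permutation of a single initial vector, a sort finds the closest codeword. The sketch below applies this to sample magnitudes as in the setup above; the initial vector is an arbitrary assumption.

    # Permutation-modulation quantizer sketch (Python/numpy): assign the
    # entries of the initial vector mu (sorted descending) to samples in
    # rank order of magnitude; the permutation itself is the label.
    import numpy as np

    def pm_quantize(x, mu):
        """Quantize |x| onto the permutation codebook of mu."""
        order = np.argsort(-np.abs(x))   # sample ranks, largest first
        q = np.empty_like(mu, dtype=float)
        q[order] = mu                    # largest sample gets largest entry
        return q, order                  # order serves as the PM label

    rng = np.random.default_rng(0)
    x = rng.normal(size=8)
    mu = np.array([2.0, 1.5, 1.0, 1.0, 0.5, 0.5, 0.2, 0.2])  # assumed
    q, label = pm_quantize(x, mu)
    print(np.round(np.abs(x), 2), "->", q)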

  20. Toward resolution of the debate regarding purported crypto-Jews in a Spanish-American population: evidence from the Y chromosome.

    PubMed

    Sutton, Wesley K; Knight, Alec; Underhill, Peter A; Neulander, Judith S; Disotell, Todd R; Mountain, Joanna L

    2006-01-01

    The ethnic heritage of northernmost New Spain, including present-day northern New Mexico and southernmost Colorado, USA, is intensely debated. Local Spanish-American folkways and anecdotal narratives led to claims that the region was colonized primarily by secret- or crypto-Jews. Despite ethnographic criticisms, the notion of substantial crypto-Jewish ancestry among Spanish-Americans persists. We tested the null hypothesis that Spanish-Americans of northern New Mexico carry essentially the same profile of paternally inherited DNA variation as the peoples of Iberia, and the relevant alternative hypothesis that the sampled Spanish-Americans possess inherited DNA variation that reflects Jewish ancestry significantly greater than that in present-day Iberia. We report frequencies of 19 Y-chromosome unique event polymorphism (UEP) biallelic markers for 139 men from across northern New Mexico and southern Colorado, USA, who self-identify as 'Spanish-American'. We used three different statistical tests of differentiation to compare frequencies of major UEP-defined clades or haplogroups with published data for Iberians, Jews, and other Mediterranean populations. We also report frequencies of derived UEP markers within each major haplogroup, compared with published data for relevant populations. All tests of differentiation showed that, for frequencies of the major UEP-defined clades, Spanish-Americans and Iberians are statistically indistinguishable. All other pairwise comparisons, including between Spanish-Americans and Jews, and Iberians and Jews, revealed highly significant differences in UEP frequencies. Our results indicate that paternal genetic inheritance of Spanish-Americans is indistinguishable from that of Iberians and refute the popular and widely publicized scenario of significant crypto-Jewish ancestry of the Spanish-American population.

  21. Quality optimization of H.264/AVC video transmission over noisy environments using a sparse regression framework

    NASA Astrophysics Data System (ADS)

    Pandremmenou, K.; Tziortziotis, N.; Paluri, S.; Zhang, W.; Blekas, K.; Kondi, L. P.; Kumar, S.

    2015-03-01

    We propose the use of the Least Absolute Shrinkage and Selection Operator (LASSO) regression method in order to predict the Cumulative Mean Squared Error (CMSE) incurred by the loss of individual slices in video transmission. We extract a number of quality-relevant features from the H.264/AVC video sequences, which are given as input to the LASSO. This method not only keeps a subset of the features that have the strongest effects on video quality, but also produces accurate CMSE predictions. In particular, we study the LASSO regression through two different architectures: the Global LASSO (G.LASSO) and the Local LASSO (L.LASSO). In G.LASSO, a single regression model is trained for all slice types together, while in L.LASSO, motivated by the fact that the values of some features depend closely on the considered slice type, each slice type has its own regression model, in an effort to improve LASSO's prediction capability. Based on the predicted CMSE values, we group the video slices into four priority classes. Additionally, we consider a video transmission scenario over a noisy channel, where Unequal Error Protection (UEP) is applied to all prioritized slices. The provided results demonstrate the efficiency of LASSO in estimating CMSE with high accuracy, using only a few features.
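
    The G.LASSO pipeline maps directly onto scikit-learn. The features and CMSE targets below are synthetic stand-ins for the paper's slice features, used only to show the fit-then-prioritize flow.

    # G.LASSO sketch (Python): one LASSO model from slice features to CMSE,
    # then four priority classes by predicted CMSE (synthetic data).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_slices, n_features = 500, 12
    X = rng.normal(size=(n_slices, n_features))
    true_w = np.zeros(n_features)
    true_w[:3] = [4.0, -2.0, 1.0]          # only a few features matter
    cmse = X @ true_w + 0.1 * rng.normal(size=n_slices)

    model = Lasso(alpha=0.1).fit(X, cmse)
    print("kept features:", np.flatnonzero(model.coef_))  # sparse selection

    pred = model.predict(X)
    edges = np.quantile(pred, [0.25, 0.5, 0.75])
    classes = np.digitize(pred, edges)     # four UEP priority classes
    print("slices per class:", np.bincount(classes))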

  22. Genetic homogeneity across Bantu-speaking groups from Mozambique and Angola challenges early split scenarios between East and West Bantu populations.

    PubMed

    Alves, Isabel; Coelho, Margarida; Gignoux, Christopher; Damasceno, Albertino; Prista, Antonio; Rocha, Jorge

    2011-02-01

    The large scale spread of Bantu-speaking populations remains one of the most debated questions in African population history. In this work we studied the genetic structure of 19 Bantu-speaking groups from Mozambique and Angola using a multilocus approach based on 14 newly developed compound haplotype systems (UEPSTRs), each consisting of a rapidly evolving short tandem repeat (STR) closely linked to a unique event polymorphism (UEP). We compared the ability of UEPs, STRs and UEPSTRs to document genetic variation at the intercontinental level and among the African Bantu populations, and found that UEPSTR systems clearly provided more resolution than UEPs or STRs alone. The observed patterns of genetic variation revealed high levels of genetic homogeneity between major populations from Angola and Mozambique, with two main outliers: the Kuvale from Angola and the Chopi from Mozambique. Within Mozambique, two Kaskazi-speaking populations from the far north (Yao and Mwani) and two Nyasa-speaking groups from the Zambezi River basin (Nyungwe and Sena) could be differentiated from the remaining groups, but no further population structure was observed across the country. The close genetic relationship between most sampled Bantu populations is consistent with high degrees of interaction between peoples living in savanna areas located to the south of the rainforest. Our results highlight the role of gene flow during the Bantu expansions and show that the genetic evidence accumulated so far is becoming increasingly difficult to reconcile with widely accepted models postulating an early split between eastern and western Bantu populations.

  23. Quantifying the potential export flows of used electronic products in Macau: a case study of PCs.

    PubMed

    Yu, Danfeng; Song, Qingbin; Wang, Zhishi; Li, Jinhui; Duan, Huabo; Wang, Jinben; Wang, Chao; Wang, Xu

    2017-12-01

    Used electronic products (UEPs) have attracted worldwide attention because part of the e-waste stream may be exported from developed countries to developing countries in the name of UEP. On the basis of extensive foreign trade data for electronic products (e-products), this study adopted the trade data approach (TDA) to quantify the potential exports of UEPs in Macau, taking personal computers (PCs) as a case study. The results show that desktop mainframes, LCD monitors, and CRT monitors had more low-unit-value trades with higher trade volumes in the past 10 years, while laptop and tablet PCs, as the newer technologies, had higher ratios of high-unit-value trades. During the period 2005-2015, the total mean exports of used laptop and tablet PCs, desktop mainframes, and LCD monitors were approximately 18,592, 79,957, and 43,177 units, respectively, while the possible export volume of used CRT monitors was higher, up to 430,098 units in 2000-2010. Note that these potential export volumes could be lower bounds because not all used PCs may be shipped under the PC trade code. For all four kinds of used PCs, the majority (61.6-98.82%) of the export volumes went to Hong Kong, followed by Mainland China and Taiwan. Since 2011 there have been no CRT monitor exports; however, exports of the other kinds of used PCs will continue in Macau in the future. The outcomes are helpful for understanding and managing the current export situation of used products in Macau, and can also provide a reference for other countries and regions.

  24. Design of Intelligent Cross-Layer Routing Protocols for Airborne Wireless Networks Under Dynamic Spectrum Access Paradigm

    DTIC Science & Technology

    2011-05-01

    … rate convolutional codes or the prioritized Rate-Compatible Punctured Convolutional (RCPC) codes. The RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code. …

  25. 78 FR 46908 - Announcement of Grant Application Deadlines and Funding Levels for the Assistance to High Energy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... materials. DATES: You may submit completed grant applications on paper or electronically according to the... materials may be obtained electronically through: http://www.rurdev.usda.gov/UEP_Our_Grant_Programs.html... other materials. FOR FURTHER INFORMATION CONTACT: Kristi Kubista-Hovis, Senior Policy Advisor, Rural...

  26. Consulting Stakeholders in the Development of an Environmental Policy Implementation Plan: A Delphi Study at Dalhousie University

    ERIC Educational Resources Information Center

    Wright, Tarah Sharon Alexandra

    2004-01-01

    This paper reports on a Delphi Study undertaken at Dalhousie University in which a multi-stakeholder panel was consulted in order to generate ideas that could be incorporated into an Implementation Plan for the University Environmental Policy (UEP). The objectives of the study were twofold. First, the study endeavored to develop ideas as to the…

  27. Detection Rate and Sweep Width in Visual Search

    DTIC Science & Technology

    1979-11-01


  28. Autonomous frequency domain identification: Theory and experiment

    NASA Technical Reports Server (NTRS)

    Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.

    1989-01-01

    The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. The spectral estimate ĥ = P_uy/P_uu is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p − p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available to be used for optimization of robust controller performance and stability.
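
    The estimation chain maps onto standard cross-spectral estimators. A sketch with scipy, using a toy first-order plant and a deliberately mismatched parametric fit; all numerical values are assumptions.

    # Sketch (Python/scipy) of the chain: p_hat = P_uy/P_uu, then
    # delta_hat = P_ue/P_uu with e = y - y_hat from a parametric fit.
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    n = 4096
    u = rng.normal(size=n)                      # stochastic input
    y = signal.lfilter([0.2], [1.0, -0.9], u)   # toy "true" plant
    y += 0.05 * rng.normal(size=n)              # measurement noise

    f, Puu = signal.welch(u, nperseg=256)
    _, Puy = signal.csd(u, y, nperseg=256)
    p_hat = Puy / Puu                           # nonparametric estimate

    y_fit = signal.lfilter([0.18], [1.0, -0.88], u)  # mismatched fit
    e = y - y_fit                               # output error
    _, Pue = signal.csd(u, e, nperseg=256)
    delta_hat = Pue / Puu                       # additive uncertainty
    print(np.max(np.abs(delta_hat)))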

  29. Error protection capability of space shuttle data bus designs

    NASA Technical Reports Server (NTRS)

    Proch, G. E.

    1974-01-01

    Error protection as a means of assuring the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
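
    When a linear code is used purely for detection, the undetected-word-error probability has a closed form in the code's weight distribution: P_ud(p) = sum over w >= 1 of A_w p^w (1-p)^(n-w). The sketch below uses the standard (7,4) Hamming weight enumerator as a stand-in, since the abstract does not reproduce the shuttle MEI/MIA codes.

    # Undetected-error probability on a BSC (Python): an error pattern is
    # undetected iff it equals a nonzero codeword.
    def p_undetected(weight_dist, n, p):
        return sum(a * p**w * (1 - p)**(n - w)
                   for w, a in weight_dist.items() if w > 0)

    hamming_7_4 = {0: 1, 3: 7, 4: 7, 7: 1}   # A_w of the (7,4) Hamming code
    for p in (1e-2, 1e-4):                   # nominal vs. very clean channel
        print(f"p = {p:g}: P_ud = {p_undetected(hamming_7_4, 7, p):.3e}")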

  30. Effects of errors and gaps in spatial data sets on assessment of conservation progress.

    PubMed

    Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C

    2013-10-01

    Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments.

  31. The selective power of causality on memory errors.

    PubMed

    Marsh, Jessecae K; Kulkofsky, Sarah

    2015-01-01

    We tested the influence of causal links on the production of memory errors in a misinformation paradigm. Participants studied a set of statements about a person, which were presented as either individual statements or pairs of causally linked statements. Participants were then provided with causally plausible and causally implausible misinformation. We hypothesised that studying information connected with causal links would promote representing information in a more abstract manner. As such, we predicted that causal information would not provide an overall protection against memory errors, but rather would preferentially help in the rejection of misinformation that was causally implausible, given the learned causal links. In two experiments, we measured whether the causal linkage of information would be generally protective against all memory errors or only selectively protective against certain types of memory errors. Causal links helped participants reject implausible memory lures, but did not protect against plausible lures. Our results suggest that causal information may promote an abstract storage of information that helps prevent only specific types of memory errors.

  32. Combined group ECC protection and subgroup parity protection

    DOEpatents

    Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin

    2013-06-18

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
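
    The row-generation step of the claim is easy to sketch. Choosing which of these vectors to assign so that two designated columns of P also deliver the subgroup parities is the substantive (patented) selection rule and is not attempted here.

    # Sketch (Python): enumerate m-bit row candidates with an odd weight of
    # at least three, assign one per data bit, and form the m ECC bits as
    # the GF(2) product of the data row-vector with P.
    from itertools import combinations

    def candidate_rows(m):
        """Yield m-bit vectors with an odd number (>= 3) of ones."""
        for weight in range(3, m + 1, 2):          # weights 3, 5, 7, ...
            for ones in combinations(range(m), weight):
                row = [0] * m
                for i in ones:
                    row[i] = 1
                yield row

    m, n = 8, 16                                   # 8 check bits, 16 data bits
    P = [row for _, row in zip(range(n), candidate_rows(m))]

    data = [1, 0, 1, 1] * 4
    ecc = [sum(d & P[i][j] for i, d in enumerate(data)) % 2 for j in range(m)]
    print(ecc)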

  33. Processor register error correction management

    DOEpatents

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  34. Combined group ECC protection and subgroup parity protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Chen, Dong; Heidelberger, Philip

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.

  35. Three Essays In and Tests of Theoretical Urban Economics

    NASA Astrophysics Data System (ADS)

    Zhao, Weihua

    This dissertation consists of three essays on urban economics. The three essays are related to urban spatial structure change, energy consumption, greenhouse gas emissions, and housing redevelopment. Chapter 1 answers the question: Does the classic Standard Urban Model still describe the growth of cities? Chapter 2 derives the implications of telework for urban spatial structure, energy consumption, and greenhouse gas emissions. Chapter 3 investigates the long-run effects of minimum lot size zoning on neighborhood redevelopment. Chapter 1 identifies a new implication of the classic Standard Urban Model, the "unitary elasticity property (UEP)": the sum of the elasticity of central density and the elasticity of land area with respect to population change is approximately equal to unity. When this implication of the SUM is tested, it fits US cities fairly well. Further analysis demonstrates that topographic barriers and the age of the housing stock are the key factors explaining deviation from the UEP. Chapter 2 develops a numerical urban simulation model with households that are able to telework to investigate the urban form, congestion, energy consumption, and greenhouse gas emission implications of telework. Simulation results suggest that by reducing transportation costs, telework causes sprawl, with associated longer commutes and consumption of larger homes, both of which increase energy consumption. Overall effects depend on who captures the gains from telework (workers versus firms), urban land use regulation such as height limits or greenbelts, and the fraction of workers participating in telework. The net effects of telework on energy use and GHG emissions are generally negligible. Chapter 3 applies dynamic programming to investigate the long-run effects of minimum lot size zoning on neighborhood redevelopment. With numerical simulation, comparative dynamic results show that minimum lot size zoning can delay initial land conversion and slow demolition and housing redevelopment. Initially, minimum lot size zoning is not binding. However, as the city grows, it becomes binding and can effectively distort housing supply. It can lower both the floor area ratio and residential density, and reduce aggregate housing supply. Overall, minimum lot size zoning can stabilize the path of structure/land ratios, housing service levels, structure density, and housing prices. In addition, minimum lot size zoning provides more incentive for developers to maintain buildings, slowing structure deterioration and raising the minimum level of housing services provided over the life cycle of development.

  36. RD Optimized, Adaptive, Error-Resilient Transmission of MJPEG2000-Coded Video over Multiple Time-Varying Channels

    NASA Astrophysics Data System (ADS)

    Bezan, Scott; Shirani, Shahram

    2006-12-01

    To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.

  37. Physical implementation of protected qubits

    NASA Astrophysics Data System (ADS)

    Douçot, B.; Ioffe, L. B.

    2012-07-01

    We review the general notion of topological protection of quantum states in spin models and its relation with the ideas of quantum error correction. We show that topological protection can be viewed as a Hamiltonian realization of error correction: for a quantum code for which the minimal number of errors that remain undetected is N, the corresponding Hamiltonian model of the effects of the environment noise appears only in the Nth order of the perturbation theory. We discuss the simplest model Hamiltonians that realize topological protection and their implementation in superconducting arrays. We focus on two dual realizations: in one the protected state is stored in the parity of the Cooper pair number, in the other, in the parity of the flux number. In both cases the superconducting arrays allow a number of fault-tolerant operations that should make the universal quantum computation possible.

  38. A real-time analysis of parent-child emotion discussions: the interaction is reciprocal.

    PubMed

    Morelen, Diana; Suveg, Cynthia

    2012-12-01

    The current study examined reciprocal parent-child emotion-related behaviors and links to child emotional and psychological functioning. Fifty-four mothers, fathers, and children (7 to 12 years old) participated in four emotion discussions about a time when the child felt angry, happy, sad, and anxious. Supportive emotion parenting (SEP), unsupportive emotion parenting (UEP), and child adaptive/maladaptive emotion regulation (ER) behaviors were coded using Noldus behavioral research software (Noldus Information Technology, 2007). Parents were more likely to follow children's adaptive emotion regulation with supportive versus unsupportive emotional responses, and children were more likely to show adaptive versus maladaptive emotion regulation in response to supportive emotion parenting. Interaction patterns involving unsupportive emotion parenting related to child psychological and emotional outcomes. The results provide empirical support for an evocative person-environment framework of emotion socialization and identify the ways in which particular patterns of interaction relate to psychological functioning in youth.

  39. Towards fault tolerant adiabatic quantum computation.

    PubMed

    Lidar, Daniel A

    2008-04-25

    I show how to protect adiabatic quantum computation (AQC) against decoherence and certain control errors, using a hybrid methodology involving dynamical decoupling, subsystem and stabilizer codes, and energy gaps. Corresponding error bounds are derived. As an example, I show how to perform decoherence-protected AQC against local noise using at most two-body interactions.

  40. Patient Safety and Quality Improvement Act of 2005.

    PubMed

    Fassett, William E

    2006-05-01

    To review Public Law (PL) 109-41-the Patient Safety and Quality Improvement Act of 2005 (PSQIA)-and summarize key medication error research that contributed to congressional recognition of the need for this legislation. Relevant publications related to medication error research, patient safety programs, and the legislative history of and commentary on PL 109-41, published in English, were identified by MEDLINE, PREMEDLINE, Thomas (Library of Congress), and Internet search engine-assisted searches using the terms healthcare quality, medication error, patient safety, PL 109-41, and quality improvement. Additional citations were identified from references cited in related publications. All relevant publications were reviewed. Summarization of the PSQIA was carried out by legal textual analysis. PL 109-41 provides privilege and confidentiality for patient safety work product (PSWP) developed for reporting to patient safety organizations (PSOs). It does not establish federal mandatory reporting of significant errors; rather, it relies on existing state reporting systems. The Act does not preempt stronger state protections for PSWP. The Agency for Healthcare Research and Quality is directed to certify PSOs and promote the establishment of a national network of patient safety databases. Whistleblower protection and penalties for unauthorized disclosure of PSWP are among its enforcement mechanisms. The Act protects clinicians who report minor errors to PSOs and protects the information from disclosure, but providers must increasingly embrace a culture of interdisciplinary concern for patient safety if this protection is to have real impact on patient care.

  41. Fatigue proofing: The role of protective behaviours in mediating fatigue-related risk in a defence aviation environment.

    PubMed

    Dawson, Drew; Cleggett, Courtney; Thompson, Kirrilly; Thomas, Matthew J W

    2017-02-01

    In the military or emergency services, operational requirements and/or community expectations often preclude formal prescriptive working time arrangements as a practical means of reducing fatigue-related risk. In these environments, workers sometimes employ adaptive or protective behaviours informally to reduce the risk (i.e. likelihood or consequence) associated with a fatigue-related error. These informal behaviours enable employees to reduce risk while continuing to work while fatigued. In this study, we documented the use of informal protective behaviours in a group of defence aviation personnel including flight crews. Semi-structured interviews were conducted to determine whether and which protective behaviours were used to mitigate fatigue-related error. The 18 participants were from aviation-specific trades and included aircrew (pilots and air-crewman) and aviation maintenance personnel (aeronautical engineers and maintenance personnel). Participants identified 147 ways in which they and/or others act to reduce the likelihood or consequence of a fatigue-related error. These formed seven categories of fatigue-reduction strategies. The two most novel categories are discussed in this paper: task-related and behaviour-based strategies. Broadly speaking, these results indicate that fatigued military flight and maintenance crews use protective 'fatigue-proofing' behaviours to reduce the likelihood and/or consequence of fatigue-related error and were aware of the potential benefits. It is also important to note that these behaviours are not typically part of the formal safety management system. Rather, they have evolved spontaneously as part of the culture around protecting team performance under adverse operating conditions. When compared with previous similar studies, aviation personnel were more readily able to understand the idea of fatigue proofing than those from a fire-fighting background. These differences were thought to reflect different cultural attitudes toward error and formal training using principles of Crew Resource Management and Threat and Error Management.

  2. An Alternative Time Metric to Modified Tau for Unmanned Aircraft System Detect And Avoid

    NASA Technical Reports Server (NTRS)

    Wu, Minghong G.; Bageshwar, Vibhor L.; Euteneuer, Eric A.

    2017-01-01

    A new horizontal time metric, Time to Protected Zone, is proposed for use in the Detect and Avoid (DAA) systems of unmanned aircraft systems (UAS). This time metric has three advantages over the currently adopted time metric, modified tau: it corresponds to a physical event, it is linear with time, and it can be directly used to prioritize intruding aircraft. The protected zone defines an area around the UAS whose dimensions can be a function of each intruding aircraft's surveillance measurement errors. Even with its advantages, the Time to Protected Zone depends explicitly on encounter geometry and may be more sensitive to surveillance sensor errors than modified tau. To quantify its sensitivity, simulation of 972 encounters using realistic sensor models and a proprietary fusion tracker is performed. Two sensitivity metrics, the probability of time reversal and the average absolute time error, are computed for both the Time to Protected Zone and modified tau. Results show that the sensitivity of the Time to Protected Zone is comparable to that of modified tau if the dimensions of the protected zone are adequately defined.
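
    The abstract does not give the metric definitions, but both quantities are easy to illustrate under common DAA conventions. A minimal sketch, assuming the standard ACAS-style definition of modified tau and a circular protected zone with straight-line relative motion (the function names, zone radius, and example geometry are invented for illustration):

    ```python
    import math

    def modified_tau(r, r_dot, dmod):
        """Modified tau (per the common ACAS-style definition):
        tau_mod = -(r^2 - DMOD^2) / (r * r_dot), valid while closing (r_dot < 0)."""
        if r_dot >= 0:
            return float("inf")  # range diverging: no horizontal threat
        return -(r * r - dmod * dmod) / (r * r_dot)

    def time_to_protected_zone(rel_pos, rel_vel, rpz):
        """Time until the intruder's linearly extrapolated relative track
        first crosses a circular protected zone of radius rpz around ownship.
        Solves |p + v*t| = rpz for the smallest t >= 0."""
        px, py = rel_pos
        vx, vy = rel_vel
        a = vx * vx + vy * vy
        b = 2.0 * (px * vx + py * vy)
        c = px * px + py * py - rpz * rpz
        if c <= 0:
            return 0.0            # already inside the protected zone
        disc = b * b - 4 * a * c
        if a == 0 or disc < 0:
            return float("inf")   # track never reaches the zone
        t = (-b - math.sqrt(disc)) / (2 * a)
        return t if t >= 0 else float("inf")

    # Intruder 5 NM out, closing head-on at 400 kt (NM and kt -> times in hours)
    print(time_to_protected_zone((5.0, 0.0), (-400.0, 0.0), rpz=1.2) * 3600)  # ~34 s
    print(modified_tau(r=5.0, r_dot=-400.0, dmod=1.2) * 3600)                 # ~42 s
    ```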

  3. Analysis of quantum error correction with symmetric hypergraph states

    NASA Astrophysics Data System (ADS)

    Wagner, T.; Kampermann, H.; Bruß, D.

    2018-03-01

    Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.

  4. Fault-Tolerant Signal Processing Architectures with Distributed Error Control.

    DTIC Science & Technology

    1985-01-01

    Zm, Revisited," Information and Control, Vol. 37, pp. 100-104, 1978. 13. J. Wakerly , Error Detecting Codes. SeIf-Checkino Circuits and Applications ...However, the newer results concerning applications of real codes are still in the publication process. Hence, two very detailed appendices are included to...significant entities to be protected. While the distributed finite field approach afforded adequate protection, its applicability was restricted and

  5. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    PubMed

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  6. 78 FR 19981 - Special Conditions: Embraer S.A., Model EMB-550 Airplanes; Flight Envelope Protection: High Speed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-03

    ...; Flight Envelope Protection: High Speed Limiting AGENCY: Federal Aviation Administration (FAA), DOT... protection: high speed limiting. As published, the document contained an error in that the Special Conditions...

  7. Interactive Video Coding and Transmission over Heterogeneous Wired-to-Wireless IP Networks Using an Edge Proxy

    NASA Astrophysics Data System (ADS)

    Pei, Yong; Modestino, James W.

    2004-12-01

    Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload is quantitatively evaluated and the effects of each impairment are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
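
    The concatenated structure described above is independent of the particular codes chosen, so it can be illustrated with simple stand-ins. In the sketch below, a single XOR parity packet stands in for the paper's Reed-Solomon erasure layer and a rate-1/3 repetition code stands in for RCPC; both substitutions are deliberate simplifications, and all names and payloads are illustrative:

    ```python
    def xor_parity(packets):
        """Outer-code stand-in for RS: one XOR parity packet recovers any
        single lost packet (RAID-4 style)."""
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                parity[i] ^= b
        return bytes(parity)

    def repeat3(data):
        """Inner-code stand-in for RCPC: rate-1/3 repetition per byte."""
        return bytes(b for byte in data for b in (byte, byte, byte))

    def majority3(coded):
        """Inner decode: bitwise majority vote over each byte triple."""
        out = bytearray()
        for i in range(0, len(coded), 3):
            a, b, c = coded[i:i + 3]
            out.append((a & b) | (b & c) | (a & c))
        return bytes(out)

    # --- demo: two data packets + parity, sent through a noisy channel ---
    data = [b"payload-0", b"payload-1"]
    parity = xor_parity(data)
    sent = [repeat3(p) for p in data + [parity]]

    # channel: packet 1 is lost entirely; packet 0 suffers a bit error
    received = [bytearray(sent[0]), None, bytearray(sent[2])]
    received[0][4] ^= 0x10  # flip one bit in one repetition copy

    # inner decoding fixes the bit error in the surviving packets
    decoded = [majority3(p) if p is not None else None for p in received]
    # outer decoding rebuilds the lost packet from the parity packet
    lost = decoded.index(None)
    decoded[lost] = xor_parity([p for p in decoded if p is not None])
    assert decoded[:2] == data
    ```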

  8. Personal protective equipment for the Ebola virus disease: A comparison of 2 training programs.

    PubMed

    Casalino, Enrique; Astocondor, Eugenio; Sanchez, Juan Carlos; Díaz-Santana, David Enrique; Del Aguila, Carlos; Carrillo, Juan Pablo

    2015-12-01

    Personal protective equipment (PPE) for preventing Ebola virus disease (EVD) includes basic PPE (B-PPE) and enhanced PPE (E-PPE). Our aim was to compare conventional training programs (CTPs) and reinforced training programs (RTPs) on the use of B-PPE and E-PPE. Four groups were created, designated CTP-B, CTP-E, RTP-B, and RTP-E. All groups received the same theoretical training, followed by 3 practical training sessions. A total of 120 students were included (30 per group). In all 4 groups, the frequency and number of total errors and critical errors decreased significantly over the course of the training sessions (P < .01). The RTP was associated with a greater reduction in the number of total errors and critical errors (P < .0001). During the third training session, we noted an error frequency of 7%-43%, a critical error frequency of 3%-40%, 0.3-1.5 total errors, and 0.1-0.8 critical errors per student. The B-PPE groups had the fewest errors and critical errors (P < .0001). Our results indicate that both training methods improved students' proficiency, that B-PPE appears to be easier to use than E-PPE, that the RTP achieved better proficiency for both PPE types, and that a number of students are still potentially at risk for EVD contamination despite the improvements observed during the training.

  9. 77 FR 41699 - Transportation of Household Goods in Interstate Commerce; Consumer Protection Regulations...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-16

    [Tabular material from the notice, summarizing revisions to estimated annual burden and respondent costs due to an agency error: for IC1, the "Ready to Move?" collection, new cost $288,000 versus old cost $720,000, a reduction of $432,000; for "Rights & Responsibilities," new cost $3,264,000 versus old cost $8,160,...]

  10. Human Factors Risk Analyses of a Doffing Protocol for Ebola-Level Personal Protective Equipment: Mapping Errors to Contamination.

    PubMed

    Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T

    2018-03-05

    Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
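
    As a rough illustration of the failure modes and effects analysis used in this kind of study, the sketch below computes a conventional FMEA risk priority number (severity × occurrence × detectability) for a few doffing steps. The step names and scores are hypothetical, not the study's data, and the study's exact risk-index formula may differ:

    ```python
    # Minimal FMEA sketch: risk index per failure mode as the product of
    # severity, occurrence, and detectability scores (1 = best, 10 = worst).
    # Steps and scores below are invented illustrations, not study data.
    failure_modes = [
        # (doffing step, failure mode, severity, occurrence, detectability)
        ("hand hygiene",      "skipped between steps", 7, 5, 4),
        ("PAPR hood removal", "hood contacts scrubs",  8, 4, 5),
        ("outer gloves",      "bare-hand contact",     9, 2, 3),
    ]

    def risk_index(sev, occ, det):
        return sev * occ * det

    ranked = sorted(failure_modes,
                    key=lambda m: risk_index(*m[2:]), reverse=True)
    for step, mode, sev, occ, det in ranked:
        print(f"{step:18s} {mode:22s} RPN={risk_index(sev, occ, det)}")
    ```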

  11. 40 CFR 257.25 - Assessment monitoring program.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Assessment monitoring program. 257.25 Section 257.25 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA... unit caused the contamination, or that the statistically significant increase resulted from error in...

  12. Analysis of Position Error Headway Protection

    DOT National Transportation Integrated Search

    1975-07-01

    An analysis is developed to determine safe headway on PRT systems that use point-follower control. Periodic measurements of the position error relative to a nominal trajectory provide warning against the hazards of overspeed and unexpected stop. A co...

  13. Fault-tolerant quantum error detection.

    PubMed

    Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher

    2017-10-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.

  14. Strategic planning to reduce medical errors: Part I--diagnosis.

    PubMed

    Waldman, J Deane; Smith, Howard L

    2012-01-01

    Despite extensive dialogue and a continuing stream of proposed medical practice revisions, medical errors and adverse impacts persist. Connectivity of vital elements is often underestimated or not fully understood. Part I of this paper analyzes medical errors from a systems dynamics viewpoint. Our analysis, continued in Part II, suggests that the most fruitful strategies for dissolving medical errors include facilitating physician learning, educating patients about appropriate expectations surrounding treatment regimens, and creating "systematic" patient protections rather than depending on (nonexistent) perfect providers.

  15. Safe Passage

    ERIC Educational Resources Information Center

    Razwick, Jeff

    2007-01-01

    Many schools are almost entirely reliant on alarms and sprinklers for their fire protection. As these devices need to be triggered and supplied with power or water to work properly, they are vulnerable to errors. To provide adequate safety, a good fire-protection program must have three primary elements: fire protection and suppression, and…

  16. Re-Assessing Poverty Dynamics and State Protections in Britain and the US: The Role of Measurement Error

    ERIC Educational Resources Information Center

    Worts, Diana; Sacker, Amanda; McDonough, Peggy

    2010-01-01

    This paper addresses a key methodological challenge in the modeling of individual poverty dynamics--the influence of measurement error. Taking the US and Britain as case studies and building on recent research that uses latent Markov models to reduce bias, we examine how measurement error can affect a range of important poverty estimates. Our data…

  17. Content-based multiple bitstream image transmission over noisy channels.

    PubMed

    Cao, Lei; Chen, Chang Wen

    2002-01-01

    In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, and therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.
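
    A minimal sketch of the content-weighted allocation idea: given importance weights for the wavelet-block classes, distribute a fixed parity budget so that more important classes end up with lower channel-code rates. The greedy rule, class names, and weights below are illustrative assumptions, not the paper's optimization:

    ```python
    # Sketch of content-weighted UEP: split a fixed parity budget across
    # bitstream classes in proportion to their importance, so blocks that
    # matter more to PSNR/visual quality get lower (stronger) code rates.
    # Class names and weights are invented, not the paper's values.

    def allocate_parity(classes, budget):
        """Greedy allocation: each parity unit goes to the class with the
        highest remaining importance per unit of protection already given."""
        parity = {name: 0 for name, _ in classes}
        for _ in range(budget):
            best = max(classes, key=lambda c: c[1] / (1 + parity[c[0]]))
            parity[best[0]] += 1
        return parity

    classes = [("smooth-background", 1.0), ("texture", 2.5), ("edges", 5.0)]
    alloc = allocate_parity(classes, budget=16)
    for name, weight in classes:
        k, n = 8, 8 + alloc[name]          # 8 data units per class
        print(f"{name:18s} weight={weight:3.1f}  code rate k/n = {k}/{n}")
    ```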

  18. USDA APHIS | Wildlife Damage

    Science.gov Websites

    & Speeches USDA Newsroom Videos Pet Travel Blog Z6_LO4C1BS0LO4EB0AER7MEEI2G47 Error Error Biosecurity ESF11 Farm Bill Horse Protection Hungry Pests Pet Travel Trade Veterinary Accreditation USDA.gov

  19. MISR: protection from ourselves

    NASA Technical Reports Server (NTRS)

    Nolan, T.; Varanasi, P.

    2004-01-01

    Outlines lessons learned by the Instrument Operations Team of NASA/JPL's Terra Multi-angle Imaging SpectroRadiometer (MISR) mission, narrating the story behind "MISR: Protection from Ourselves!" and describing, in detail, how the MISR instrument survived operator errors.

  20. Fault-tolerant quantum error detection

    PubMed Central

    Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher

    2017-01-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889

  1. Canadian drivers' attitudes regarding preventative responses to driving while impaired by alcohol.

    PubMed

    Vanlaar, Ward; Nadeau, Louise; McKiernan, Anna; Hing, Marisela M; Ouimet, Marie Claude; Brown, Thomas G

    2017-09-01

    In many jurisdictions, a risk assessment following a first driving while impaired (DWI) offence is used to guide administrative decision making regarding driver relicensing. Decision error in this process has important consequences for public security on one hand, and the social and economic well-being of drivers on the other. Decision theory posits that consideration of the costs and benefits of decision error is needed, and in the public health context, this should include community attitudes. The objective of the present study was to clarify whether Canadians prefer decision error that: i) better protects the public (i.e., false positives); or ii) better protects the offender (i.e., false negatives). A random sample of male and female adult drivers (N=1213) from the five most populated regions of Canada was surveyed on drivers' preference for a protection of the public approach versus a protection of DWI drivers approach in resolving assessment decision error, and the relative value (i.e., value ratio) they imparted to both approaches. The roles of region, sex, and age in drivers' value ratios were also appraised. Seventy percent of Canadian drivers preferred a protection of the public from DWI approach, with the overall relative ratio given to this preference, compared to the alternative protection of the driver approach, being 3:1. Females expressed a significantly higher value ratio (M=3.4, SD=3.5) than males (M=3.0, SD=3.4), p<0.05. Regression analysis showed that both days of alcohol use in the past 30 days (CI for B: -0.07, -0.02) and frequency of driving over legal BAC limits in the past year (CI for B: -0.19, -0.01) were significantly but modestly related to lower value ratios, adjusted R2=0.014, p<0.001. Regional differences were also detected. Canadian drivers strongly favour a protection of the public approach to dealing with uncertainty in assessment, even at the risk of false positives. Accounting for community attitudes concerning DWI prevention and the individual differences that influence them could contribute to more informed, coherent and effective regional policies and prevention program development.

  2. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes

    NASA Astrophysics Data System (ADS)

    Marvian, Milad; Lidar, Daniel A.

    2017-01-01

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.
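
    Schematically, and using assumed notation rather than the paper's, the construction adds a penalty of strength E_P built from the code's generators to the encoded computational Hamiltonian:

    \[
    H = \bar{H}_{\mathrm{enc}} + E_P H_P, \qquad [\bar{H}_{\mathrm{enc}}, H_P] = 0,
    \]

    so that any error detected by the subsystem code anticommutes with some term of H_P and therefore costs energy of order E_P, with complete suppression recovered in the limit E_P → ∞.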

  3. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes.

    PubMed

    Marvian, Milad; Lidar, Daniel A

    2017-01-20

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  4. Recursive Construction of Noiseless Subsystem for Qudits

    NASA Astrophysics Data System (ADS)

    Güngördü, Utkan; Li, Chi-Kwong; Nakahara, Mikio; Poon, Yiu-Tung; Sze, Nung-Sing

    2014-03-01

    When the environmental noise acting on the system has certain symmetries, a subsystem of the total system can avoid errors. Encoding information into such a subsystem is advantageous since it does not require any error syndrome measurements, which may introduce further errors to the system. However, utilizing such a subsystem becomes impractical as the number of qudits in the system grows. A recursive scheme offers a solution to this problem. Here, we review the recursive construction introduced in earlier work, which can asymptotically protect 1/d of the qudits in the system against collective errors.

  5. Entanglement renormalization, quantum error correction, and bulk causality

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.; Kastoryano, Michael J.

    2017-04-01

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively better protected against erasure errors at larger length scales. In particular, an approximate variant of a holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.

  6. [Individual prevention of occupational contact dermatitis: protective gloves and skin protection recommendations as part of the patient management scheme by the public statutory employers' liability insurance].

    PubMed

    Wilke, A; Skudlik, C; Sonsmann, F K

    2018-05-02

    The dermatologist's procedure is a pivotal tool for early recognition of occupational contact dermatitis (OCD), for reporting OCD cases to the statutory accident insurance and for treating the diseases. The employer is in charge of implementing skin protection measures at the workplace. However, in terms of an individual prevention approach it may be necessary to propose targeted skin protection recommendations in specific patient cases. The patient's own skin protection behavior significantly contributes to regenerating and maintaining healthy skin. This behavior includes the use of occupational skin products, and in particular the correct use of appropriately selected protective gloves. Protective gloves are the most important personal protective measure in the prevention of OCD. Prevention services, occupational health and safety specialists, occupational physicians and centers specialized in occupational dermatology can support the identification of suitable protective measures. Nowadays, suitable protective gloves exist for (almost) every occupational activity and exposure. However, improper use in practice can itself become a risk factor for the skin (e.g., incorrectly donned gloves). Therefore, it is of utmost importance to identify application errors, to educate patients in terms of skin protection and to motivate them to adopt appropriate skin protection behavior. With particular focus on protective gloves, this article gives an overview of various types, materials and potentially glove-related allergens, presents strategies for reducing occlusion effects and discusses some typical application errors and solutions.

  7. IPTV multicast with peer-assisted lossy error control

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.

  8. Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches

    USGS Publications Warehouse

    Nevers, Meredith B.; Whitman, Richard L.

    2011-01-01

    Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R2) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.
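
    A minimal sketch of the comparison the study performs: fit an empirical model of log E. coli on hydrometeorological predictors, then count type I and type II management errors against the 235 CFU/100 ml single-sample standard. The data here are synthetic and the predictor names are assumed examples; the study's models and variables differ by beach:

    ```python
    # Sketch of model-based beach management: regress log10(E. coli) on
    # hydrometeorological predictors, then score type I/II management errors
    # against the 235 CFU/100 ml single-sample standard. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    turbidity = rng.gamma(2.0, 5.0, n)
    rainfall_24h = rng.exponential(5.0, n)
    wave_height = rng.rayleigh(0.5, n)
    log_ecoli = (1.2 + 0.03 * turbidity + 0.05 * rainfall_24h
                 + 0.4 * wave_height + rng.normal(0, 0.5, n))

    X = np.column_stack([np.ones(n), turbidity, rainfall_24h, wave_height])
    beta, *_ = np.linalg.lstsq(X, log_ecoli, rcond=None)
    pred = X @ beta

    std = np.log10(235.0)                         # single-sample standard
    exceed_true, exceed_pred = log_ecoli > std, pred > std
    type_i = np.sum(exceed_pred & ~exceed_true)   # closed, water was fine
    type_ii = np.sum(~exceed_pred & exceed_true)  # open, water was bad
    print(f"type I errors: {type_i}, type II errors: {type_ii}")
    ```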

  9. Experimental implementation of the Bacon-Shor code with 10 entangled photons

    NASA Astrophysics Data System (ADS)

    Gimeno-Segovia, Mercedes; Sanders, Barry C.

    The number of qubits that can be effectively controlled in quantum experiments is growing, reaching a regime where small quantum error-correcting codes can be tested. The Bacon-Shor code is a simple quantum code that protects against the effect of an arbitrary single-qubit error. In this work, we propose an experimental implementation of said code in a post-selected linear optical setup, similar to the recently reported 10-photon GHZ generation experiment. In the procedure we propose, an arbitrary state is encoded into the protected Shor code subspace, and after undergoing a controlled single-qubit error, is successfully decoded. BCS appreciates financial support from Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter, which is an NSF Physics Frontiers Center(NSF Grant PHY-1125565) with support of the Moore Foundation(GBMF-2644).

  10. A radiation tolerant Data link board for the ATLAS Tile Cal upgrade

    NASA Astrophysics Data System (ADS)

    Åkerstedt, H.; Bohm, C.; Muschter, S.; Silverstein, S.; Valdes, E.

    2016-01-01

    This paper describes the latest, full-functionality revision of the high-speed data link board developed for the Phase-2 upgrade of the ATLAS hadronic Tile Calorimeter. The link board design is highly redundant, with digital functionality implemented in two Xilinx Kintex-7 FPGAs and two Molex QSFP+ electro-optic modules whose uplinks run at 10 Gbps. The FPGAs are remotely configured through two radiation-hard CERN GBTx deserialisers (GBTx), which also provide the LHC-synchronous system clock. The redundant design eliminates virtually all single-point error modes, and a combination of triple-mode redundancy (TMR) and internal and external scrubbing will provide adequate protection against radiation-induced errors. The small portion of the FPGA design that cannot be protected by TMR will be the dominant source of radiation-induced errors, even though that area is small.
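
    The core of TMR is a bitwise two-out-of-three majority vote; the snippet below shows it in Python purely for illustration (in the actual board this is FPGA logic):

    ```python
    def tmr_vote(a, b, c):
        """Bitwise 2-of-3 majority vote, the heart of triple-module
        redundancy: a single upset copy is outvoted by the other two."""
        return (a & b) | (b & c) | (a & c)

    word = 0b1011_0010
    upset = word ^ 0b0000_1000          # an SEU flips one bit in one copy
    assert tmr_vote(word, upset, word) == word
    ```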

  11. [Topical problems of sanitary and epidemiologic examination concerning projects of sanitary protection zones in airports].

    PubMed

    Isayeva, A M; Zibaryov, E V

    2015-01-01

    The article covers data on major errors in the specification of sanitary protection zones for civil airports, revealed through sanitary epidemiologic examination. The authors focus attention on the necessity of developing a unified methodological approach to the evaluation of aviation noise effects when justifying the sanitary protection zone of an airport and examining sanitary and epidemiologic project documents.

  12. Topological Qubits from Valence Bond Solids

    NASA Astrophysics Data System (ADS)

    Wang, Dong-Sheng; Affleck, Ian; Raussendorf, Robert

    2018-05-01

    Topological qubits based on SU(N)-symmetric valence-bond solid models are constructed. A logical topological qubit is the ground subspace with twofold degeneracy, which is due to the spontaneous breaking of a global parity symmetry. A logical Z rotation by an angle 2π/N, for any integer N > 2, is provided by a global twist operation, which is of a topological nature and protected by the energy gap. A general concatenation scheme with standard quantum error-correction codes is also proposed, which can lead to better codes. Generic error-correction properties of symmetry-protected topological order are also demonstrated.

  13. Havens: Explicit Reliable Memory Regions for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    2016-01-01

    Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in the system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects. We provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
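
    A toy version of the parity mechanism, with an invented Haven API (the paper's interface and implementation are not given here): an XOR parity block maintained over the objects placed in a region allows a single corrupted object, once identified, to be rebuilt from the parity and the remaining objects:

    ```python
    # Sketch of a software parity "haven": an XOR parity block over the
    # objects placed in the region lets a single corrupted entry be rebuilt.
    # Class and method names are hypothetical.
    class Haven:
        def __init__(self, size):
            self.objects = []
            self.parity = bytearray(size)

        def place(self, obj: bytes):
            """Place an object in the haven, folding it into the parity."""
            for i, b in enumerate(obj):
                self.parity[i] ^= b
            self.objects.append(bytearray(obj))
            return len(self.objects) - 1

        def repair(self, idx):
            """Rebuild object idx from parity and the other objects."""
            fixed = bytearray(self.parity)
            for j, obj in enumerate(self.objects):
                if j != idx:
                    for i, b in enumerate(obj):
                        fixed[i] ^= b
            self.objects[idx] = fixed
            return bytes(fixed)

    h = Haven(8)
    i = h.place(b"criticaL")
    h.objects[i][3] ^= 0xFF            # simulate a transient memory error
    assert h.repair(i) == b"criticaL"  # object restored from parity
    ```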

  14. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.

  15. 40 CFR 1065.602 - Statistics.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING... number of degrees of freedom, ν, as follows, noting that the εi are the errors (e.g., differences... a gas concentration is measured continuously from the raw exhaust of an engine, its flow-weighted...

  16. Evidence for aversive withdrawal response to own errors.

    PubMed

    Hochman, Eldad Yitzhak; Milman, Valery; Tal, Liron

    2017-10-01

    A recent model suggests that error detection gives rise to defensive motivation, prompting protective behavior. Models of active avoidance behavior predict that it should grow larger with threat imminence and avoidance. We hypothesized that in a task requiring left or right key strikes, error detection would drive an avoidance reflex manifested by rapid withdrawal of an erring finger, growing larger with threat imminence and avoidance. In experiment 1, three groups differing by error-related threat imminence and avoidance performed a flanker task requiring left or right force-sensitive key strikes. As predicted, errors were followed by rapid force release, growing faster with threat imminence and opportunity to evade threat. In experiment 2, we established a link between error key release time (KRT) and the subjective sense of inner threat. In a simultaneous multiple regression analysis of three error-related compensatory mechanisms (error KRT, flanker effect, error correction RT), only error KRT was significantly associated with increased compulsive checking tendencies. We propose that error response withdrawal reflects an error-withdrawal reflex.

  17. 45 CFR 61.6 - Reporting errors, omissions, revisions or whether an action is on appeal.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Reporting errors, omissions, revisions or whether an action is on appeal. 61.6 Section 61.6 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION HEALTHCARE INTEGRITY AND PROTECTION DATA BANK FOR FINAL ADVERSE INFORMATION ON...

  18. 45 CFR 61.6 - Reporting errors, omissions, revisions or whether an action is on appeal.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Reporting errors, omissions, revisions or whether an action is on appeal. 61.6 Section 61.6 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION HEALTHCARE INTEGRITY AND PROTECTION DATA BANK FOR FINAL ADVERSE INFORMATION ON...

  19. 45 CFR 61.6 - Reporting errors, omissions, revisions or whether an action is on appeal.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Reporting errors, omissions, revisions or whether an action is on appeal. 61.6 Section 61.6 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION HEALTHCARE INTEGRITY AND PROTECTION DATA BANK FOR FINAL ADVERSE INFORMATION ON...

  20. Breaches of health information: are electronic records different from paper records?

    PubMed

    Sade, Robert M

    2010-01-01

    Breaches of electronic medical records constitute a type of healthcare error, but should be considered separately from other types of errors because the national focus on the security of electronic data justifies special treatment of medical information breaches. Guidelines for protecting electronic medical records should be applied equally to paper medical records.

  1. 40 CFR 1065.602 - Statistics.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING... number of degrees of freedom, ν, as follows, noting that the εi are the errors (e.g., differences... measured continuously from the raw exhaust of an engine, its flow-weighted mean concentration is the sum of...

  2. 40 CFR 1065.602 - Statistics.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING... number of degrees of freedom, ν, as follows, noting that the εi are the errors (e.g., differences... measured continuously from the raw exhaust of an engine, its flow-weighted mean concentration is the sum of...

  3. Efficient detection of dangling pointer error for C/C++ programs

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzhe

    2017-08-01

    Dangling pointer errors are pervasive in C/C++ programs and very hard to detect. This paper introduces an efficient detector for dangling pointer errors in C/C++ programs. By selectively leaving some memory accesses unmonitored, our method reduces the memory monitoring overhead and thus achieves better performance than previous methods. Experiments show that our method achieves an average speedup of 9% over a previous compiler-instrumentation-based method and more than 50% over a previous page-protection-based method.
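
    The detection idea, reduced to a toy model: the runtime tracks which allocations have been freed and checks each monitored access against that set, while accesses deemed safe are left unmonitored to cut overhead. This sketch is purely illustrative (the paper instruments compiled C/C++ rather than simulating a heap), and all names are invented:

    ```python
    # Toy model of dangling-pointer detection: record freed allocation IDs
    # and check each *monitored* access against them; accesses the compiler
    # can prove safe skip the check, which is where the speedup comes from.
    class CheckedHeap:
        def __init__(self):
            self.next_id = 0
            self.live = {}          # allocation id -> storage

        def malloc(self, size):
            self.next_id += 1
            self.live[self.next_id] = bytearray(size)
            return self.next_id     # "pointer" = allocation id

        def free(self, ptr):
            del self.live[ptr]

        def load(self, ptr, off, monitored=True):
            if monitored and ptr not in self.live:
                raise RuntimeError(f"dangling pointer dereference: #{ptr}")
            return self.live[ptr][off]

    heap = CheckedHeap()
    p = heap.malloc(16)
    heap.free(p)
    try:
        heap.load(p, 0)             # caught: use after free
    except RuntimeError as e:
        print(e)
    ```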

  4. Apply network coding for H.264/SVC multicasting

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Kuo, C.-C. Jay

    2008-08-01

    In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second is error concealment at the decoder end to compensate for lost packets. A large amount of research work has been done in both areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through improved throughput. In this work, an algebraic model is given to analyze that performance. By exploiting the linear combination of video packets at nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to receive according to their capacity constraints, but special design is needed to improve video transmission performance when applying network coding.
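
    The rank-deficiency failure mode mentioned above is easy to demonstrate. The sketch below uses random linear network coding over GF(2) for brevity (real systems typically work over GF(2^8)); a receiver can decode a generation of K source packets only once its collected coefficient vectors reach rank K:

    ```python
    # Random linear network coding demo: nodes forward random XOR
    # combinations of K source packets; a receiver decodes only when its
    # coefficient matrix is full rank. Rank deficiency = undecodable batch.
    import random

    def gf2_rank(rows):
        """Rank over GF(2) of coefficient vectors packed as integer bitmasks."""
        rank, rows = 0, list(rows)
        while rows:
            pivot = rows.pop()
            if pivot == 0:
                continue
            rank += 1
            lsb = pivot & -pivot
            rows = [r ^ pivot if r & lsb else r for r in rows]
        return rank

    K = 4                                   # source packets per generation
    random.seed(1)
    received = [random.getrandbits(K) for _ in range(K)]
    if gf2_rank(received) == K:
        print("full rank: generation decodable")
    else:
        print("rank deficient: need more coded packets")  # the failure mode
    ```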

  5. Palmprint Based Multidimensional Fuzzy Vault Scheme

    PubMed Central

    Liu, Hailun; Sun, Dongmei; Xiong, Ke; Qiu, Zhengding

    2014-01-01

    Fuzzy vault scheme (FVS) is one of the most popular biometric cryptosystems for biometric template protection. However, the error correcting code (ECC) proposed in FVS is not appropriate for dealing with real-valued biometric intraclass variances. In this paper, we propose a multidimensional fuzzy vault scheme (MDFVS) in which a general subspace error-tolerant mechanism is designed and embedded into FVS to handle intraclass variances. Palmprint is one of the most important biometrics; to protect palmprint templates, a palmprint-based MDFVS implementation is also presented. Experimental results show that the proposed scheme not only deals with intraclass variances effectively but also maintains accuracy while enhancing security. PMID:24892094

  6. Benchmarking Distance Control and Virtual Drilling for Lateral Skull Base Surgery.

    PubMed

    Voormolen, Eduard H J; Diederen, Sander; van Stralen, Marijn; Woerdeman, Peter A; Noordmans, Herke Jan; Viergever, Max A; Regli, Luca; Robe, Pierre A; Berkelbach van der Sprenkel, Jan Willem

    2018-01-01

    Novel audiovisual feedback methods were developed to improve image guidance during skull base surgery by providing audiovisual warnings when the drill tip enters a protective perimeter set at a distance around anatomic structures ("distance control") and visualizing bone drilling ("virtual drilling"). To benchmark the drill damage risk reduction provided by distance control, to quantify the accuracy of virtual drilling, and to investigate whether the proposed feedback methods are clinically feasible. In a simulated surgical scenario using human cadavers, 12 unexperienced users (medical students) drilled 12 mastoidectomies. Users were divided into a control group using standard image guidance and 3 groups using distance control with protective perimeters of 1, 2, or 3 mm. Damage to critical structures (sigmoid sinus, semicircular canals, facial nerve) was assessed. Neurosurgeons performed another 6 mastoidectomy/trans-labyrinthine and retro-labyrinthine approaches. Virtual errors as compared with real postoperative drill cavities were calculated. In a clinical setting, 3 patients received lateral skull base surgery with the proposed feedback methods. Users drilling with distance control protective perimeters of 3 mm did not damage structures, whereas the groups using smaller protective perimeters and the control group injured structures. Virtual drilling maximum cavity underestimations and overestimations were 2.8 ± 0.1 and 3.3 ± 0.4 mm, respectively. Feedback methods functioned properly in the clinical setting. Distance control reduced the risks of drill damage proportional to the protective perimeter distance. Errors in virtual drilling reflect spatial errors of the image guidance system. These feedback methods are clinically feasible.
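
    A minimal sketch of the distance-control check, assuming tracked tip coordinates and spherical models of the critical structures (all names, coordinates, and the perimeter value below are illustrative, not the system's actual geometry):

    ```python
    # Sketch of "distance control": warn when the tracked drill tip enters a
    # protective perimeter set at a distance around a critical structure.
    import math

    structures = {                  # center (mm), structure radius (mm)
        "sigmoid sinus":      ((12.0, 40.0, 22.0), 4.0),
        "facial nerve":       ((18.0, 35.0, 30.0), 1.5),
        "semicircular canal": ((15.0, 45.0, 28.0), 2.5),
    }

    def check_tip(tip, perimeter_mm=3.0):
        """Return warnings for structures whose protective perimeter
        (structure surface + perimeter_mm) the tip has entered."""
        warnings = []
        for name, (center, radius) in structures.items():
            dist = math.dist(tip, center) - radius
            if dist <= perimeter_mm:
                warnings.append((name, round(dist, 1)))
        return warnings

    print(check_tip((14.0, 42.0, 26.0)))   # structures within the perimeter
    ```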

  7. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE PAGES

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...

    2017-02-15

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4).

  8. Creating illusions of knowledge: learning errors that contradict prior knowledge.

    PubMed

    Fazio, Lisa K; Barber, Sarah J; Rajaram, Suparna; Ornstein, Peter A; Marsh, Elizabeth J

    2013-02-01

    Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors (e.g., "Franklin invented the light bulb"). On a later general-knowledge test, participants reproduced story errors despite previously answering the questions correctly. This misinformation effect was found even for questions that were answered correctly on the initial test with the highest level of confidence. Furthermore, prior knowledge offered no protection against errors entering the knowledge base; the misinformation effect was equivalent for previously known and unknown facts. Errors can enter the knowledge base even when learners have the knowledge necessary to catch the errors.

  9. Transient fault behavior in a microprocessor: A case study

    NASA Technical Reports Server (NTRS)

    Duba, Patrick

    1989-01-01

    An experimental analysis is described which studies the susceptibility of a microprocessor-based jet engine controller to upsets caused by current and voltage transients. A design automation environment which allows the run-time injection of transients and the tracing of their impact from the device level to the pin level is described. The resulting error data are categorized by the charge levels of the injected transients, by location, and by their potential to cause logic upsets, latched errors, and pin errors. The results show a 3 picocoulomb threshold, below which the transients have little impact. An Arithmetic and Logic Unit transient is most likely to result in logic upsets and pin errors (i.e., to impact the external environment). Transients in the countdown unit are potentially serious since they can result in latched errors, thus causing latent faults. Suggestions to protect the processor against these errors, by incorporating internal error detection and transient suppression techniques, are also made.

  10. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    PubMed Central

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter

    2017-01-01

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466

  11. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4).

  12. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.
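
    EQNINT's cross-checking idea can be sketched with a symbolic algebra check: parse the formula as coded in the new program and the reference formula from the production system, and flag any pair that is not algebraically equivalent. The example formulas and the use of sympy are assumptions for illustration, not details of EQNINT itself:

    ```python
    # EQNINT-style cross-check, sketched with sympy: a formula coded in a
    # new program is verified against the reference formula from the
    # production system by testing algebraic equivalence. Formulas invented.
    import sympy as sp

    def formulas_match(reference: str, candidate: str) -> bool:
        ref, cand = sp.sympify(reference), sp.sympify(candidate)
        return sp.simplify(ref - cand) == 0

    # reference formula vs. a refactored coding of it, and vs. a typo
    reference = "face_value * mortality_rate / (1 + interest_rate)"
    candidate = "(face_value * mortality_rate) * (1 + interest_rate)**-1"
    typo      = "face_value * mortality_rate / (1 - interest_rate)"

    print(formulas_match(reference, candidate))  # True  -> formulas agree
    print(formulas_match(reference, typo))       # False -> flag before release
    ```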

  13. Pilot interaction with automated airborne decision making systems

    NASA Technical Reports Server (NTRS)

    Hammer, John M.; Wan, C. Yoon; Vasandani, Vijay

    1987-01-01

    The current research is focused on the detection of human error and protection from its consequences. A program for monitoring pilot error by comparing pilot actions to a script was described; it dealt primarily with routine errors (slips) that occurred during checklist activity, and the model to which operator actions were compared was a script. Current research is an extension along these two dimensions. The ORS fault detection aid uses a sophisticated device model rather than a script. The newer initiative, the model-based and constraint-based warning system, uses an even more sophisticated device model and is intended to prevent all types of error, not just slips or bad decisions.

  14. Color vision deficits and laser eyewear protection for soft tissue laser applications.

    PubMed

    Teichman, J M; Vassar, G J; Yates, J T; Angle, B N; Johnson, A J; Dirks, M S; Thompson, I M

    1999-03-01

    Laser safety considerations require urologists to wear laser eye protection. Laser eye protection devices block transmittance of specific light wavelengths and may distort color perception. We tested whether urologists risk color confusion when wearing laser eye protection devices for laser soft tissue applications. Subjects were tested with the Farnsworth-Munsell 100-Hue Test without (controls) and with laser eye protection devices for carbon dioxide, potassium titanyl phosphate (KTP), neodymium (Nd):YAG and holmium:YAG lasers. Color deficits were characterized by error scores, polar graphs, confusion angles, confusion index, scatter index and color axes. Laser eye protection device spectral transmittance was tested with spectrophotometry. Mean total error scores plus or minus standard deviation were 13+/-5 for controls, and 44+/-31 for carbon dioxide, 273+/-26 for KTP, 22+/-6 for Nd:YAG and 14+/-8 for holmium:YAG devices (p <0.001). The KTP laser eye protection polar graphs, and confusion and scatter indexes revealed moderate blue-yellow and red-green color confusion. Color axes indicated no significant deficits for controls, or carbon dioxide, Nd:YAG or holmium:YAG laser eye protection in any subject compared to blue-yellow color vision deficits in 8 of 8 tested with KTP laser eye protection (p <0.001). Spectrophotometry demonstrated that light was blocked with laser eye protection devices for carbon dioxide less than 380, holmium:YAG greater than 850, Nd:YAG less than 350 and greater than 950, and KTP less than 550 and greater than 750 nm. The laser eye protection device for KTP causes significant blue-yellow and red-green color confusion. Laser eye protection devices for carbon dioxide, holmium:YAG and Nd:YAG cause no significant color confusion compared to controls. The differences are explained by laser eye protection spectrophotometry characteristics and visual physiology.

  15. Seven-year incidence of uncorrected refractive error among an elderly Chinese population in Shihpai, Taiwan: The Shihpai Eye Study

    PubMed Central

    Kuang, T-M; Tsai, S-Y; Liu, C J-L; Ko, Y-C; Lee, S-M; Chou, P

    2016-01-01

    Purpose To report the 7-year incidence of uncorrected refractive error in a metropolitan Chinese elderly population. Methods The Shihpai Eye Study 2006 included 460/824 (55.8%) subjects (age range 72–94 years) of the 1361 participants in the 1999 baseline survey for a follow-up eye examination. Visual acuity was assessed using a Snellen chart; uncorrected refractive error was defined as presenting visual acuity (naked eye if without spectacles, and with distance spectacles if worn) in the better eye of <6/12 that improved to no impairment (≥6/12) after refractive correction. Results The 7-year incidence of uncorrected refractive error was 10.5% (95% confidence interval (CI): 7.6–13.4%). 92.7% of participants with uncorrected and 77.8% with undercorrected refractive error were able to improve visual acuity by at least two lines with refractive correction. In multivariate analysis controlling for covariates, uncorrected refractive error was significantly related to myopia (relative risk (RR): 3.15; 95% CI: 1.31–7.58) and living alone (RR: 2.94; 95% CI: 1.14–7.53), whereas wearing distance spectacles during the examination was protective (RR: 0.35; 95% CI: 0.14–0.88). Conclusion Our study indicated that the incidence of uncorrected refractive error was high (10.5%) in this elderly Chinese population. Living alone and myopia are predisposing factors, whereas wearing distance spectacles at examination is protective. PMID:26795416

  16. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small, and this risk decreased exponentially as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
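
    For context, the optimal information size is commonly computed like a conventional two-group trial sample size for a binary outcome. A minimal sketch, assuming two-sided alpha of 5% and 90% power (the heterogeneity adjustment used in the trial-sequential-analysis literature is omitted):

    ```python
    from scipy.stats import norm

    def optimal_information_size(control_risk, rrr, alpha=0.05, power=0.90):
        """Conventional two-group sample-size calculation often used as the
        optimal information size for a binary outcome.
        Returns the total number of patients (both groups combined)."""
        p_c = control_risk
        p_e = control_risk * (1.0 - rrr)      # experimental-group risk
        p_bar = (p_c + p_e) / 2.0             # pooled risk
        z_a = norm.ppf(1.0 - alpha / 2.0)     # two-sided significance
        z_b = norm.ppf(power)                 # desired power
        return 4.0 * (z_a + z_b) ** 2 * p_bar * (1.0 - p_bar) / (p_c - p_e) ** 2

    # e.g., 10% control-group risk and an anticipated 10% RRR:
    print(round(optimal_information_size(0.10, 0.10)))
    ```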

  17. A Goal VPN Protection Profile for Protecting Sensitive Information

    DTIC Science & Technology

    2000-07-10

    security for the systems in which they are used. Nothing could be further from the truth. There are no perfect security solutions, and no...establishment/termination, failures, and errors); • provide for directly connected (local hard-wire connection) and remote (over the network) interfaces... the TOERU is left unattended, procedures such as media encryption or secure storage of the hard drive will be used to ensure the protection of stored

  18. A method of predicting flow rates required to achieve anti-icing performance with a porous leading edge ice protection system

    NASA Technical Reports Server (NTRS)

    Kohlman, D. L.; Albright, A. E.

    1983-01-01

    An analytical method was developed for predicting the minimum flow rates required to provide anti-ice protection with a porous leading edge fluid ice protection system. The predicted flow rates agree, with an average error of less than 10 percent, with six experimentally determined flow rates from tests in the NASA Icing Research Tunnel on a general aviation wing section.

  19. Critical older driver errors in a national sample of serious U.S. crashes.

    PubMed

    Cicchino, Jessica B; McCartt, Anne T

    2015-07-01

    Older drivers are at increased risk of crash involvement per mile traveled. The purpose of this study was to examine older driver errors in serious crashes to determine which errors are most prevalent. The National Highway Traffic Safety Administration's National Motor Vehicle Crash Causation Survey collected in-depth, on-scene data for a nationally representative sample of 5470 U.S. police-reported passenger vehicle crashes during 2005-2007 for which emergency medical services were dispatched. There were 620 crashes involving 647 drivers aged 70 and older, representing 250,504 crash-involved older drivers. The proportions of various critical errors made by drivers aged 70 and older were compared with those made by drivers aged 35-54. Driver error was the critical reason for 97% of crashes involving older drivers. Among older drivers who made critical errors, the most common were inadequate surveillance (33%) and misjudgment of the length of a gap between vehicles or of another vehicle's speed, illegal maneuvers, medical events, and daydreaming (6% each). Inadequate surveillance (33% vs. 22%) and gap or speed misjudgment errors (6% vs. 3%) were more prevalent among older drivers than middle-aged drivers. Seventy-one percent of older drivers' inadequate surveillance errors were due to looking and not seeing another vehicle or failing to see a traffic control rather than failing to look, compared with 40% of inadequate surveillance errors among middle-aged drivers. About two-thirds (66%) of older drivers' inadequate surveillance errors and 77% of their gap or speed misjudgment errors were made when turning left at intersections. When older drivers traveled off the edge of the road or traveled over the lane line, this was most commonly due to non-performance errors such as medical events (51% and 44%, respectively), whereas middle-aged drivers were involved in these crash types for other reasons. Gap or speed misjudgment errors and inadequate surveillance errors were significantly more prevalent among female older drivers than among female middle-aged drivers, but the prevalence of these errors did not differ significantly between older and middle-aged male drivers. These errors comprised 51% of errors among older female drivers but only 31% among older male drivers. Efforts to reduce older driver crash involvements should focus on diminishing the likelihood of the most common driver errors. Countermeasures that simplify or remove the need to make left turns across traffic, such as roundabouts, protected left turn signals, and diverging diamond intersection designs, could decrease the frequency of inadequate surveillance and gap or speed misjudgment errors. In the future, vehicle-to-vehicle and vehicle-to-infrastructure communications may also help protect older drivers from these errors. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Utilization of robotic-arm assisted total knee arthroplasty for soft tissue protection.

    PubMed

    Sultan, Assem A; Piuzzi, Nicolas; Khlopas, Anton; Chughtai, Morad; Sodhi, Nipun; Mont, Michael A

    2017-12-01

    Despite the well-established success of total knee arthroplasty (TKA), iatrogenic ligamentous and soft tissue injuries are infrequent but serious potential complications that can have a devastating impact on clinical outcomes. These injuries are often related to technical errors and excessive soft tissue manipulation, particularly during bony resections. Recently, robotic-arm assisted TKA was introduced and demonstrated promising results with potential technical advantages over manual surgery in implant positioning and mechanical accuracy. Furthermore, soft tissue protection is an additional potential advantage offered by these systems that can reduce inadvertent human technical errors encountered during standard manual resections. Therefore, given the relative paucity of literature, we attempted to answer the following questions: (1) Does robotic-arm assisted TKA offer a technical advantage that allows enhanced soft tissue protection? (2) What is the available evidence about soft tissue protection? Recently introduced models of robotic-arm assisted TKA systems with advanced technology showed promising clinical outcomes and soft tissue protection at short- and mid-term follow-up, with results comparable or superior to manual TKA. In this review, we attempted to explore this dimension of robotics in TKA and investigate the soft tissue related complications currently reported in the literature.

  1. Dynamically protected cat-qubits: a new paradigm for universal quantum computation

    NASA Astrophysics Data System (ADS)

    Mirrahimi, Mazyar; Leghtas, Zaki; Albert, Victor V.; Touzard, Steven; Schoelkopf, Robert J.; Jiang, Liang; Devoret, Michel H.

    2014-04-01

    We present a new hardware-efficient paradigm for universal quantum computation which is based on encoding, protecting and manipulating quantum information in a quantum harmonic oscillator. This proposal exploits multi-photon driven dissipative processes to encode quantum information in logical bases composed of Schrödinger cat states. More precisely, we consider two schemes. In a first scheme, a two-photon driven dissipative process is used to stabilize a logical qubit basis of two-component Schrödinger cat states. While such a scheme ensures a protection of the logical qubit against the photon dephasing errors, the prominent error channel of single-photon loss induces bit-flip type errors that cannot be corrected. Therefore, we consider a second scheme based on a four-photon driven dissipative process which leads to the choice of four-component Schrödinger cat states as the logical qubit. Such a logical qubit can be protected against single-photon loss by continuous photon number parity measurements. Next, applying some specific Hamiltonians, we provide a set of universal quantum gates on the encoded qubits of each of the two schemes. In particular, we illustrate how these operations can be rendered fault-tolerant with respect to various decoherence channels of participating quantum systems. Finally, we also propose experimental schemes based on quantum superconducting circuits and inspired by methods used in Josephson parametric amplification, which should allow one to achieve these driven dissipative processes along with the Hamiltonians ensuring the universal operations in an efficient manner.
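
    The encoding can be illustrated numerically with QuTiP (an illustration, not the authors' implementation): build a two-component cat state and verify that photon-number parity, the quantity monitored in the four-component scheme, flips when a single photon is lost.

    ```python
    import numpy as np
    from qutip import coherent, num, expect, destroy

    N = 40                       # Fock-space truncation
    alpha = 2.0                  # coherent-state amplitude (illustrative)

    # Two-component (even) cat state |alpha> + |-alpha>, normalized.
    cat2 = (coherent(N, alpha) + coherent(N, -alpha)).unit()

    # Photon-number parity operator exp(i*pi*n); continuous parity
    # monitoring is what protects the four-component cat encoding
    # against single-photon loss.
    parity = (1j * np.pi * num(N)).expm()
    print("parity of even cat:", np.real(expect(parity, cat2)))        # ~ +1

    # A single-photon loss event (annihilation operator) flips the parity.
    lossy = (destroy(N) * cat2).unit()
    print("parity after photon loss:", np.real(expect(parity, lossy)))  # ~ -1
    ```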

  2. A Quantitative Analysis of the Effect of Simulation on Medication Administration in Nursing Students

    ERIC Educational Resources Information Center

    Scudmore, Casey

    2013-01-01

    Medication errors are a leading cause of injury and death in health care, and nurses are the last line of defense for patient safety. Nursing educators must develop curriculum to effectively teach nursing students to prevent medication errors and protect the public. The purpose of this quantitative, quasi-experimental study was to determine if…

  3. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to effectively protect the streams. A novel technique for adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for transmission of H.264/AVC streams.
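
    To make the unequal-protection step concrete, here is a minimal sketch using the Python reedsolo package: the three slice groups receive progressively fewer Reed-Solomon parity bytes. Group sizes and parity strengths are illustrative, not the paper's optimized allocation.

    ```python
    from reedsolo import RSCodec

    # Hypothetical byte payloads for the three slice groups, ordered by
    # importance (group A = most important macroblocks).
    groups = {"A": b"\x01" * 100, "B": b"\x02" * 100, "C": b"\x03" * 100}

    # Unequal protection: more Reed-Solomon parity bytes for more
    # important groups (values are illustrative).
    parity_bytes = {"A": 32, "B": 16, "C": 8}

    encoded = {g: RSCodec(parity_bytes[g]).encode(data)
               for g, data in groups.items()}

    for g, code in encoded.items():
        t = parity_bytes[g] // 2            # RS corrects nsym/2 byte errors
        print(f"group {g}: {len(code)} bytes sent, corrects up to {t} byte errors")
    ```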

  4. Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiala, David J; Mueller, Frank; Engelmann, Christian

    Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the best suited protocols for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
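
    The detection/correction logic reduces to comparing replica messages and, with triple redundancy, outvoting a corrupt copy. A minimal sketch of that voting rule (plain Python, not RedMPI's MPI-layer implementation):

    ```python
    from collections import Counter

    def vote(messages):
        """Majority-vote a list of byte strings produced by redundant
        replicas of the same MPI send. With triple redundancy a single
        corrupted replica is both detected and corrected; with double
        redundancy a mismatch can only be detected."""
        tally = Counter(messages)
        winner, count = tally.most_common(1)[0]
        if count == len(messages):
            return winner, "clean"
        if count >= 2:
            return winner, "corrected"      # outvoted a corrupt replica
        return None, "detected-uncorrectable"

    # One replica silently corrupted:
    msg = b"simulation step 42 checksum 0xBEEF"
    bad = b"simulation step 42 checksum 0xBEEC"
    print(vote([msg, msg, bad]))   # triple redundancy: corrected
    print(vote([msg, bad]))        # double redundancy: detected only
    ```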

  5. Video Transmission for Third Generation Wireless Communication Systems

    PubMed Central

    Gharavi, H.; Alamouti, S. M.

    2001-01-01

    This paper presents a twin-class unequally protected video transmission system over wireless channels. Video partitioning based on a separation of the Variable Length Coded (VLC) Discrete Cosine Transform (DCT) coefficients within each block is considered for constant bitrate (CBR) transmission. In the splitting process the fraction of bits assigned to each of the two partitions is adjusted according to the requirements of the unequal error protection scheme employed. Subsequently, partitioning is applied to the ITU-T H.263 coding standard. As a transport vehicle, we have considered one of the leading third generation cellular radio standards known as WCDMA. A dual-priority transmission system is then invoked on the WCDMA system where the video data, after being broken into two streams, is unequally protected. We use a very simple error correction coding scheme for illustration and then propose more sophisticated forms of unequal protection of the digitized video signals. We show that this strategy results in a significantly higher quality of the reconstructed video data when it is transmitted over time-varying multipath fading channels. PMID:27500033
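
    The partitioning step can be sketched simply: within each block, the early (low-frequency) coefficients go to the high-priority partition, with the split fraction tuned to the unequal-protection budget. The symbols and fraction below are illustrative, not taken from the paper.

    ```python
    def split_block(vlc_coeffs, frac_high=0.4):
        """Split one block's (run, level) VLC-coded DCT symbols into a
        high-priority and a low-priority partition. The low-frequency
        coefficients, which matter most to reconstructed quality, go to
        the high-priority stream; frac_high is tuned to match the bit
        budget of the stronger channel code."""
        cut = max(1, int(len(vlc_coeffs) * frac_high))
        return vlc_coeffs[:cut], vlc_coeffs[cut:]

    # Zigzag-ordered symbols for one block (DC coefficient first):
    block = [(0, 35), (0, 7), (1, -3), (0, 2), (3, 1), (0, -1)]
    high, low = split_block(block)
    print("high-priority:", high)
    print("low-priority :", low)
    ```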

  6. Protection: clarifying the concept for use in nursing practice.

    PubMed

    Lorenz, Susan G

    2007-01-01

    The protection of patients is integral to any healthcare setting. Healthcare organizations are increasingly held accountable for preventable medical errors, attitudes toward safety, communication among all levels of providers, collaborative practices, and recognition of risks. The concept of protection is inherent in nursing practice. It provides a framework that further defines healthcare providers' roles in meeting these imperatives. The scope of protection is considered at both the global and the individual level. Nurses protect patients from environmental hazards, from themselves, and from any perceived threat. In this analysis of the phenomenon, the concept is clarified, and an evidence-based approach to protection is used for theory development and concept measurement.

  7. Integrator Windup Protection-Techniques and a STOVL Aircraft Engine Controller Application

    NASA Technical Reports Server (NTRS)

    KrishnaKumar, K.; Narayanaswamy, S.

    1997-01-01

    Integrators are included in the feedback loop of a control system to eliminate steady state errors in the commanded variables. The integrator windup problem arises if the control actuators encounter operational limits before the steady state errors are driven to zero by the integrator. The typical effects of windup are large system oscillations, high steady state error, and a delayed system response following the windup. In this study, methods to prevent integrator windup are examined to provide Integrator Windup Protection (IWP) for an engine controller of a Short Take-Off and Vertical Landing (STOVL) aircraft. A unified performance index is defined to optimize the performance of the Conventional Anti-Windup (CAW) and the Modified Anti-Windup (MAW) methods. A modified Genetic Algorithm search procedure with stochastic parameter encoding is implemented to obtain the optimal parameters of the CAW scheme. The advantages and drawbacks of the CAW and MAW techniques are discussed, and recommendations are made for the choice of the IWP scheme, given some characteristics of the system.
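
    A minimal sketch of the conventional anti-windup idea (back-calculation) for a PI controller; the gains, limits, and time step are illustrative, not the STOVL engine controller's values. When the actuator saturates, the difference between saturated and unsaturated commands is fed back to bleed off the integrator.

    ```python
    def pi_step(error, state, kp=1.0, ki=0.5, dt=0.01,
                u_min=-1.0, u_max=1.0, kaw=1.0):
        """One step of a PI controller with back-calculation anti-windup."""
        integ = state + ki * error * dt          # integrate the error
        u_raw = kp * error + integ               # unsaturated command
        u_sat = min(max(u_raw, u_min), u_max)    # actuator limits
        # Back-calculation: discharge the integrator while saturated.
        integ += kaw * (u_sat - u_raw) * dt
        return u_sat, integ

    state = 0.0
    for k in range(5):       # sustained large error would wind up a plain PI
        u, state = pi_step(error=2.0, state=state)
        print(f"step {k}: u = {u:+.3f}, integrator = {state:+.4f}")
    ```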

  8. Practicality of performing medical procedures in chemical protective ensembles.

    PubMed

    Garner, Alan; Laurence, Helen; Lee, Anna

    2004-04-01

    To determine whether certain life-saving medical procedures can be successfully performed while wearing different levels of personal protective equipment (PPE), and whether these procedures can be performed in a clinically useful time frame. We assessed the capability of eight medical personnel to perform airway maintenance and antidote administration procedures on manikins, in all four described levels of PPE. The levels are: Level A--a fully encapsulated chemically resistant suit; Level B--a chemically resistant suit, gloves and boots with a full-faced positive pressure supplied air respirator; Level C--a chemically resistant splash suit, boots and gloves with an air-purifying positive or negative pressure respirator; Level D--a work uniform. Times in seconds to inflate the lungs of the manikin with bag-valve-mask, laryngeal mask airway (LMA) and endotracheal tube (ETT) were determined, as was the time to secure LMAs and ETTs with either tape or linen ties. Time to insert a cannula in a manikin was also determined. There was a significant difference in time taken to perform procedures in differing levels of personal protective equipment (F(21,72) = 1.75, P = 0.04). Significant differences were found in: time to lung inflation using an endotracheal tube (A vs. C mean difference and standard error 75.6 +/- 23.9 s, P = 0.03; A vs. D mean difference and standard error 78.6 +/- 23.9 s, P = 0.03); time to insert a cannula (A vs. D mean difference and standard error 63.6 +/- 11.1 s, P < 0.001; C vs. D mean difference and standard error 40.0 +/- 11.1 s, P = 0.01). A significantly greater time to complete procedures was documented in Level A PPE (fully encapsulated suits) compared with Levels C and D. There was, however, no significant difference in times between Level B and Level C. The common practice of equipping hospital and medical staff with only Level C protection should be re-evaluated.

  9. Five-wave-packet quantum error correction based on continuous-variable cluster entanglement

    PubMed Central

    Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi

    2015-01-01

    Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used as five encoding channels. In particular, in our encoding scheme the information of the input state is distributed on only three of the five channels, and thus any error appearing in the remaining two channels never affects the output state, i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395

  10. Designing an algorithm to preserve privacy for medical record linkage with error-prone data.

    PubMed

    Pal, Doyel; Chen, Tingting; Zhong, Sheng; Khethavath, Praveen

    2014-01-20

    Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In records linkage, protecting the patients' privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are only restricted to textual data. To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, which allows the error-prone data to be correctly matched if the distance between the two records is below a threshold. We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers, which meets the regulations of the Health Insurance Portability and Accountability Act. It does not require a third party and it is secure in that neither entity can learn the records in the other's database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm. We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other's database.
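
    As a plain illustration of the matching rule at the heart of the Error-Tolerant Linking Algorithm (the cryptographic, privacy-preserving machinery is omitted here), a record pair can be declared a match when the distance between their numerical attributes falls below a threshold. The field values and threshold below are hypothetical.

    ```python
    def tolerant_match(rec_a, rec_b, threshold=2.0):
        """Match two numerical records if their Euclidean distance is
        below a threshold. This plaintext sketch shows only the distance
        test; the real system evaluates it under encryption."""
        dist = sum((a - b) ** 2 for a, b in zip(rec_a, rec_b)) ** 0.5
        return dist < threshold

    # Same patient, one field off by a typo (170 vs 171 cm):
    print(tolerant_match([1965.0, 170.0, 72.5], [1965.0, 171.0, 72.5]))  # True
    print(tolerant_match([1965.0, 170.0, 72.5], [1982.0, 158.0, 60.0]))  # False
    ```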

  11. Spatial autocorrelation among automated geocoding errors and its effects on testing for disease clustering

    PubMed Central

    Li, Jie; Fang, Xiangming

    2010-01-01

    Automated geocoding of patient addresses is an important data assimilation component of many spatial epidemiologic studies. Inevitably, the geocoding process results in positional errors. Positional errors incurred by automated geocoding tend to reduce the power of tests for disease clustering and otherwise affect spatial analytic methods. However, there are reasons to believe that the errors may often be positively spatially correlated and that this may mitigate their deleterious effects on spatial analyses. In this article, we demonstrate explicitly that the positional errors associated with automated geocoding of a dataset of more than 6000 addresses in Carroll County, Iowa are spatially autocorrelated. Furthermore, through two simulation studies of disease processes, including one in which the disease process is overlain upon the Carroll County addresses, we show that spatial autocorrelation among geocoding errors maintains the power of two tests for disease clustering at a level higher than that which would occur if the errors were independent. Implications of these results for cluster detection, privacy protection, and measurement-error modeling of geographic health data are discussed. PMID:20087879
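
    A quick way to check for the kind of spatial autocorrelation the authors report is Moran's I computed over the positional errors. The sketch below uses synthetic coordinates and a smoothly varying error field; the inverse-distance weights and all parameters are illustrative.

    ```python
    import numpy as np

    def morans_i(values, coords):
        """Moran's I with inverse-distance weights. Positive I means
        nearby addresses tend to share similar positional errors."""
        x = np.asarray(values, dtype=float)
        c = np.asarray(coords, dtype=float)
        d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
        w = np.where(d > 0, 1.0 / np.where(d == 0, np.inf, d), 0.0)
        z = x - x.mean()
        n = len(x)
        return (n / w.sum()) * (z @ w @ z) / (z @ z)

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 10, size=(200, 2))
    # A smoothly varying field yields spatially correlated "errors":
    errors = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=200)
    print("Moran's I:", round(morans_i(errors, coords), 3))  # well above 0
    ```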

  12. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  13. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  14. The Effect of Response Time on Conjoint Analysis Estimates of Rainforest Protection Values

    Treesearch

    Thomas Holmes; Keith Alger; Christian Zinkhan; D. Evan Mercer

    1998-01-01

    This paper reports the first estimates of willingness to pay (WTP) for rainforest protection in the threatened Atlantic Coastal Forest ecosystem in northeastern Brazil. Conjoint analysis data were collected from Brazilian tourists for recreational bundles with complex prices. An ordered probit model with time-varying parameters and heteroskedastic errors was...

  15. Reply to "Comment on `Protecting bipartite entanglement by quantum interferences' "

    NASA Astrophysics Data System (ADS)

    Das, Sumanta; Agarwal, G. S.

    2018-03-01

    In a recent Comment Nair and Arun, Phys. Rev. A 97, 036301 (2018), 10.1103/PhysRevA.97.036301, it was concluded that the two-qubit entanglement protection reported in our work [Das and Agarwal, Phys. Rev. A 81, 052341 (2010), 10.1103/PhysRevA.81.052341] is erroneous. While we acknowledge the error in analytical results on concurrence when dipole matrix elements were unequal, the essential conclusions on entanglement protection are not affected.

  16. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    PubMed

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.

  17. Implications of Spatial Data Variations for Protected Areas Management: An Example from East Africa

    NASA Astrophysics Data System (ADS)

    Dowhaniuk, Nicholas; Hartter, Joel; Ryan, Sadie J.

    2014-09-01

    Geographic information systems and remote sensing technologies have become important tools for visualizing conservation management and developing solutions to problems associated with conservation. When multiple organizations separately develop spatial data representations of protected areas, implicit error arises due to variation between data sets. We used boundary data produced by three conservation organizations (International Union for the Conservation of Nature, World Resource Institute, and Uganda Wildlife Authority), for seven Ugandan parks, to study variation in the size represented and the location of boundaries. We found variation in the extent of overlapping total area encompassed by the three data sources, ranging from minuscule (0.4 %) differences to quite large ones (9.0 %). To underscore how protected area boundary discrepancies may have implications for protected area management, we used a landcover classification, defining crop, shrub, forest, savanna, and grassland. The total area in the different landcover classes varied most in smaller protected areas (those less than 329 km²), with forest and cropland area estimates varying up to 65 %. The discrepancies introduced by boundary errors could, in this hypothetical case, generate erroneous findings and could have a significant impact on conservation, such as local-scale management for encroachment and larger-scale assessments of deforestation.

  18. Implications of spatial data variations for protected areas management: an example from East Africa.

    PubMed

    Dowhaniuk, Nicholas; Hartter, Joel; Ryan, Sadie J

    2014-09-01

    Geographic information systems and remote sensing technologies have become important tools for visualizing conservation management and developing solutions to problems associated with conservation. When multiple organizations separately develop spatial data representations of protected areas, implicit error arises due to variation between data sets. We used boundary data produced by three conservation organizations (International Union for the Conservation of Nature, World Resource Institute, and Uganda Wildlife Authority), for seven Ugandan parks, to study variation in the size represented and the location of boundaries. We found variation in the extent of overlapping total area encompassed by the three data sources, ranging from minuscule (0.4 %) differences to quite large ones (9.0 %). To underscore how protected area boundary discrepancies may have implications for protected area management, we used a landcover classification, defining crop, shrub, forest, savanna, and grassland. The total area in the different landcover classes varied most in smaller protected areas (those less than 329 km²), with forest and cropland area estimates varying up to 65 %. The discrepancies introduced by boundary errors could, in this hypothetical case, generate erroneous findings and could have a significant impact on conservation, such as local-scale management for encroachment and larger-scale assessments of deforestation.

  19. Non-commuting two-local Hamiltonians for quantum error suppression

    NASA Astrophysics Data System (ADS)

    Jiang, Zhang; Rieffel, Eleanor G.

    2017-04-01

    Physical constraints make it challenging to implement and control many-body interactions. For this reason, designing quantum information processes with Hamiltonians consisting of only one- and two-local terms is a worthwhile challenge. Enabling error suppression with two-local Hamiltonians is particularly challenging. A no-go theorem of Marvian and Lidar (Phys Rev Lett 113(26):260504, 2014) demonstrates that, even allowing particles with high Hilbert space dimension, it is impossible to protect quantum information from single-site errors by encoding in the ground subspace of any Hamiltonian containing only commuting two-local terms. Here, we get around this no-go result by encoding in the ground subspace of a Hamiltonian consisting of non-commuting two-local terms arising from the gauge operators of a subsystem code. Specifically, we show how to protect stored quantum information against single-qubit errors using a Hamiltonian consisting of sums of the gauge generators from Bacon-Shor codes (Bacon in Phys Rev A 73(1):012340, 2006) and generalized-Bacon-Shor code (Bravyi in Phys Rev A 83(1):012320, 2011). Our results imply that non-commuting two-local Hamiltonians have more error-suppressing power than commuting two-local Hamiltonians. While far from providing full fault tolerance, this approach improves the robustness achievable in near-term implementable quantum storage and adiabatic quantum computations, reducing the number of higher-order terms required to encode commonly used adiabatic Hamiltonians such as the Ising Hamiltonians common in adiabatic quantum optimization and quantum annealing.

  20. Reliability of Memories Protected by Multibit Error Correction Codes Against MBUs

    NASA Astrophysics Data System (ADS)

    Ming, Zhu; Yi, Xiao Li; Chang, Liu; Wei, Zhang Jian

    2011-02-01

    As technology scales, more and more memory cells can be placed on a die. Therefore, the probability that a single event induces multiple bit upsets (MBUs) in adjacent memory cells increases. Generally, multibit error correction codes (MECCs) are effective approaches to mitigating MBUs in memories. In order to evaluate the robustness of protected memories, reliability models have been widely studied. Instead of irradiation experiments, such models can be used to quickly evaluate the reliability of memories early in the design. To build an accurate model, several situations should be considered. First, when MBUs are present in memories, the errors induced by several events may overlap each other, which is more frequent than in the single event upset (SEU) case. Furthermore, radiation experiments show that the probability of MBUs strongly depends on the angle of the radiation event. However, reliability models that consider both the overlap of multiple bit errors and the angle of the radiation event have not been proposed in the present literature. In this paper, a more accurate model of memories with MECCs is presented. Both the overlap of multiple bit errors and event angles are considered in the model, which produces a more precise calculation of the mean time to failure (MTTF) for memory systems under MBUs. In addition, memories with and without scrubbing are analyzed in the proposed model. Finally, we evaluate the reliability of memories under MBUs in Matlab. The simulation results verify the validity of the proposed model.
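
    The flavor of such a reliability model can be conveyed by a small Monte Carlo sketch: upset events arrive in time, occasionally flipping several bits at once, and a word fails when it accumulates more errors than the code corrects. All rates and geometry below are invented for illustration, and the angle dependence the paper models is ignored.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    WORDS = 4096                    # memory organized as ECC words
    EVENT_RATE = 2.0                # upset events per hour (illustrative)
    P_MBU, MBU_SPAN = 0.3, 3        # chance an event upsets a run of cells

    def time_to_failure(correctable_per_word=1):
        """Simulate events until some word accumulates more flipped bits
        than the ECC can correct. Overlap of several events in one word,
        which the paper's model accounts for, emerges naturally here.
        For simplicity an MBU span is assumed to land in a single word."""
        errors = np.zeros(WORDS, dtype=int)
        t = 0.0
        while True:
            t += rng.exponential(1.0 / EVENT_RATE)   # next event time
            w = rng.integers(WORDS)                   # struck word
            errors[w] += MBU_SPAN if rng.random() < P_MBU else 1
            if errors[w] > correctable_per_word:
                return t

    samples = [time_to_failure() for _ in range(500)]
    print(f"estimated MTTF ~ {np.mean(samples):.2f} hours (single-error-correcting ECC)")
    ```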

  1. Recent study, but not retrieval, of knowledge protects against learning errors.

    PubMed

    Mullet, Hillary G; Umanath, Sharda; Marsh, Elizabeth J

    2014-11-01

    Surprisingly, people incorporate errors into their knowledge bases even when they have the correct knowledge stored in memory (e.g., Fazio, Barber, Rajaram, Ornstein, & Marsh, 2013). We examined whether heightening the accessibility of correct knowledge would protect people from later reproducing misleading information that they encountered in fictional stories. In Experiment 1, participants studied a series of target general knowledge questions and their correct answers either a few minutes (high accessibility of knowledge) or 1 week (low accessibility of knowledge) before exposure to misleading story references. In Experiments 2a and 2b, participants instead retrieved the answers to the target general knowledge questions either a few minutes or 1 week before the rest of the experiment. Reading the relevant knowledge directly before the story-reading phase protected against reproduction of the misleading story answers on a later general knowledge test, but retrieving that same correct information did not. Retrieving stored knowledge from memory might actually enhance the encoding of relevant misinformation.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Paul A.; Cooper, Candice Frances; Burnett, Damon J.

    Light body armor development for the warfighter is based on trial-and-error testing of prototype designs against ballistic projectiles. Torso armor testing against blast is virtually nonexistent but necessary to ensure adequate protection against injury to the heart and lungs. In this report, we discuss the development of a high-fidelity human torso model, its merging with the existing Sandia Human Head-Neck Model, and development of the modeling and simulation (M&S) capabilities necessary to simulate wound injury scenarios. Using the new Sandia Human Torso Model, we demonstrate the advantage of virtual simulation in the investigation of wound injury as it relates to the warfighter experience. We present the results of virtual simulations of blast loading and ballistic projectile impact to the torso with and without notional protective armor. In this manner, we demonstrate the advantages of applying a modeling and simulation approach to the investigation of wound injury and relative merit assessments of protective body armor without the need for trial-and-error testing.

  3. IRBs, conflict and liability: will we see IRBs in court? Or is it when?

    PubMed

    Icenogle, Daniel L

    2003-01-01

    The entire human research infrastructure is under intense and increasing financial pressure. These pressures may have been responsible for several errors in judgment by those responsible for managing human research and protecting human subjects. The result of these errors has been some terrible accidents, some of which have cost the lives of human research volunteers. This, in turn, is producing both increased liability risk for those who manage the various aspects of human research and increasing scrutiny of the capability of the human research protection structure as currently constituted. It is the author's contention that the current structure is fully capable of offering sufficient protection for participants in human research, if Institutional Review Board (IRB) staff and members are given sufficient resources and perform their tasks with sufficient responsibility. The status quo alternative is that IRBs and their members will find themselves at great risk of becoming defendants in lawsuits seeking compensation for damages resulting from human experimentation gone awry.

  4. Multi-qubit gates protected by adiabaticity and dynamical decoupling applicable to donor qubits in silicon

    DOE PAGES

    Witzel, Wayne; Montano, Ines; Muller, Richard P.; ...

    2015-08-19

    In this paper, we present a strategy for producing multiqubit gates that promise high fidelity with minimal tuning requirements. Our strategy combines gap protection from the adiabatic theorem with dynamical decoupling in a complementary manner. Energy-level transition errors are suppressed by adiabaticity, and remaining phase errors are mitigated via dynamical decoupling. This is a powerful way to divide and conquer the various error channels. In order to accomplish this without violating a no-go theorem regarding black-box dynamically corrected gates [Phys. Rev. A 80, 032314 (2009)], we require a robust operating point (sweet spot) in control space where the qubits interact with little sensitivity to noise. There are also energy gap requirements for effective adiabaticity. We apply our strategy to an architecture in Si with P donors where we assume we can shuttle electrons between different donors. Electron spins act as mobile ancillary qubits and P nuclear spins act as long-lived data qubits. Furthermore, this system can have a very robust operating point where the electron spin is bound to a donor in the quadratic Stark shift regime. High fidelity single qubit gates may be performed using well-established global magnetic resonance pulse sequences. Single electron-spin preparation and measurement has also been demonstrated. Thus, putting this all together, we present a robust universal gate set for quantum computation.

  5. Jeffrey Baldwin: A Thematic Analysis of Media Coverage and Implications for Social Work Practice

    ERIC Educational Resources Information Center

    Choate, Peter W.

    2017-01-01

    Jeffrey Baldwin died in 2002 in the care of his maternal grandparents. The case received intense media attention at various times over an almost eight-year period. Along with other public documents, the media coverage permits an analysis of the practice errors by Child Protection Services that are related to the failure to protect Jeffrey. Nine…

  6. Memory and the Moses illusion: failures to detect contradictions with stored knowledge yield negative memorial consequences.

    PubMed

    Bottoms, Hayden C; Eslick, Andrea N; Marsh, Elizabeth J

    2010-08-01

    Although contradictions with stored knowledge are common in daily life, people often fail to notice them. For example, in the Moses illusion, participants fail to notice errors in questions such as "How many animals of each kind did Moses take on the Ark?" despite later showing knowledge that the Biblical reference is to Noah, not Moses. We examined whether error prevalence affected participants' ability to detect distortions in questions, and whether this in turn had memorial consequences. Many of the errors were overlooked, but participants were better able to catch them when they were more common. More generally, the failure to detect errors had negative memorial consequences, increasing the likelihood that the errors were used to answer later general knowledge questions. Methodological implications of this finding are discussed, as it suggests that typical analyses likely underestimate the size of the Moses illusion. Overall, answering distorted questions can yield errors in the knowledge base; most importantly, prior knowledge does not protect against these negative memorial consequences.

  7. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure to estimate the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects has a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error with the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
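
    The general idea can be illustrated with a simplified rating of the form R = mean − K·SD (the actual NRR computation also involves A- and C-weighted noise spectra). Propagation of errors then gives an analytic standard error that can be checked against Monte Carlo simulation of subject attenuations; all numbers below are illustrative.

    ```python
    import numpy as np

    n, mean_att, sd_att = 20, 25.0, 5.0   # subjects; mean/SD of REAT in dB
    K = 2.0                               # SD multiple subtracted by the rating

    # Propagation of errors: Var(R) ~ Var(mean) + K^2 * Var(SD), with
    # Var(mean) = s^2/n and, for normal data, Var(SD) ~ s^2/(2(n-1)).
    se_analytic = np.sqrt(sd_att**2 / n + K**2 * sd_att**2 / (2 * (n - 1)))

    # Monte Carlo check against simulated subject panels:
    rng = np.random.default_rng(1)
    sims = rng.normal(mean_att, sd_att, size=(20000, n))
    ratings = sims.mean(axis=1) - K * sims.std(axis=1, ddof=1)
    print(f"analytic SE = {se_analytic:.3f} dB, simulated SE = {ratings.std():.3f} dB")
    ```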

  8. Fault tolerant computing: A preamble for assuring viability of large computer systems

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1977-01-01

    The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.
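
    As a concrete example of the algebraic-code idea (illustrative, not the Phoenix system's actual code), a Hamming(7,4) code protects a 4-bit memory nibble and corrects any single bit flip:

    ```python
    import numpy as np

    # Hamming(7,4) generator and parity-check matrices, G = [I | P], H = [P^T | I].
    G = np.array([[1,0,0,0,1,1,0],
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    data = np.array([1, 0, 1, 1])
    codeword = data @ G % 2            # encode the nibble

    corrupted = codeword.copy()
    corrupted[5] ^= 1                  # single bit flip in memory

    syndrome = H @ corrupted % 2
    # A nonzero syndrome matches the column of H at the error position.
    err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
    corrupted[err_pos] ^= 1            # correct the flipped bit
    print("corrected:", np.array_equal(corrupted, codeword))   # True
    ```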

  9. Medicaid/CHIP Program; Medicaid Program and Children's Health Insurance Program (CHIP); Changes to the Medicaid Eligibility Quality Control and Payment Error Rate Measurement Programs in Response to the Affordable Care Act. Final rule.

    PubMed

    2017-07-05

    This final rule updates the Medicaid Eligibility Quality Control (MEQC) and Payment Error Rate Measurement (PERM) programs based on the changes to Medicaid and the Children's Health Insurance Program (CHIP) eligibility under the Patient Protection and Affordable Care Act. This rule also implements various other improvements to the PERM program.

  10. Basic Studies on High Pressure Air Plasmas

    DTIC Science & Technology

    2006-08-30

    which must be added a 1.5 month salary to A. Bugayev for assistance in laser and optic techniques. Part II Technical report: Plasma-induced phase shift...two-wavelength heterodyne interferometry applied to atmospheric pressure air plasma II.1.A. Plasma-induced phase shift - Electron density...a driver, since the error on the frequency leads to an error on the phase shift. (c) Optical elements: Mirrors. Protected mirrors must be used to stand

  11. Designing an Algorithm to Preserve Privacy for Medical Record Linkage With Error-Prone Data

    PubMed Central

    Pal, Doyel; Chen, Tingting; Khethavath, Praveen

    2014-01-01

    Background Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In records linkage, protecting the patients’ privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are only restricted to textual data. Objective To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. Methods To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, which allows the error-prone data to be correctly matched if the distance between the two records is below a threshold. Results We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers, which meets the regulations of the Health Insurance Portability and Accountability Act. It does not require a third party and it is secure in that neither entity can learn the records in the other’s database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm. We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Conclusions Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other’s database. PMID:25600786

  12. Multi-bits error detection and fast recovery in RISC cores

    NASA Astrophysics Data System (ADS)

    Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu

    2015-11-01

    Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multiple-bit upsets (MBUs) are ever more frequent due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot well balance critical-path delay, area, and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP can detect particle-induced soft errors with up to 100% coverage and recover from nearly 95% of them; the remaining 5% enter a specific trap.

  13. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate the problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the "blocks" of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
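
    The Lagrangian rate-selection step can be sketched independently of the codec: for a multiplier lambda, each packet picks the channel-code rate minimizing distortion + lambda·rate, and lambda is bisected until the allocation meets the rate budget. The candidate rates, distortion model, and budget below are invented for illustration.

    ```python
    rates = [1/3, 1/2, 2/3, 4/5]               # candidate channel-code rates

    def expected_distortion(pkt_importance, r):
        # Toy model: weaker codes (higher rate) leave more residual loss.
        loss_prob = 0.02 * (r / rates[0]) ** 3
        return pkt_importance * loss_prob

    packets = [100.0, 40.0, 15.0, 5.0]         # packet importance, decreasing
    budget = 6.0                               # total channel-symbol budget

    def allocate(lmbda):
        picks = []
        for imp in packets:
            costs = [(expected_distortion(imp, r) + lmbda * (1.0 / r), r)
                     for r in rates]
            picks.append(min(costs)[1])        # rate minimizing D + lambda*R
        return picks

    # Bisect lambda so the allocation just fits the budget.
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if sum(1.0 / r for r in allocate(mid)) > budget:
            lo = mid                           # too many symbols: raise lambda
        else:
            hi = mid
    picks = allocate(hi)
    print("chosen code rates:", [round(r, 3) for r in picks],
          "total symbols:", round(sum(1.0 / r for r in picks), 2))
    # Important packets end up with stronger (lower-rate) codes: UEP emerges.
    ```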

  14. Strain gage installation and survivability on geosynthetics used in flexible pavements

    NASA Astrophysics Data System (ADS)

    Brooks, Jeremy A.

    The use of foil-type strain gages on geosynthetics is poorly documented. In addition, very few individuals are versed in proper installation techniques or calibration methods. Because of the limited number of knowledgeable technicians, there is no information regarding the susceptibility of these gages to errors in installation by inexperienced installers. Also lacking in the documentation related to the use of foil-type strain gages on geosynthetics is the survivability of the gages under field conditions. This research documented the procedures for installation, calibration, and survivability used by the project team to instrument a full-scale field installation in Marked Tree, AR. This research also addressed sensitivity to installation errors on both geotextile and geogrid.
    To document the process of gage installation, an experienced technician, Mr. Joe Ables, formerly of the USACE Waterways Experiment Station, was consulted. His techniques were combined with those found in the related literature and those developed by the research team to develop processes that were adaptable to multiple gage geometries and parent geosynthetics. These processes were described and documented step by step with accompanying photographs, which should allow virtually anyone with basic electronics knowledge to install these gages properly.
    Calibration of the various geosynthetic/strain gage combinations was completed using wide-width tensile testing on multiple samples of each material. The tensile testing process was documented and analyzed using digital photography to measure strain at the strain gage itself. Calibration factors were developed for each geosynthetic used in the full-scale field testing. In addition, the process was thoroughly documented to allow future researchers to calibrate additional strain gage and geosynthetic combinations.
    The sensitivity of the strain gages to installation errors was analyzed using wide-width tensile testing and digital photography to determine the variability of the data collected from gages with noticeable installation errors as compared to properly installed gages. Induced errors varied with the parent geosynthetic material but included excessive and minimal waterproofing, gage rotation, gage shift, excessive and minimal adhesive, and excessive and minimal adhesive impregnation loads. The results of this work indicated that minor errors in geotextile gage installation that are noticeable and preventable by an experienced installer have no statistically significant effect on the data recorded during the life span of geotextile gages; however, the life span of the gage may be noticeably shortened by such errors. Geogrid gage installation errors were found to cause statistically significant changes in the data recorded from improper installations.
    Gage survivability was analyzed using small-scale test sections instrumented and loaded similarly to the field conditions anticipated during traditional roadway construction. Five methods of protection were tested for both geotextile and geogrid: a sand blanket, inversion, semi-hemispherical PVC sections, neoprene mats, and geosynthetic wick drain. Based on this testing, neoprene mats were selected to protect gages installed on geotextile, and wick drains were selected to protect gages installed on geogrid. These methods resulted in survivability rates of 73% and 100%, respectively, in the full-scale installation.
    This research and documentation may be used to train technicians to install and calibrate geosynthetic-mounted foil-type strain gages. In addition, technicians should be able to install gages in the field with a high probability of gage survivability using the recommended protection methods.

  15. Market mechanisms protect the vulnerable brain.

    PubMed

    Ramchandran, Kanchna; Nayakankuppam, Dhananjay; Berg, Joyce; Tranel, Daniel; Denburg, Natalie L

    2011-07-01

    Markets are mechanisms of social exchange, intended to facilitate trading. However, the question remains as to whether markets help or hurt individuals with decision-making deficits, such as those frequently encountered in cognitive aging. Essential for predicting future gains and losses in monetary and social domains, the striatal nuclei of the brain undergo structural, neurochemical, and functional decline with age. We correlated the efficacy of market mechanisms with dorsal striatal decline in an aging population, using market-based trading in the context of the 2008 U.S. Presidential Elections (primary cycle). Impaired decision-makers displayed higher prediction error (the difference between their prediction and the actual outcome). Lower in vivo caudate volume was also associated with higher prediction error. Importantly, market-based trading protected older adults with lower caudate volume to a greater extent from their own poorly calibrated predictions. Counter to the traditional public perception of the market as a fickle, risky proposition in which vulnerable traders are the most likely to be burned, we suggest that market-based mechanisms protect individuals with brain-based decision-making vulnerabilities. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Market mechanisms protect the vulnerable brain

    PubMed Central

    Ramchandran, Kanchna; Nayakankuppam, Dhananjay; Berg, Joyce; Tranel, Daniel

    2011-01-01

    Markets are mechanisms of social exchange, intended to facilitate trading. However, the question remains as to whether markets help or hurt individuals with decision-making deficits, such as those frequently encountered in cognitive aging. Essential for predicting future gains and losses in monetary and social domains, the striatal nuclei of the brain undergo structural, neurochemical, and functional decline with age. We correlated the efficacy of market mechanisms with dorsal striatal decline in an aging population, using market-based trading in the context of the 2008 U.S. Presidential Elections (primary cycle). Impaired decision-makers displayed higher prediction error (the difference between their prediction and the actual outcome). Lower in vivo caudate volume was also associated with higher prediction error. Importantly, market-based trading protected older adults with lower caudate volume to a greater extent from their own poorly calibrated predictions. Counter to the traditional public perception of the market as a fickle, risky proposition in which vulnerable traders are the most likely to be burned, we suggest that market-based mechanisms protect individuals with brain-based decision-making vulnerabilities. PMID:21600226

  17. Autonomous Quantum Error Correction with Application to Quantum Metrology

    NASA Astrophysics Data System (ADS)

    Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.

    2017-04-01

    We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin flips or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  18. Radiation-Hardened Solid-State Drive

    NASA Technical Reports Server (NTRS)

    Sheldon, Douglas J.

    2010-01-01

    A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application-specific integrated circuit) or an FPGA (field-programmable gate array). Specific error-handling and data-management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
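
    The triplication-plus-CRC management described above can be pictured with a small sketch. This is only an illustration of the general technique (majority voting over three COTS replicas, with CRC counters held in rad-hard memory); the function names and layout are invented here, not taken from the actual ASIC/FPGA design.

```python
# Illustrative sketch: triple-modular redundancy with CRC-based scrubbing.
import zlib

def vote(copies: list[bytes]) -> bytes:
    # Bitwise majority vote over three replicas of the same COTS page.
    a, b, c = copies
    return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

def scrub(copies: list[bytes], stored_crcs: list[int]) -> list[bytes]:
    # Compare each replica's CRC with the counter kept in rad-hard NVM;
    # rewrite (regenerate) any replica that disagrees with the vote.
    good = vote(copies)
    return [good if zlib.crc32(c) != crc or c != good else c
            for c, crc in zip(copies, stored_crcs)]

# Example: a single-event upset flips one bit in replica 1.
page = b"\x00" * 8
copies = [page, b"\x00\x04" + b"\x00" * 6, page]
crcs = [zlib.crc32(page)] * 3               # counters held in rad-hard memory
print(scrub(copies, crcs) == [page, page, page])  # True: error scrubbed out
```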

  19. Robust quantum logic in neutral atoms via adiabatic Rydberg dressing

    DOE PAGES

    Keating, Tyler; Cook, Robert L.; Hankin, Aaron M.; ...

    2015-01-28

    We study a scheme for implementing a controlled-Z (CZ) gate between two neutral-atom qubits based on the Rydberg blockade mechanism in a manner that is robust to errors caused by atomic motion. By employing adiabatic dressing of the ground electronic state, we can protect the gate from decoherence due to random phase errors that typically arise because of atomic thermal motion. In addition, the adiabatic protocol allows for a Doppler-free configuration that involves counterpropagating lasers in a σ+/σ− orthogonal polarization geometry that further reduces motional errors due to Doppler shifts. The residual motional error is dominated by dipole-dipole forces acting on doubly excited Rydberg atoms when the blockade is imperfect. As a result, for reasonable parameters, with qubits encoded into the clock states of 133Cs, we predict that our protocol could produce a CZ gate in < 10 μs with error probability on the order of 10⁻³.

  20. RAIM availability for supplemental GPS navigation

    DOT National Transportation Integrated Search

    1992-06-29

    This paper examines GPS receiver autonomous integrity monitoring (RAIM) availability for supplemental navigation based on the approximate radial-error protection (ARP) method. This method applies ceiling levels for the ARP figure of merit to screen o...

  1. The Seven Deadly Sins of Online Microcomputing.

    ERIC Educational Resources Information Center

    King, Alan

    1989-01-01

    Offers suggestions for avoiding common errors in online microcomputer use. Areas discussed include learning the basics; hardware protection; backup options; hard disk organization; software selection; file security; and the use of dedicated communications lines. (CLB)

  2. Protective gloves for use in high-risk patients: how much do they affect the dexterity of the surgeon?

    PubMed Central

    Phillips, A. M.; Birch, N. C.; Ribbans, W. J.

    1997-01-01

    Twenty-five orthopaedic surgeons underwent eight motor and sensory tests while using four different glove combinations and without gloves. As well as single and double latex, surgeons wore a simple Kevlar glove with latex inside and outside and then wore a Kevlar and Medak glove with latex inside and outside, as recommended by the manufacturers. The effect of learning with each sequence was neutralised by randomising the glove order. The time taken to complete each test was recorded and, where appropriate, error rates were noted. Simple sensory tests took progressively longer to perform so that using the thickest glove combination led to the completion times being doubled. Error rates increased significantly. Tests of stereognosis also took longer and use of the thickest glove combination caused these tests to take three times as long on average. Error rates again increased significantly. However, prolongation of motor tasks was less marked. We conclude that, armed with this quantitative analysis of sensitivity and dexterity impairment, surgeons can judge the relative difficulties that may be incurred as a result of wearing the gloves against the benefits that they offer in protection. PMID:9135240

  3. Physician's error: medical or legal concept?

    PubMed

    Mujovic-Zornic, Hajrija M

    2010-06-01

    This article deals with the common term covering the different physician's errors that often happen in the daily practice of health care. The author begins with the term medical malpractice, defined broadly as the practice of unjustified acts, or failures to act, on the part of a physician or other health care professional, which results in harm to the patient. It is a common term that includes many types of medical errors, especially physician's errors. The author also discusses the concept of the physician's error in particular, which is no longer understood only in the traditional way, as a classic error of doing something manually wrong without the necessary skills (the medical concept), but as an error that violates the patient's basic rights and has legal consequences (the legal concept). In every case, the essential element of liability is to establish this error as a breach of the physician's duty. The first point to note is that the standard of procedure and the standard of due care against which the physician will be judged is not that of the ordinary reasonable man, who enjoys no medical expertise. The court's decision should give the final answer and legal qualification in each concrete case. The author's conclusion is that higher protection of human rights in the area of health equally demands a broader concept of the physician's error, with the accent on its legal subject matter.

  4. Estimating the designated use attainment decision error rates of US Environmental Protection Agency's proposed numeric total phosphorus criteria for Florida, USA, colored lakes.

    PubMed

    McLaughlin, Douglas B

    2012-01-01

    The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of a lake's geometric mean total P concentration to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors. Copyright © 2011 SETAC.
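
    As a rough illustration of how such decision error rates can be estimated, the following Monte Carlo sketch assumes a hypothetical log-linear TP to chlorophyll a relationship (the coefficients b0, b1, sigma and the TP criterion are invented values, not USEPA's) and counts misclassifications against the 20 µg/L chlorophyll a threshold.

```python
# Hypothetical Monte Carlo sketch of nutrient-criterion decision error rates.
import numpy as np

rng = np.random.default_rng(1)
b0, b1, sigma = 0.9, 0.75, 0.35            # assumed regression parameters
tp_criterion, chl_threshold = 30.0, 20.0   # ug/L (criterion value invented)

tp = rng.lognormal(mean=np.log(25), sigma=0.6, size=100_000)
chl = np.exp(b0 + b1 * np.log(tp) + rng.normal(0, sigma, tp.size))

impaired = chl > chl_threshold             # "true" designated-use status
flagged = tp > tp_criterion                # decision based on the TP criterion

type_i = np.mean(flagged & ~impaired) / np.mean(~impaired)   # false alarm
type_ii = np.mean(~flagged & impaired) / np.mean(impaired)   # missed impairment
print(f"Type I ~ {type_i:.2f}, Type II ~ {type_ii:.2f}")
```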

  5. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    PubMed

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions, the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
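
    The "promising interim result" criterion can be made concrete with a short sketch: conditional power under the current trend for normally distributed observations, computed from the interim z-statistic at information fraction t. The formula below is the standard Brownian-motion expression, shown only as an illustration; the 50% cut-off is the bound cited above.

```python
# Conditional power at an interim analysis, under the current-trend estimate.
from scipy.stats import norm

def conditional_power(z1: float, t: float, alpha: float = 0.025) -> float:
    theta_hat = z1 / t ** 0.5          # drift estimated from the interim data
    b_t = z1 * t ** 0.5                # Brownian value B(t) = z1 * sqrt(t)
    za = norm.ppf(1 - alpha)
    return 1 - norm.cdf((za - b_t - theta_hat * (1 - t)) / (1 - t) ** 0.5)

z1, t = 1.3, 0.5                       # example interim statistic, halfway
cp = conditional_power(z1, t)
print(f"CP = {cp:.2f};",
      "promising: raising n protects type I error" if cp > 0.5
      else "below the 50% promise bound")
```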

  6. The role of usability in the evaluation of accidents: human error or design flaw?

    PubMed

    Correia, Walter; Soares, Marcelo; Barros, Marina; Campos, Fábio

    2012-01-01

    This article aims to highlight the role of consumer product companies in the occurrence and extent of accidents involving these types of products, and how such undesired events act as an agent influencing consumers' and users' decisions to purchase a product of that nature. Drawing on references, interviews, and case studies, the article demonstrates how poorly designed products and errors of design can influence the usage behavior of users, thus leading to accidents, and can also negatively affect the image of a company. The full explanation of these types of questions aims to raise awareness, grounded in reliable usability, among users and consumers in general about the safe use of consumer products, and also to safeguard their rights within a legal system of consumer protection, such as the CDC - Code of Consumer Protection.

  7. A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories

    NASA Astrophysics Data System (ADS)

    Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon

    The BCH (Bose-Chaudhuri-Hocquenghem) code, a class of cyclic block codes, has very strong error-correcting ability, which is vital for performing error protection on a memory system. BCH codes have many kinds of decoding algorithms; among them, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous in that it corrects errors through a simple calculation when the number of errors equals the design value t. However, it becomes problematic when a divisor becomes 0 (division by zero) in the case ν ≠ t. In this paper, the circuit is simplified by the suggested multi-mode hardware architecture prepared for the cases ν = 0-3. First, production cost is lower thanks to the smaller number of gates. Second, the reduced power consumption can lengthen the recharging period. The very low cost and simple datapath make our design a good choice as ECC (Error Correction Code/Circuit) in small-footprint SoC (System-on-Chip) memory systems.
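
    A toy version of the PGZ step in question is sketched below over GF(2⁴) for a binary BCH(15,7), t = 2 code: when the 2×2 syndrome determinant vanishes (ν ≠ t), the decoder steps down to the single-error equations instead of dividing by zero. This illustrates the algorithm's structure only, not the paper's hardware design.

```python
# PGZ sketch for binary BCH(15,7), t = 2, over GF(2^4) with x^4 + x + 1.
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x10:
        x ^= 0x13                      # reduce modulo x^4 + x + 1

for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def gdiv(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

def eval_poly(r, j):                   # r(alpha^j) for a binary word r
    s = 0
    for i, bit in enumerate(r):
        if bit:
            s ^= EXP[(i * j) % 15]
    return s

def syndromes(r):                      # S_j = r(alpha^j), j = 1..4
    return [eval_poly(r, j) for j in range(1, 5)]

def pgz(S):
    S1, S2, S3, S4 = S
    det = gmul(S1, S3) ^ gmul(S2, S2)  # det [[S2, S1], [S3, S2]] in char 2
    if det:                            # nu = 2: Cramer's rule for sigma1, sigma2
        sigma1 = gdiv(gmul(S2, S3) ^ gmul(S1, S4), det)
        sigma2 = gdiv(gmul(S2, S4) ^ gmul(S3, S3), det)
        return 2, [sigma1, sigma2]
    if S1:                             # nu = 1: single error, location alpha^i = S1
        return 1, [S1]
    return 0, []                       # nu = 0: no detectable error

r = [0] * 15
r[4] = 1                               # all-zero codeword with one bit flipped
nu, sigma = pgz(syndromes(r))
print(nu, LOG[sigma[0]])               # -> 1 4  (one error, at position 4)
```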

  8. #2 - An Empirical Assessment of Exposure Measurement Error ...

    EPA Pesticide Factsheets

    Background:
    • Differing degrees of exposure error across pollutants
    • Previous focus on quantifying and accounting for exposure error in single-pollutant models
    • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models
    The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  9. Medication errors in chemotherapy preparation and administration: a survey conducted among oncology nurses in Turkey.

    PubMed

    Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent

    2015-01-01

    Medication errors in oncology may cause severe clinical problems due to the low therapeutic indices and high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and their underlying factors during chemotherapy preparation and administration, based on a systematic survey conducted to reflect oncology nurses' experience. This study was conducted in 18 adult chemotherapy units with the voluntary participation of 206 nurses. A survey was developed by the primary investigators; medication errors (MEs) were defined as preventable errors during the prescription, ordering, preparation, or administration of medication. The survey consisted of 4 parts: demographic features of the nurses; workload of the chemotherapy units; errors and their estimated monthly number during chemotherapy preparation and administration; and evaluation of the possible factors responsible for MEs. The survey was conducted by face-to-face interview, and data analyses were performed with descriptive statistics. Chi-square or Fisher exact tests were used for comparative analysis of categorical data. Some 83.4% of the 210 nurses reported one or more errors during chemotherapy preparation and administration. Prescription or ordering of wrong doses by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and an insufficient number of staff (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, with the most common involving prescribing and ordering errors. Further studies must address strategies to minimize medication errors in patients receiving chemotherapy, determine sufficient protective measures, and establish multistep control mechanisms.

  10. Superconducting quantum circuits at the surface code threshold for fault tolerance.

    PubMed

    Barends, R; Kelly, J; Megrant, A; Veitia, A; Sank, D; Jeffrey, E; White, T C; Mutus, J; Fowler, A G; Campbell, B; Chen, Y; Chen, Z; Chiaro, B; Dunsworth, A; Neill, C; O'Malley, P; Roushan, P; Vainsencher, A; Wenner, J; Korotkov, A N; Cleland, A N; Martinis, John M

    2014-04-24

    A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.

  11. Selection of wires and circuit protective devices for STS Orbiter vehicle payload electrical circuits

    NASA Technical Reports Server (NTRS)

    Gaston, Darilyn M.

    1991-01-01

    Electrical designers of Orbiter payloads face the challenge of determining proper circuit protection/wire size parameters to satisfy Orbiter engineering and safety requirements. This document is the result of a program undertaken to review test data from all available aerospace sources and perform additional testing to eliminate extrapolation errors. The resulting compilation of data was used to develop guidelines for the selection of wire sizes and circuit protection ratings. The purpose is to provide guidance to the engineer to ensure a design which meets Orbiter standards and which should be applicable to any aerospace design.

  12. Evaluation of measurement errors of temperature and relative humidity from HOBO data logger under different conditions of exposure to solar radiation.

    PubMed

    da Cunha, Antonio Ribeiro

    2015-05-01

    This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger under various conditions of exposure to solar radiation, comparing them with those obtained with a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for obtaining such measurements. Data were collected over a 6-day period (from 25 March to 1 April 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (a multi-plate plastic prototype); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with the coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate (gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
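
    The statistical indicators listed above are straightforward to compute; the sketch below does so for a toy series of logger readings against a reference probe, using Willmott's d as one common form of the index of agreement (the paper may use a variant).

```python
# Comparison statistics for logger-vs-reference measurements.
import numpy as np

def compare(hobo: np.ndarray, ref: np.ndarray) -> dict:
    slope, intercept = np.polyfit(ref, hobo, 1)        # linear regression
    r2 = np.corrcoef(ref, hobo)[0, 1] ** 2             # coeff. of determination
    d = 1 - np.sum((hobo - ref) ** 2) / np.sum(        # Willmott's index of
        (np.abs(hobo - ref.mean()) + np.abs(ref - ref.mean())) ** 2)  # agreement
    abs_err = np.abs(hobo - ref)
    return {"slope": slope, "intercept": intercept, "R2": r2,
            "agreement_d": d, "MAE": abs_err.mean(), "MaxAE": abs_err.max()}

ref = np.array([21.0, 23.5, 26.1, 28.4, 27.0, 24.2])   # probe temperature, C
hobo = np.array([21.3, 24.1, 27.0, 29.5, 27.6, 24.5])  # logger in full sun
print(compare(hobo, ref))
```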

  13. Coherent Oscillations inside a Quantum Manifold Stabilized by Dissipation

    NASA Astrophysics Data System (ADS)

    Touzard, S.; Grimm, A.; Leghtas, Z.; Mundhada, S. O.; Reinhold, P.; Axline, C.; Reagor, M.; Chou, K.; Blumoff, J.; Sliwa, K. M.; Shankar, S.; Frunzio, L.; Schoelkopf, R. J.; Mirrahimi, M.; Devoret, M. H.

    2018-04-01

    Manipulating the state of a logical quantum bit (qubit) usually comes at the expense of exposing it to decoherence. Fault-tolerant quantum computing tackles this problem by manipulating quantum information within a stable manifold of a larger Hilbert space, whose symmetries restrict the number of independent errors. The remaining errors do not affect the quantum computation and are correctable after the fact. Here we implement the autonomous stabilization of an encoding manifold spanned by Schrödinger cat states in a superconducting cavity. We show Zeno-driven coherent oscillations between these states analogous to the Rabi rotation of a qubit protected against phase flips. Such gates are compatible with quantum error correction and hence are crucial for fault-tolerant logical qubits.

  14. Demonstration of Protection of a Superconducting Qubit from Energy Decay

    NASA Astrophysics Data System (ADS)

    Lin, Yen-Hsiang; Nguyen, Long B.; Grabon, Nicholas; San Miguel, Jonathan; Pankratova, Natalia; Manucharyan, Vladimir E.

    2018-04-01

    Long-lived transitions occur naturally in atomic systems due to the abundance of selection rules inhibiting spontaneous emission. By contrast, transitions of superconducting artificial atoms typically have large dipoles, and hence their lifetimes are determined by the dissipative environment of a macroscopic electrical circuit. We designed a multilevel fluxonium artificial atom such that the qubit's transition dipole can be exponentially suppressed by flux tuning, while it continues to dispersively interact with a cavity mode by virtual transitions to the noncomputational states. Remarkably, the energy decay time T1 grew by 2 orders of magnitude, proportionally to the inverse square of the transition dipole, and exceeded the benchmark value of T1 > 2 ms (quality factor Q1 > 4×10⁷) without showing signs of saturation. The dephasing time was limited by the first-order coupling to flux noise to about 4 μs. Our circuit validated the general principle of hardware-level protection against bit-flip errors and can be upgraded to the 0-π circuit [P. Brooks, A. Kitaev, and J. Preskill, Phys. Rev. A 87, 052306 (2013), 10.1103/PhysRevA.87.052306], adding protection against dephasing and certain gate errors.

  15. A system-level approach for embedded memory robustness

    NASA Astrophysics Data System (ADS)

    Mariani, Riccardo; Boschi, Gabriele

    2005-11-01

    New ultra-deep submicron technologies bring not only new advantages, such as extraordinary transistor densities and unforeseen performance, but also new uncertainties, such as soft-error susceptibility, modelling complexity, coupling effects, leakage contribution, and increased sensitivity to internal and external disturbances. Nowadays, embedded memories take advantage of such new technologies and are used more and more in systems; therefore, as robustness and reliability requirements increase, memory systems must be protected against different kinds of faults (permanent and transient), and this should be done in an efficient way. This means that reliability and costs, such as overhead and performance degradation, must be efficiently tuned based on the system and on the application. Moreover, the new emerging norms for safety-critical applications, such as IEC 61508, require precise answers in terms of robustness in the case of memory systems as well. In this paper, classical protection techniques for error detection and correction are enriched with a system-aware approach, where the memory system is analyzed based on its role in the application. A configurable memory protection system is presented, together with the results of its application to a proof-of-concept architecture. This work was developed in the framework of the MEDEA+ T126 project called BLUEBERRIES.

  16. Attention and memory protection: Interactions between retrospective attention cueing and interference.

    PubMed

    Makovski, Tal; Pertzov, Yoni

    2015-01-01

    Visual working memory (VWM) and attention have a number of features in common, but despite extensive research it is still unclear how the two interact. Can focused attention improve VWM precision? Can it protect VWM from interference? Here we used a partial-report, continuous-response orientation memory task to examine how attention and interference affect different aspects of VWM and how they interact with one another. Both attention and interference were orthogonally manipulated during the retention interval. Attention was manipulated by presenting informative retro-cues, whereas interference was manipulated by introducing a secondary interfering task. Mixture-model analyses revealed that retro-cues, compared to uninformative cues, improved all aspects of performance: Attention increased recall precision and decreased guessing rate and swap-errors (reporting a wrong item in memory). Similarly, performing a secondary task impaired all aspects of the VWM task. In particular, an interaction between retro-cue and secondary task interference was found primarily for swap-errors. Together these results suggest that both the quantity and quality of VWM representations are sensitive to attention cueing and interference modulations, and they highlight the role of attention in protecting the feature-location associations needed to access the correct items in memory.

  17. Robust dynamical decoupling for quantum computing and quantum memory.

    PubMed

    Souza, Alexandre M; Alvarez, Gonzalo A; Suter, Dieter

    2011-06-17

    Dynamical decoupling (DD) is a popular technique for protecting qubits from the environment. However, unless special care is taken, experimental errors in the control pulses used in this technique can destroy the quantum information instead of preserving it. Here, we investigate techniques for making DD sequences robust against different types of experimental errors while retaining good decoupling efficiency in a fluctuating environment. We present experimental data from solid-state nuclear spin qubits and introduce a new DD sequence that is suitable for quantum computing and quantum memory.

  18. Curated eutherian third party data gene data sets.

    PubMed

    Premzl, Marko

    2016-03-01

    The freely available eutherian genomic sequence data sets have advanced the scientific field of genomics. Of note, future revisions of gene data sets are expected, due to the incompleteness of public eutherian genomic sequence assemblies and potential genomic sequence errors. The eutherian comparative genomic analysis protocol was proposed as guidance in protecting against potential genomic sequence errors in public eutherian genomic sequences. The protocol was applied in updates of 7 major eutherian gene data sets, including 812 complete coding sequences deposited in the European Nucleotide Archive as curated third-party data gene data sets.

  19. Protecting Trade Secrets in Canada

    PubMed Central

    Courage, Noel; Calzavara, Janice

    2015-01-01

    Patents in the life sciences industries are a key form of intellectual property (IP), particularly for products such as brand-name drugs and medical devices. However, trade secrets can also be a useful tool for many types of innovations. In appropriate cases, trade secrets can offer long-term protection of IP for a lower financial cost than patenting. This type of protection must be approached with caution as there is little room for error when protecting a trade secret. Strong agreements and scrupulous security can help to protect the secret. Once a trade secret is disclosed to the public, it cannot be restored as the owner's property; however, if the information is kept from the public domain, the owner can have a property right of unlimited duration in the information. In some situations patents and trade secrets may be used cooperatively to protect innovation, particularly for manufacturing processes. PMID:25986591

  20. [From the concept of guilt to the value-free notification of errors in medicine. Risks, errors and patient safety].

    PubMed

    Haller, U; Welti, S; Haenggi, D; Fink, D

    2005-06-01

    The number of liability cases but also the size of individual claims due to alleged treatment errors are increasing steadily. Spectacular sentences, especially in the USA, encourage this trend. Wherever human beings work, errors happen. The health care system is particularly susceptible and shows a high potential for errors. Therefore risk management has to be given top priority in hospitals. Preparing the introduction of critical incident reporting (CIR) as the means to notify errors is time-consuming and calls for a change in attitude because in many places the necessary base of trust has to be created first. CIR is not made to find the guilty and punish them but to uncover the origins of errors in order to eliminate them. The Department of Anesthesiology of the University Hospital of Basel has developed an electronic error notification system, which, in collaboration with the Swiss Medical Association, allows each specialist society to participate electronically in a CIR system (CIRS) in order to create the largest database possible and thereby to allow statements concerning the extent and type of error sources in medicine. After a pilot project in 2000-2004, the Swiss Society of Gynecology and Obstetrics is now progressively introducing the 'CIRS Medical' of the Swiss Medical Association. In our country, such programs are vulnerable to judicial intervention due to the lack of explicit legal guarantees of protection. High-quality data registration and skillful counseling are all the more important. Hospital directors and managers are called upon to examine those incidents which are based on errors inherent in the system.

  1. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95% confidence limits, and that the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, providing a close fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural, or industrial use.
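
    The workflow (ARIMA fit, error metrics, forecasts with confidence limits) can be sketched with statsmodels; the order (1, 1, 1) and the synthetic pH-like series below are placeholders, not the paper's fitted values.

```python
# Minimal ARIMA workflow sketch: fit, score, forecast with 95% limits.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = 7.5 + np.cumsum(rng.normal(0, 0.05, 120))   # stand-in monthly pH

model = ARIMA(series[:-12], order=(1, 1, 1)).fit()   # hold out last year
fitted = model.predict(start=1, end=len(series) - 13)
rmse = np.sqrt(np.mean((series[1:-12] - fitted) ** 2))
mape = np.mean(np.abs((series[1:-12] - fitted) / series[1:-12])) * 100

forecast = model.get_forecast(steps=12)
ci = forecast.conf_int(alpha=0.05)                   # 95% confidence limits
print(f"RMSE={rmse:.3f}  MAPE={mape:.2f}%")
print(forecast.predicted_mean[:3], ci[:3])
```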

  2. Improving Water Quality Assessments through a Hierarchical Bayesian Analysis of Variability

    EPA Science Inventory

    Water quality measurement error and variability, while well-documented in laboratory-scale studies, is rarely acknowledged or explicitly resolved in most water body assessments, including those conducted in compliance with the United States Environmental Protection Agency (USEPA)...

  3. 77 FR 72984 - Buprofezin Pesticide Tolerances; Technical Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-07

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 180 [EPA-HQ-OPP-2011-0759; FRL-9371-3] Buprofezin..., 2012, concerning buprofezin pesticide tolerances. This document corrects a typographical error. DATES...: Sec. 180.511 Buprofezin; tolerances for residues. (a) * * * Commodity... Parts per million...

  4. 78 FR 41061 - Information Collection Request Submitted to OMB for Review and Approval; Comment Request; NESHAP...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... preferred method), by email to: [email protected] , or by mail to: EPA Docket Center, Environmental... an adjustment decrease in the Agency labor costs due to a correction of mathematical error. The...

  5. A multibiometric face recognition fusion framework with template protection

    NASA Astrophysics Data System (ADS)

    Chindaro, S.; Deravi, F.; Zhou, Z.; Ng, M. W. R.; Castro Neves, M.; Zhou, X.; Kelkboom, E.

    2010-04-01

    In this work we present a multibiometric face recognition framework based on combining information from 2D and 3D facial features. The 3D biometrics channel is protected by a privacy-enhancing technology, which uses error-correcting codes and cryptographic primitives to safeguard the privacy of the users of the biometric system while still enabling accurate matching through fusion with 2D. Experiments are conducted to compare the matching performance of such multibiometric systems with the individual biometric channels working alone and with unprotected multibiometric systems. The results show that the proposed hybrid system incorporating template protection matches, and in some cases exceeds, the performance of the corresponding unprotected equivalents, in addition to offering additional privacy protection.
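
    Privacy-enhancing template protection of this kind is often built on a fuzzy-commitment construction: the template is XORed with a random codeword, and only a hash of the underlying key is stored. The sketch below uses a 5x repetition code purely for illustration; the actual system presumably uses stronger codes, and all parameters here are invented.

```python
# Generic fuzzy-commitment sketch (repetition code for illustration only).
import hashlib
import secrets

R = 5  # repetition factor: tolerates up to 2 flipped bits per 5-bit group

def enc(bits):  return [b for b in bits for _ in range(R)]
def dec(bits):  return [int(sum(bits[i:i + R]) > R // 2)
                        for i in range(0, len(bits), R)]

def commit(template):
    key = [secrets.randbelow(2) for _ in range(len(template) // R)]
    helper = [t ^ c for t, c in zip(template, enc(key))]  # template XOR codeword
    return helper, hashlib.sha256(bytes(key)).hexdigest()

def verify(helper, key_hash, probe):
    key = dec([p ^ h for p, h in zip(probe, helper)])     # decode noisy codeword
    return hashlib.sha256(bytes(key)).hexdigest() == key_hash

template = [secrets.randbelow(2) for _ in range(40)]
helper, key_hash = commit(template)
noisy = template[:]; noisy[3] ^= 1; noisy[22] ^= 1        # small biometric noise
print(verify(helper, key_hash, noisy))                    # True: within ECC radius
```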

  6. An 802.11n wireless local area network transmission scheme for wireless telemedicine applications.

    PubMed

    Lin, C F; Hung, S I; Chiang, I H

    2010-10-01

    In this paper, an 802.11n transmission scheme is proposed for wireless telemedicine applications. IEEE 802.11n standards, a power assignment strategy, space-time block coding (STBC), and an object composition Petri net (OCPN) model are adopted. With the proposed wireless system, G.729 audio bit streams, Joint Photographic Experts Group 2000 (JPEG 2000) clinical images, and Moving Picture Experts Group 4 (MPEG-4) video bit streams simultaneously achieve transmission bit error rates (BER) of 10⁻⁷, 10⁻⁴, and 10⁻³, respectively. The proposed system meets the requirements prescribed for wireless telemedicine applications. An essential feature of this transmission scheme is that clinical information that requires a high quality of service (QoS) is transmitted at high power with significant error protection. For maximizing resource utilization and minimizing the total transmission power, STBC and adaptive modulation techniques are used in the proposed 802.11n wireless telemedicine system. Further, low power, direct mapping (DM), a low-error-protection scheme, and high-level modulation are adopted for messages that can tolerate a high BER. With the proposed transmission scheme, the required reliability of communication can be achieved. Our simulation results have shown that the proposed 802.11n transmission scheme can be used for developing effective wireless telemedicine systems.
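
    One way to picture the unequal-protection assignment described above is a table mapping each stream's tolerable BER to a (modulation, coding rate, relative power) tier. The tier values below are invented for illustration and are not the paper's assignments.

```python
# Hypothetical BER-to-PHY-tier assignment for unequal error protection.
TIERS = {                           # tolerable BER -> PHY configuration
    1e-7: ("BPSK",   "1/2", 1.0),   # audio: strongest protection, most power
    1e-4: ("QPSK",   "3/4", 0.7),   # clinical images
    1e-3: ("64-QAM", "5/6", 0.4),   # video: high rate, light protection
}

def assign(streams):
    # Pick the most protective tier whose tolerable BER covers the stream.
    return {name: TIERS[min(t for t in TIERS if t >= ber)]
            for name, ber in streams.items()}

print(assign({"G.729 audio": 1e-7, "JPEG 2000": 1e-4, "MPEG-4": 1e-3}))
```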

  7. Quantum error suppression with commuting Hamiltonians: two local is too local.

    PubMed

    Marvian, Iman; Lidar, Daniel A

    2014-12-31

    We consider error suppression schemes in which quantum information is encoded into the ground subspace of a Hamiltonian comprising a sum of commuting terms. Since such Hamiltonians are gapped, they are considered natural candidates for protection of quantum information and topological or adiabatic quantum computation. However, we prove that they cannot be used to this end in the two-local case. By making the favorable assumption that the gap is infinite, we show that single-site perturbations can generate a degeneracy splitting in the ground subspace of this type of Hamiltonian which is of the same order as the magnitude of the perturbation, and is independent of the number of interacting sites and their Hilbert space dimensions, just as in the absence of the protecting Hamiltonian. This splitting results in decoherence of the ground subspace, and we demonstrate that for natural noise models the coherence time is proportional to the inverse of the degeneracy splitting. Our proof involves a new version of the no-hiding theorem which shows that quantum information cannot be approximately hidden in the correlations between two quantum systems. The main reason that two-local commuting Hamiltonians cannot be used for quantum error suppression is that their ground subspaces have only short-range (two-body) entanglement.

  8. Integrity monitoring of vehicle positioning in urban environment using RTK-GNSS, IMU and speedometer

    NASA Astrophysics Data System (ADS)

    El-Mowafy, Ahmed; Kubo, Nobuaki

    2017-05-01

    Continuous and trustworthy positioning is a critical capability for advanced driver assistance systems (ADAS). To achieve continuous positioning, methods such as global navigation satellite systems real-time kinematic (RTK), Doppler-based positioning, and positioning using low-cost inertial measurement unit (IMU) with car speedometer data are combined in this study. To ensure reliable positioning, the system should have integrity monitoring above a certain level, such as 99%. Achieving this level when combining different types of measurements that have different characteristics and different types of errors is a challenge. In this study, a novel integrity monitoring approach is presented for the proposed integrated system. A threat model of the measurements of the system components is discussed, which includes both the nominal performance and possible fault modes. A new protection level is presented to bound the maximum directional position error. The proposed approach was evaluated through a kinematic test in an urban area in Japan with a focus on horizontal positioning. Test results show that by integrating RTK, Doppler with IMU/speedometer, 100% positioning availability was achieved. The integrity monitoring availability was assessed and found to meet the target value where the position errors were bounded by the protection level, which was also less than an alert level, indicating the effectiveness of the proposed approach.
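
    A simplified sketch of a horizontal protection-level check: scale the worst-case direction of the filter's 2×2 horizontal position covariance by a missed-detection multiplier and verify position error < PL < alert limit. The multiplier and limits below are example values, not those derived in the paper.

```python
# Horizontal protection-level check under simplified assumptions.
import numpy as np

def horizontal_pl(P_horiz: np.ndarray, k_md: float = 5.33) -> float:
    # Worst-case 1-sigma direction = sqrt of the largest eigenvalue of the
    # 2x2 horizontal covariance; k_md scales it to the integrity risk level.
    return k_md * np.sqrt(np.linalg.eigvalsh(P_horiz)[-1])

P = np.array([[0.04, 0.01],
              [0.01, 0.09]])          # m^2, from an RTK/IMU/speedometer filter
pe, al = 0.35, 3.0                    # position error and alert limit, m
pl = horizontal_pl(P)
print(f"PL = {pl:.2f} m; integrity ok: {pe < pl < al}")
```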

  9. Designing an efficient LT-code with unequal error protection for image transmission

    NASA Astrophysics Data System (ADS)

    S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.

    2015-10-01

    The use of images from earth observation satellites is spread over different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high-resolution image can produce a better quality of service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced, and it was shown to be very efficient under the binary erasure channel model. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the image compression algorithm recommended by CCSDS. In fact, to design an LT-code with unequal error protection, the bit stream produced by the algorithm recommended by CCSDS must be partitioned into M disjoint sets of bits. Using the weighted approach, the LT-code produces M different failure probabilities for these sets of bits, p1, ..., pM, leading to a total probability of failure, p, which is an average of p1, ..., pM. In general, the parameters of an LT-code with unequal error protection are chosen using a heuristic procedure. In this work, we analyze the problem of choosing the LT-code parameters to optimize two figures of merit: (a) the probability of achieving a minimum acceptable PSNR, and (b) the mean PSNR, given that the minimum acceptable PSNR has been achieved. Given the rate-distortion curve achieved by the CCSDS recommended algorithm, this work establishes a closed form for the mean PSNR (given that the minimum acceptable PSNR has been achieved) as a function of p1, ..., pM. The main contribution of this work is the study of a criterion for selecting the parameters p1, ..., pM to optimize the performance of image transmission.
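
    The parameter-selection problem can be miniaturized as follows: assume an invented model for how each class's failure probability p_i falls with its share of redundancy, then grid-search the split that maximizes an expected-PSNR proxy for a progressive stream. Everything numeric here is a stand-in for the paper's optimization, not its actual model.

```python
# Toy unequal-error-protection parameter search over a redundancy split.
import itertools
import math

c = 8.0                                    # assumed redundancy effectiveness
n = (2000, 3000, 5000)                     # bits per class (invented sizes)

def failure_probs(w):                      # invented stand-in for LT behaviour
    return [math.exp(-c * wi) for wi in w]

def expected_psnr(p, g=(30.0, 5.0, 2.0)):  # invented per-class dB gains
    total, alive = 0.0, 1.0
    for gi, pi in zip(g, p):               # a class helps only if all earlier,
        alive *= 1 - pi                    # more important classes survived
        total += gi * alive
    return total

grid = [w for w in itertools.product(range(1, 9), repeat=3) if sum(w) == 10]
best = max(grid, key=lambda w: expected_psnr(failure_probs([x / 10 for x in w])))
w = [x / 10 for x in best]
p = failure_probs(w)
avg_p = sum(ni * pi for ni, pi in zip(n, p)) / sum(n)   # total p as an average
print("weights:", w, "per-class p_i:", [f"{pi:.3g}" for pi in p],
      "average p:", f"{avg_p:.3g}")
```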

  10. Using concatenated quantum codes for universal fault-tolerant quantum gates.

    PubMed

    Jochym-O'Connor, Tomas; Laflamme, Raymond

    2014-01-10

    We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.

  11. High Reliability Organizations--Medication Safety.

    PubMed

    Yip, Luke; Farmer, Brenna

    2015-06-01

    High reliability organizations (HROs), such as the aviation industry, successfully engage in high-risk endeavors and have a low incidence of adverse events. HROs have a preoccupation with failure and errors. They analyze each event to effect system-wide change in an attempt to mitigate the occurrence of similar errors. The healthcare industry can adapt HRO practices, specifically with regard to teamwork and communication. Crew resource management concepts can be adapted to healthcare with the use of certain tools, such as checklists and the sterile cockpit, to reduce medication errors. HROs also use the Swiss cheese model to evaluate risk and look for vulnerabilities in multiple protective barriers, instead of focusing on one failure. This model can be used in medication safety to evaluate medication management in addition to using the teamwork and communication tools of HROs.

  12. THE U.S. ENVIRONMENTAL PROTECTION AGENCY VERSION OF POSITIVE MATRIX FACTORIZATION

    EPA Science Inventory

    The abstract describes some of the special features of the EPA's version of Positive Matrix Factorization that is freely distributed. Features include descriptions of the Graphical User Interface, an approach for estimating errors in the modeled solutions, and future development...

  13. Finite Difference Schemes as Algebraic Correspondences between Layers

    NASA Astrophysics Data System (ADS)

    Malykh, Mikhail; Sevastianov, Leonid

    2018-02-01

    For some differential equations, especially for the Riccati equation, new finite difference schemes are suggested. These schemes define projective correspondences between the layers. Calculation using these schemes can be extended to the area beyond movable singularities of the exact solution without any error accumulation.

  14. 78 FR 18977 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-28

    ... preferred method), or by email to: [email protected] , or by mail to: EPA Docket Center (EPA/DC... in Agency burden is related to a mathematical error in the calculations, which led to double counting...

  15. Modeling and analysis of caves using voxelization

    NASA Astrophysics Data System (ADS)

    Szeifert, Gábor; Szabó, Tivadar; Székely, Balázs

    2014-05-01

    Although there are many ways to create three-dimensional representations of caves using modern information technology, modeling of caves has challenged researchers for a long time. One of these promising new alternative modeling methods uses voxels. We use geodetic measurements as the input for our voxelization project. These underground geodetic surveys recorded the azimuth, altitude, and distance of corner points of cave systems relative to each other. The diameter of each cave section is estimated from separate databases originating from different surveys. We have developed a simple but efficient method (on average, it covers more than 99.9% of the volume of the input model) to convert these vector-type datasets to voxels. We have also developed software components to make visualization of the voxel and vector models easier. Since each corner point's position is measured relative to other corner points' positions, propagation of uncertainties is an important issue in the case of long caves with many separate sections. We use Monte Carlo simulations to analyze the effect of the error of each geodetic instrument possibly involved in a survey. Cross-sections of the simulated three-dimensional distributions show that even tiny uncertainties in individual measurements can result in high variation of positions, which could be reduced by distributing the closing errors if such data are available. Using the results of our simulations, we can estimate the cave volume and the error of the calculated cave volume depending on the complexity of the cave. Acknowledgements: the authors are grateful to the Ariadne Karst and Cave Exploring Association and to the State Department of Environmental and Nature Protection of the Hungarian Ministry of Rural Development, Department of National Parks and Landscape Protection, Section Landscape and Cave Protection and Ecotourism, for providing the cave measurement data. BS contributed as an Alexander von Humboldt Research Fellow.
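
    The Monte Carlo error propagation described above can be sketched for a single traverse: perturb each shot's azimuth and distance by assumed instrument sigmas and examine the spread of the final station. The sigmas and shot data below are examples only, not values from the surveys used in the study.

```python
# Monte Carlo propagation of per-shot survey errors along a cave traverse.
import numpy as np

rng = np.random.default_rng(42)
az = np.radians([10, 40, 85, 120, 95])          # surveyed azimuths per shot
dist = np.array([12.0, 8.5, 20.0, 15.0, 9.0])   # shot lengths, m
sigma_az, sigma_d = np.radians(0.5), 0.05       # assumed instrument errors

def endpoint(az, dist):
    # Chain the shots in 2D (east, north); altitude is omitted for brevity.
    return np.array([np.sum(dist * np.sin(az)), np.sum(dist * np.cos(az))])

sims = np.array([endpoint(az + rng.normal(0, sigma_az, az.size),
                          dist + rng.normal(0, sigma_d, dist.size))
                 for _ in range(10_000)])
spread = sims.std(axis=0)                       # east/north sigma of final station
print(f"endpoint sigma: E {spread[0]:.3f} m, N {spread[1]:.3f} m")
```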

  16. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    PubMed Central

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033
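
    The core analysis, regressing real-world error rates on laboratory error rates and cross-validating, follows a standard pattern; the sketch below uses synthetic stand-in values purely to show the mechanics, not the study's data.

```python
# Regression of real-world error rates on laboratory test error rates.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic illustrative values: lab error rate per name pair vs
# real-world wrong-drug rate (errors per 1000 prescriptions).
lab = np.array([[0.02], [0.05], [0.08], [0.11], [0.15], [0.21], [0.26], [0.30]])
real = np.array([0.4, 0.7, 0.9, 1.3, 1.5, 2.1, 2.2, 2.9])

model = LinearRegression().fit(lab, real)
print(f"R^2 on fit: {model.score(lab, real):.2f}")
print("3-fold CV R^2:", cross_val_score(model, lab, real, cv=3))
```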

  17. Protection of mammal diversity in Central America

    USGS Publications Warehouse

    Jenkins, Clinton N.; Giri, Chandra

    2008-01-01

    Central America is exceptionally rich in biodiversity, but varies widely in the attention its countries devote to conservation. Protected areas, widely considered the cornerstone of conservation, were not always created with the intent of conserving that biodiversity. We assessed how well the protected-area system of Central America includes the region's mammal diversity. This first required a refinement of existing range maps to reduce their extensive errors of commission (i.e., predicted presences in places where species do not occur). For this refinement, we used the ecological limits of each species to identify and remove unsuitable areas from the range. We then compared these maps with the locations of protected areas to measure the habitat protected for each of the region's 250 endemic mammals. The species most vulnerable to extinction—those with small ranges—were largely outside protected areas. Nevertheless, the most strictly protected areas tended toward areas with many small-ranged species. To improve the protection coverage of mammal diversity in the region, we identified a set of priority sites that would best complement the existing protected areas. Protecting these new sites would require a relatively small increase in the total area protected, but could greatly enhance mammal conservation.

  18. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    PubMed

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    The scientific article is focused on the notion of medical error; its medical and legal aspects have been considered. The necessity of legislative consolidation of the notion of «medical error» and the criteria for its legal assessment have been grounded. In writing the article, we used the empirical method together with general scientific and comparative legal methods. The concept of medical error in its civil and legal aspects was compared from the point of view of Ukrainian, European and American scientists. It is noted that the problem of medical errors has been known since ancient times, and throughout the world, regardless of the level of development of medicine, there is no country where doctors never make errors. According to statistics, medical errors are among the top five causes of death worldwide. At the same time, the provision of medical services concerns practically everyone. Since a person, his life and his health are recognized in Ukraine as the highest social values, medical services must be of high quality and effective. The provision of substandard medical services causes harm to health, and sometimes to people's lives; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore, the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of «medical error» must receive legislative consolidation. In addition, the legal assessment of medical errors must be based on uniform principles enshrined in legislation and confirmed by judicial practice.

  19. 77 FR 28476 - Political Contributions by Certain Investment Advisers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-15

    ...] Political Contributions by Certain Investment Advisers AGENCY: Securities and Exchange Commission. ACTION... Investment Advisers Act of 1940 (``Advisers Act'') to correct an inadvertent error in the rule as published...-6787 or [email protected] , Office of Investment Adviser Regulation, Division of Investment Management, U...

  20. Positive events protect children from causal false memories for scripted events.

    PubMed

    Melinder, Annika; Toffalini, Enrico; Geccherle, Eleonora; Cornoldi, Cesare

    2017-11-01

    Adults produce fewer inferential false memories for scripted events when their conclusions are emotionally charged than when they are neutral, but it is not clear whether the same effect is also found in children. In the present study, we examined this issue in a sample of 132 children aged 6-12 years (mean 9 years, 3 months). Participants encoded photographs depicting six script-like events that had a positively, negatively, or neutrally valenced ending. Subsequently, true and false recognition memory of photographs related to the observed scripts was tested as a function of emotionality. Causal errors, a type of false memory thought to stem from inferential processes, were found to be affected by valence: children made fewer causal errors for positive than for neutral or negative events. Hypotheses are proposed as to why adults, when administered similar versions of the same paradigm, were protected against inferential false memories not only by positive endings (as children were) but also by negative ones.

  1. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    PubMed Central

    Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi

    2015-01-01

    Reliable data transmission over a lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmissions over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal will be reconstructed from the lossy transmission results using the CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions have been discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
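
    The core idea, treating packet loss as a random sampling operator and recovering a sparse signal from whatever samples survive, can be sketched compactly. The following is a minimal illustration under stated assumptions, not the paper's system: the signal is assumed sparse in the DCT domain, the surviving-sample indices play the role of the measurement matrix, and recovery uses a simple orthogonal matching pursuit loop.

    ```python
    import numpy as np
    from scipy.fftpack import idct

    n, k = 256, 8                        # signal length and sparsity
    rng = np.random.default_rng(0)

    # A DCT-sparse test signal: k nonzero transform coefficients.
    s = np.zeros(n)
    s[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    Psi = idct(np.eye(n), norm='ortho', axis=0)   # inverse-DCT synthesis basis
    x = Psi @ s

    # Lossy link: only a random 60% of the samples survive.
    keep = np.sort(rng.choice(n, int(0.6 * n), replace=False))
    y, A = x[keep], Psi[keep, :]         # y = A s, with A the row-sampled basis

    # Orthogonal matching pursuit: greedily grow the support, refit by least squares.
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef

    s_hat = np.zeros(n)
    s_hat[support] = coef
    print("reconstruction error:", np.linalg.norm(Psi @ s_hat - x))
    ```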

  2. Enhanced fault-tolerant quantum computing in d-level systems.

    PubMed

    Campbell, Earl T

    2014-12-05

    Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.

  3. Modeling Security Aspects of Network

    NASA Astrophysics Data System (ADS)

    Schoch, Elmar

    With the ever more widespread usage of computer systems and networks, dependability becomes a paramount requirement. Dependability typically denotes tolerance of, or protection against, all kinds of failures, errors and faults. Sources of failures can basically be accidental, e.g., in the case of hardware errors or software bugs, or intentional due to some kind of malicious behavior. These intentional, malicious actions are the subject of security. A more complete overview of the relations between dependability and security can be found in [31]. In parallel with the increased use of technology, misuse has also grown significantly, requiring measures to deal with it.

  4. Geomasking sensitive health data and privacy protection: an evaluation using an E911 database.

    PubMed

    Allshouse, William B; Fitch, Molly K; Hampton, Kristen H; Gesink, Dionne C; Doherty, Irene A; Leone, Peter A; Serre, Marc L; Miller, William C

    2010-10-01

    Geomasking is used to provide privacy protection for individual address information while maintaining spatial resolution for mapping purposes. Donut geomasking and other random perturbation geomasking algorithms rely on the assumption of a homogeneously distributed population to calculate displacement distances, leading to possible under-protection of individuals when this condition is not met. Using household data from 2007, we evaluated the performance of donut geomasking in Orange County, North Carolina. We calculated the estimated k-anonymity for every household based on the assumption of uniform household distribution. We then determined the actual k-anonymity by revealing household locations contained in the county E911 database. Census block groups in mixed-use areas with high population distribution heterogeneity were the most likely to have privacy protection below selected criteria. For heterogeneous populations, we suggest tripling the minimum displacement area in the donut to protect privacy with a less than 1% error rate.
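
    As a concrete illustration of the random perturbation idea, the sketch below implements a basic donut geomask: each point is displaced to a uniformly distributed location inside an annulus, so a minimum displacement is always enforced. The coordinate system and radii are illustrative assumptions, not the study's calibrated parameters.

    ```python
    import math, random

    def donut_mask(x, y, r_min, r_max, rng=random):
        """Displace (x, y) uniformly over the annulus r_min <= r <= r_max."""
        # Uniform-in-area radius: invert the CDF, which is proportional to r^2.
        u = rng.random()
        r = math.sqrt(u * (r_max**2 - r_min**2) + r_min**2)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        return x + r * math.cos(theta), y + r * math.sin(theta)

    # Example: mask a household location (projected meters) with 50-250 m displacement.
    print(donut_mask(0.0, 0.0, 50.0, 250.0))
    ```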

  5. Geomasking sensitive health data and privacy protection: an evaluation using an E911 database

    PubMed Central

    Allshouse, William B; Fitch, Molly K; Hampton, Kristen H; Gesink, Dionne C; Doherty, Irene A; Leone, Peter A; Serre, Marc L; Miller, William C

    2010-01-01

    Geomasking is used to provide privacy protection for individual address information while maintaining spatial resolution for mapping purposes. Donut geomasking and other random perturbation geomasking algorithms rely on the assumption of a homogeneously distributed population to calculate displacement distances, leading to possible under-protection of individuals when this condition is not met. Using household data from 2007, we evaluated the performance of donut geomasking in Orange County, North Carolina. We calculated the estimated k-anonymity for every household based on the assumption of uniform household distribution. We then determined the actual k-anonymity by revealing household locations contained in the county E911 database. Census block groups in mixed-use areas with high population distribution heterogeneity were the most likely to have privacy protection below selected criteria. For heterogeneous populations, we suggest tripling the minimum displacement area in the donut to protect privacy with a less than 1% error rate. PMID:20953360

  6. A novel chaotic stream cipher and its application to palmprint template protection

    NASA Astrophysics Data System (ADS)

    Li, Heng-Jian; Zhang, Jia-Shu

    2010-04-01

    Based on a coupled nonlinear dynamic filter (NDF), a novel chaotic stream cipher is presented in this paper and employed to protect palmprint templates. The chaotic pseudorandom bit generator (PRBG) based on a coupled NDF, which is constructed in an inverse flow, can generate multiple bits at one iteration and satisfies the security requirements of cipher design. The stream cipher is then employed to generate cancelable competitive-code palmprint biometrics for template protection. The proposed cancelable palmprint authentication system depends on two factors: the palmprint biometric and the password/token. Therefore, the system provides high confidence and also protects the user's privacy. The experimental results of verification on the Hong Kong PolyU Palmprint Database show that the proposed approach has a large template re-issuance ability and that the equal error rate can reach 0.02%. The performance of the palmprint template protection scheme demonstrates the good practicability and security of the proposed stream cipher.
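
    To make the stream cipher idea concrete, here is a deliberately simplified sketch: a logistic-map keystream XORed with the plaintext. This toy stands in for the paper's coupled-NDF generator, which is considerably more elaborate and is designed to meet cryptographic requirements that the bare logistic map does not; do not use this toy in practice.

    ```python
    def logistic_keystream(x0, n_bytes, r=3.9999):
        """Generate n_bytes of keystream from logistic-map iterates (toy PRBG)."""
        x, out = x0, bytearray()
        for _ in range(n_bytes):
            for _ in range(8):          # extra iterations decorrelate outputs
                x = r * x * (1.0 - x)
            out.append(int(x * 256) & 0xFF)
        return bytes(out)

    def xor_cipher(data: bytes, key0: float) -> bytes:
        ks = logistic_keystream(key0, len(data))
        return bytes(d ^ k for d, k in zip(data, ks))

    msg = b"palmprint template"
    enc = xor_cipher(msg, 0.3141592653589793)
    assert xor_cipher(enc, 0.3141592653589793) == msg  # stream cipher is symmetric
    ```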

  7. Memory protection

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine accesses of a program to its segments alone. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that cost of recovery from memory interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing instruction execution rate.

  8. Multiplate Radiation Shields: Investigating Radiational Heating Errors

    NASA Astrophysics Data System (ADS)

    Richardson, Scott James

    1995-01-01

    Multiplate radiation shield errors are examined using the following techniques: (1) analytic heat transfer analysis, (2) optical ray tracing, (3) numerical fluid flow modeling, (4) laboratory testing, (5) wind tunnel testing, and (6) field testing. Guidelines for reducing radiational heating errors are given that are based on knowledge of the temperature sensor to be used, with the shield being chosen to match the sensor design. Small, reflective sensors that are exposed directly to the air stream (not inside a filter, as is the case for many temperature and relative humidity probes) should be housed in a shield that provides ample mechanical and rain protection while impeding the air flow as little as possible; protection from radiation sources is of secondary importance. If a sensor does not meet the above criteria (i.e., is large or absorbing), then a standard Gill shield performs reasonably well. A new class of shields, called part-time aspirated multiplate radiation shields, is introduced. This type of shield consists of a multiplate design usually operated in a passive manner but equipped with a fan-forced aspiration capability to be used when necessary (e.g., at low wind speed). The fans used here are 12 V DC units that can be driven by a small dedicated solar panel. This feature allows the fan to operate when global solar radiation is high, which is when the largest radiational heating errors usually occur. A prototype shield was constructed and field tested, and an example is given in which radiational heating errors were reduced from 2 °C to 1.2 °C. The fan was run continuously to investigate night-time low wind speed errors, and the prototype shield reduced errors from 1.6 °C to 0.3 °C. Part-time aspirated shields are an inexpensive alternative to fully aspirated shields and represent a good compromise between cost, power consumption, reliability (because they should be no worse than a standard multiplate shield if the fan fails), and accuracy. In addition, it is possible to modify existing passive shields to incorporate part-time aspiration, thus making them even more cost-effective. Finally, a new shield is described that incorporates a large-diameter top plate designed to shade the lower portion of the shield. This shield increases flow through it by 60% compared to the Gill design, and it is likely to reduce radiational heating errors, although it has not been tested.

  9. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping

    NASA Astrophysics Data System (ADS)

    Kubica, Aleksander; Beverland, Michael E.; Brandão, Fernando; Preskill, John; Svore, Krysta M.

    2018-05-01

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p_{3DCC}^{(1)}≃1.9% and p_{3DCC}^{(2)}≃27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  10. The potential for error in sampling

    Treesearch

    Jack Lewis

    2000-01-01

    Editor's note: The measurement of water quality parameters in environmental laboratories follows standard quality control protocols using methodologies approved by the U.S. Environmental Protection Agency. However, little attention has been given to quality assurance and quality control in activities outside the laboratory. This article describes some of those...

  11. Grades: Review of Academic Evaluations in Law Schools.

    ERIC Educational Resources Information Center

    Doniger, Thomas

    1980-01-01

    Lack of independent review process in professional schools and refusal of courts to review errors not resulting from arbitrariness, caprice, or bad faith leave student and societal interests in accurate grading inadequately protected. (Journal availability: University of the Pacific, 3201 Donner Way, Sacramento, CA 95817.) (MSE)

  12. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
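
    For readers unfamiliar with the baseline estimator being extended, the following minimal sketch shows inverse probability weighting for the average treatment effect on synthetic, error-free data; the measurement-error corrections and the doubly robust estimator proposed in the paper are not implemented here, and all variable names are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    x = rng.standard_normal((n, 1))                 # confounder
    p = 1 / (1 + np.exp(-x[:, 0]))                  # true propensity
    t = rng.binomial(1, p)                          # treatment indicator
    y = 2.0 * t + x[:, 0] + rng.standard_normal(n)  # outcome, true ATE = 2

    # Estimate the propensity score, then weight outcomes inversely by it.
    e_hat = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
    ate_ipw = np.mean(t * y / e_hat) - np.mean((1 - t) * y / (1 - e_hat))
    print(f"IPW ATE estimate: {ate_ipw:.3f}")       # should be near 2
    ```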

  13. Misconduct versus honest error and scientific disagreement.

    PubMed

    Resnik, David B; Stewart, C Neal

    2012-01-01

    Researchers sometimes mistakenly accuse their peers of misconduct. It is important to distinguish between misconduct and honest error or a difference of scientific opinion to prevent unnecessary and time-consuming misconduct proceedings, protect scientists from harm, and avoid deterring researchers from using novel methods or proposing controversial hypotheses. While it is obvious to many researchers that misconduct is different from a scientific disagreement or simply an inadvertent mistake in methods, analysis or misinterpretation of data, applying this distinction to real cases is sometimes not easy. Because the line between misconduct and honest error or a scientific dispute is often unclear, research organizations and institutions should distinguish between misconduct and honest error and scientific disagreement in their policies and practices. These distinctions should also be explained during educational sessions on the responsible conduct of research and in the mentoring process. When researchers wrongfully accuse their peers of misconduct, it is important to help them understand the distinction between misconduct and honest error and differences of scientific judgment or opinion, pinpoint the source of disagreement, and identify the relevant scientific norms. They can be encouraged to settle the dispute through collegial discussion and dialogue, rather than a misconduct allegation.

  14. Misconduct versus Honest Error and Scientific Disagreement

    PubMed Central

    Resnik, David B.; Stewart, C. Neal

    2012-01-01

    Researchers sometimes mistakenly accuse their peers of misconduct. It is important to distinguish between misconduct and honest error or a difference of scientific opinion to prevent unnecessary and time-consuming misconduct proceedings, protect scientists from harm, and avoid deterring researchers from using novel methods or proposing controversial hypotheses. While it is obvious to many researchers that misconduct is different from a scientific disagreement or simply an inadvertent mistake in methods, analysis or misinterpretation of data, applying this distinction to real cases is sometimes not easy. Because the line between misconduct and honest error or a scientific dispute is often unclear, research organizations and institutions should distinguish between misconduct and honest error and scientific disagreement in their policies and practices. These distinctions should also be explained during educational sessions on the responsible conduct of research and in the mentoring process. When researchers wrongfully accuse their peers of misconduct, it is important to help them understand the distinction between misconduct and honest error and differences of scientific judgment or opinion, pinpoint the source of disagreement, and identify the relevant scientific norms. They can be encouraged to settle the dispute through collegial discussion and dialogue, rather than a misconduct allegation. PMID:22268506

  15. Using digital inpainting to estimate incident light intensity for the calculation of red blood cell oxygen saturation from microscopy images.

    PubMed

    Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G

    2018-05-25

    Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if hematocrit is too large since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum intensity based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum intensity-based method fails (e.g. stationary cells), or when higher accuracy is required.
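
    A rough sketch of the proposed idea, under the assumption that OpenCV's standard inpainting routine is an acceptable stand-in for the authors' inpainting method: segment the cell pixels, inpaint over them to estimate the incident intensity I0, and form an optical density from transmitted versus incident intensity. The synthetic frame and threshold below are placeholders.

    ```python
    import cv2
    import numpy as np

    # Synthetic stand-in for a transmission-microscopy frame: bright background
    # with a darker disc where a red blood cell absorbs light.
    frame = np.full((128, 128), 200, np.uint8)
    yy, xx = np.ogrid[:128, :128]
    cell = (yy - 64) ** 2 + (xx - 64) ** 2 < 15 ** 2
    frame[cell] = 90

    cell_mask = (frame < 120).astype(np.uint8)       # crude RBC segmentation

    # Estimate incident intensity I0 by inpainting over the cell-occupied pixels.
    i0 = cv2.inpaint(frame, cell_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

    # Optical density at cell pixels (transmitted vs. estimated incident light).
    od = -np.log(frame[cell].astype(float) / i0[cell].astype(float))
    print("mean OD over cell pixels:", od.mean())
    ```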

  16. Combining Topological Hardware and Topological Software: Color-Code Quantum Computing with Topological Superconductor Networks

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; Kesselring, Markus S.; Eisert, Jens; von Oppen, Felix

    2017-07-01

    We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall-superconductor hybrids.

  17. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit by bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
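
    The premise that the modulated bits are unequally reliable is easy to demonstrate. The simulation below is illustrative and not from the paper: it measures per-bit error rates of Gray-mapped 8PSK over an AWGN channel, and the spread in the per-bit BERs is exactly what LRBC exploits by concentrating coding power on the least reliable bits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    gray = np.array([0, 1, 3, 2, 6, 7, 5, 4])     # Gray labels around the circle
    n, snr_db = 200_000, 10

    sym = rng.integers(0, 8, n)
    tx = np.exp(2j * np.pi * sym / 8)
    sigma = np.sqrt(10 ** (-snr_db / 10) / 2)     # noise std per dimension
    rx = tx + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

    # Nearest-phase detection, then compare the Gray labels bit by bit:
    # the three bit positions show unequal reliability.
    det = np.round(np.angle(rx) / (2 * np.pi / 8)).astype(int) % 8
    tx_bits, rx_bits = gray[sym], gray[det]
    for b in range(3):
        ber = np.mean(((tx_bits >> b) & 1) != ((rx_bits >> b) & 1))
        print(f"bit {b}: BER = {ber:.4f}")
    ```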

  18. Optimal Diabatic Dynamics of Majorana-based Topological Qubits

    NASA Astrophysics Data System (ADS)

    Seradjeh, Babak; Rahmani, Armin; Franz, Marcel

    In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles such as Majorana zero modes and are protected from local environmental perturbations. This scheme requires slow operations. By using Pontryagin's maximum principle, here we show that the same quantum gates can be implemented in much shorter times through optimal diabatic pulses. While our fast diabatic gates do not enjoy topological protection, they provide significant practical advantages due to their optimal speed and remarkable robustness to calibration errors and noise. NSERC, CIfAR, NSF DMR-1350663, BSF 2014345.

  19. Protecting quantum information in superconducting circuits

    NASA Astrophysics Data System (ADS)

    Devoret, Michel

    Can we prolong the coherence of a two-state manifold in a complex quantum system beyond the coherence of its longest-lived component? This question is the starting point in the construction of a scalable quantum computer. It translates into the search for processes that operate as some sort of Maxwell's demon and reliably correct the errors resulting from the coupling between qubits and their environment. The presentation will review recent experiments that test the dynamical protection by Josephson circuits of a logical qubit memory based on superpositions of particular coherent states of a superconducting resonator.

  20. [Medical errors from positions of mutual relations of patient-lawyer-doctor].

    PubMed

    Radysh, Ia F; Tsema, Ie V; Mehed', V P

    2013-01-01

    The basic theoretical and practical aspects of the problem of malpractice in the health care system of Ukraine are presented in the article. The essence of the term "malpractice" is expounded through specific examples. The types of malpractice, the conditions under which it arises, and the kinds of liability that follow from it are considered. Special attention is paid to the legal, moral, and ethical dimensions of the problem from the standpoint of protecting the rights of both the patient and the medical worker. The need to classify malpractice as intentional or unintentional, and as excusable or impermissible, is substantiated.

  1. Thermal and heat flow instrumentation for the space shuttle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Hartman, G. J.; Neuner, G. J.; Pavlosky, J.

    1974-01-01

    The 100 mission lifetime requirement for the space shuttle orbiter vehicle dictates a unique set of requirements for the Thermal Protection System (TPS) thermal and heat flow instrumentation. This paper describes the design and development of such instrumentation with emphasis on assessment of the accuracy of the measurements when the instrumentation is an integral part of the TPS. The temperature and heat flow sensors considered for this application are described and the optimum choices discussed. Installation techniques are explored and the resulting impact on the system error defined.

  2. Gaussian error correction of quantum states in a correlated noisy channel.

    PubMed

    Lassen, Mikael; Berni, Adriano; Madsen, Lars S; Filip, Radim; Andersen, Ulrik L

    2013-11-01

    Noise is the main obstacle for the realization of fault-tolerant quantum information processing and secure communication over long distances. In this work, we propose a communication protocol relying on simple linear optics that optimally protects quantum states from non-Markovian or correlated noise. We implement the protocol experimentally and demonstrate the near-ideal protection of coherent and entangled states in an extremely noisy channel. Since all real-life channels are exhibiting pronounced non-Markovian behavior, the proposed protocol will have immediate implications in improving the performance of various quantum information protocols.

  3. Explaining errors in children's questions.

    PubMed

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press].

  4. 77 FR 4322 - Agency Information Collection Activities: Announcement of Board Approval Under Delegated...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-27

    ... statements of activity, the consumer's potential liability for unauthorized transfers, and error resolution... transfers to or from a consumer's account. Current Actions: On May 23, 2011, the Federal Reserve published a... proposal contained new protections for consumers who send remittance transfers to other consumers or...

  5. 40 CFR 92.107 - Fuel flow measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Fuel flow measurement. 92.107 Section...) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.107 Fuel flow...

  6. 40 CFR 92.107 - Fuel flow measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Fuel flow measurement. 92.107 Section...) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.107 Fuel flow...

  7. 40 CFR 92.107 - Fuel flow measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Fuel flow measurement. 92.107 Section...) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.107 Fuel flow...

  8. 40 CFR 92.107 - Fuel flow measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Fuel flow measurement. 92.107 Section...) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.107 Fuel flow...

  9. 40 CFR 92.107 - Fuel flow measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Fuel flow measurement. 92.107 Section...) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.107 Fuel flow...

  10. Metal flame spray coating protects electrical cables in extreme environment

    NASA Technical Reports Server (NTRS)

    Brady, R. D.; Fox, H. A.

    1967-01-01

    Metal flame spray coating prevents EMF measurement error in sheathed instrumentation cables that are externally attached to cylinders cooled on the inside but exposed to gamma radiation on the outside. The coating provides a thermally conductive path that carries away radiation-induced heat within the cables.

  11. 78 FR 15876 - Activation of Ice Protection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-13

    ... procedures in the Airplane Flight Manual for operating in icing conditions must be initiated. (2) Visual cues... procedures in the Airplane Flight Manual for operating in icing conditions must be initiated. (3) If the... operating rules for flight in icing conditions. This document corrects an error in the amendatory language...

  12. An Efficient Silent Data Corruption Detection Method with Error-Feedback Control and Even Sampling for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Berrocal, Eduardo; Cappello, Franck

    The silent data corruption (SDC) problem is attracting increasing attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with the bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in the detection sensitivity.
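
    A minimal sketch of the one-step-ahead idea, with a single linear extrapolation predictor and a running-error threshold standing in for the paper's multi-predictor scheme with error-feedback control; the threshold radius and warm-up length are illustrative parameters.

    ```python
    import numpy as np

    def detect_sdc(series, radius=3.0):
        """Flag points whose value deviates from a linear extrapolation of the
        two previous samples by more than `radius` times the running error std."""
        flagged, errs = [], []
        for i in range(2, len(series)):
            pred = 2 * series[i - 1] - series[i - 2]   # one-step linear extrapolation
            err = series[i] - pred
            if len(errs) > 10 and abs(err) > radius * np.std(errs):
                flagged.append(i)
            else:
                errs.append(err)
        return flagged

    data = np.sin(np.linspace(0, 20, 500))
    data[250] += 0.8                                   # injected bit-flip-like spike
    print(detect_sdc(data))                            # should include index 250
    ```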

  13. Cross Section Sensitivity and Propagated Errors in HZE Exposures

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.

    2005-01-01

    It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character in interaction with shielding material nuclei, forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate dose equivalent during solar minimum, with units (cSv/yr), associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions and heavy ions. We investigate the sensitivity of dose equivalent calculations to errors in nuclear fragmentation cross sections. We do this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross sections. Numerical differentiation with respect to the cross sections will be evaluated in a broad class of materials including polyethylene, aluminum and copper. We will identify the most important cross sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.
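
    The numerical differentiation step can be illustrated with a central-difference sensitivity of a dose functional with respect to one cross section. The dose_equivalent function below is a hypothetical placeholder for the full Boltzmann transport calculation described above, chosen only so the derivative can be checked analytically.

    ```python
    import numpy as np

    def dose_equivalent(sigma):
        """Placeholder transport model: dose as a smooth function of the
        cross-section vector (illustrative stand-in only)."""
        return float(np.sum(np.exp(-sigma) + 0.1 * sigma**2))

    def sensitivity(sigma, j, h=1e-4):
        """Central-difference estimate of dD/dsigma_j."""
        up, dn = sigma.copy(), sigma.copy()
        up[j] += h
        dn[j] -= h
        return (dose_equivalent(up) - dose_equivalent(dn)) / (2 * h)

    sigma = np.full(10, 0.5)        # toy cross-section vector
    print(sensitivity(sigma, 3))    # compare with the analytic -e^{-0.5} + 0.1
    ```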

  14. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    PubMed

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors.
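
    The reported model fit and cross-validation correspond to a standard regression workflow, sketched below on synthetic data; the mixing weights, noise levels, and variable names are invented for illustration and do not reproduce the study's results.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(3)
    w = np.array([0.2, 0.5, 0.3])                  # invented mixing weights
    lab = rng.uniform(0, 0.3, (100, 3))            # three lab error metrics per name pair
    real1 = lab @ w + rng.normal(0, 0.05, 100)     # chain-1 error rates (synthetic)

    # Fit on chain 1 and report the variance explained.
    model = LinearRegression().fit(lab, real1)
    print("R^2, chain 1:", r2_score(real1, model.predict(lab)))

    # Cross-validate against a second chain's rates (synthetic stand-in).
    real2 = lab @ (w + 0.05) + rng.normal(0, 0.05, 100)
    print("R^2, chain 2:", r2_score(real2, model.predict(lab)))
    ```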

  15. The genomic structure: proof of the role of non-coding DNA.

    PubMed

    Bouaynaya, Nidhal; Schonfeld, Dan

    2006-01-01

    We prove that introns play the role of a decoy in absorbing mutations, in the same way hollow uninhabited structures are used by the military to protect important installations. Our approach is based on a probability of error analysis, where errors are mutations which occur in the exon sequences. We derive the optimal exon length distribution, which minimizes the probability of error in the genome. Furthermore, to understand how Nature can generate the optimal distribution, we propose a diffusive random walk model for exon generation throughout evolution. This model results in an alpha-stable exon length distribution, which is asymptotically equivalent to the optimal distribution. Experimental results show that both distributions accurately fit the real data. Given that introns also drive biological evolution by increasing the rate of unequal crossover between genes, we conclude that the role of introns is to maintain an ingenious balance between stability and adaptability in eukaryotic genomes.

  16. Postfabrication Phase Error Correction of Silicon Photonic Circuits by Single Femtosecond Laser Pulses

    DOE PAGES

    Bachman, Daniel; Chen, Zhijiang; Wang, Christopher; ...

    2016-11-29

    Phase errors caused by fabrication variations in silicon photonic integrated circuits are an important problem, which negatively impacts device yield and performance. This study reports our recent progress in the development of a method for permanent, postfabrication phase error correction of silicon photonic circuits based on femtosecond laser irradiation. Using a beam-shaping technique, we achieve a 14-fold enhancement in the phase tuning resolution of the method with a Gaussian-shaped beam compared to a top-hat beam. The large improvement in the tuning resolution makes the femtosecond laser method potentially useful for very fine phase trimming of silicon photonic circuits. Finally, we also show that femtosecond laser pulses can directly modify silicon photonic devices through a SiO2 cladding layer, making it the only permanent postfabrication method that can tune silicon photonic circuits protected by an oxide cladding.

  17. A family of chaotic pure analog coding schemes based on baker's map function

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun

    2015-12-01

    This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions: the baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against the fold error (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
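
    A toy version of the encoding idea: iterate a one-dimensional stretch-and-fold map so that a single source value expands into a chaotic trajectory used as its analog codeword, then decode by searching for the initial condition whose trajectory best matches the noisy reception. The brute-force ML search below is a stand-in for the paper's ML/MMSE decoders, and the grid size and noise level are illustrative.

    ```python
    import numpy as np

    def bakers_encode(x0, n):
        """Return n stretch-and-fold iterates of x0 in [0, 1) as the analog codeword."""
        xs, x = [], x0
        for _ in range(n):
            x = 2 * x if x < 0.5 else 2 * x - 1   # one-dimensional baker-type fold
            xs.append(x)
        return np.array(xs)

    def ml_decode(rx, n, grid=20_000):
        """Brute-force ML decoding: the x0 whose trajectory is closest to rx."""
        cands = np.linspace(0, 1, grid, endpoint=False)
        return min(cands, key=lambda c: np.sum((bakers_encode(c, n) - rx) ** 2))

    x0 = 0.377
    rx = bakers_encode(x0, 8) + np.random.default_rng(2).normal(0, 0.05, 8)
    print("decoded:", ml_decode(rx, 8))            # close to 0.377
    ```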

  18. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping.

    PubMed

    Kubica, Aleksander; Beverland, Michael E; Brandão, Fernando; Preskill, John; Svore, Krysta M

    2018-05-04

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p_{3DCC}^{(1)}≃1.9% and p_{3DCC}^{(2)}≃27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  19. Achieving the Heisenberg limit in quantum metrology using quantum error correction.

    PubMed

    Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang

    2018-01-08

    Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.

  20. Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.

    1982-04-01

    This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/sec synchronous channels with random bit errors of up to 1%. Presented are the results of our investigation of a number of aspects of the baseband LPC coder with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. This optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that the coder produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.

  1. 4D modeling in high-rise construction

    NASA Astrophysics Data System (ADS)

    Balakina, Anastasiya; Simankina, Tatyana; Lukinov, Vitaly

    2018-03-01

    High-rise construction is a complex construction process that requires more refined and sophisticated tools for design, planning and construction management. The use of BIM technologies makes it possible to minimize the risks associated with design errors and errors that occur during construction. This article discusses a visual planning method using the 4D model, which allows the project team to create an accurate and complete construction plan that is much more difficult to achieve with traditional planning methods. The use of the 4D model in the construction of a 70-story building made it possible to detect spatial and temporal errors before the start of construction work. In addition to identifying design errors, 4D modeling made it possible to optimize crane operation, the placement of building structures and materials at various stages of construction, and the organization of the work itself, as well as to monitor site-preparation activities for compliance with labor protection and safety requirements, which resulted in saving money and time.

  2. 75 FR 52446 - CBP Dec. 10-29; Technical Corrections to Customs and Border Protection Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-26

    ... that the importing and general public are aware of CBP programs, requirements, and procedures regarding... Stat. 2597. Therefore, in order to reflect the inclusion of clerical error, mistake of fact, or other... certification of origin import requirements under the United States-Chile Free Trade Agreement (CFTA), contains...

  3. Common data buffer

    NASA Technical Reports Server (NTRS)

    Byrne, F.

    1981-01-01

    Time-shared interface speeds data processing in distributed computer network. Two-level high-speed scanning approach routes information to buffer, portion of which is reserved for series of "first-in, first-out" memory stacks. Buffer address structure and memory are protected from noise or failed components by error correcting code. System is applicable to any computer or processing language.

  4. Maneuver Analysis and Targeting Strategy for the Stardust Re-Entry Capsule

    NASA Technical Reports Server (NTRS)

    Helfrich, Clifford E.; Bhat, Ram; Kangas, Julie; Wilson, Roby; Wong, Mau; Potts, Chris; Williams, Ken

    2006-01-01

    Stardust employed biased maneuvers to limit turns and minimize execution errors. Biased maneuvers also addressed planetary protection and safety issues. Stardust utilized a fixed-direction burn for the final maneuver to match the prevailing attitude so no turns were needed. Performance of the final burn was calibrated in flight.

  5. 7 CFR 97.121 - Corrected certificate-applicant's mistake.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... applicant of a clerical or typographical nature, or of minor character, or in the description of the variety (including, but not limited to, the use of a misleading variety name or a name assigned to a different... LABORATORY TESTING PROGRAMS PLANT VARIETY AND PROTECTION Correction of Errors in Certificate § 97.121...

  6. Helmet Electronics & Display System-Upgradeable Protection (HEaDS-UP) Phase III Assessment: Headgear Effects on Auditory Perception

    DTIC Science & Technology

    2013-11-01

    difference between front and rear was less pronounced. Localization errors near 0° and 180° are dominated by front-back confusions because binaural ...used to disambiguate binaural information; therefore, it can be argued that most differences in auditory localization ability resulting from

  7. 77 FR 42988 - Updating OSHA Construction Standards Based on National Consensus Standards; Head Protection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-23

    .... OSHA-2011-0184] RIN 1218-AC65 Updating OSHA Construction Standards Based on National Consensus... Administration (OSHA), Department of Labor. ACTION: Direct final rule; correction. SUMMARY: OSHA is correcting a... confusion resulting from a drafting error. OSHA published the DFR on June 22, 2012 (77 FR 37587). OSHA also...

  8. High-Threshold Fault-Tolerant Quantum Computation with Analog Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Fukui, Kosuke; Tomita, Akihisa; Okamoto, Atsushi; Fujii, Keisuke

    2018-04-01

    To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. However, it is still challenging to experimentally generate the GKP qubit at the squeezing level, 14.8 dB, required by existing fault-tolerant schemes. To reduce this requirement, we propose a high-threshold fault-tolerant quantum computation with GKP qubits using topologically protected measurement-based quantum computation with the surface code. By harnessing analog information contained in the GKP qubits, we apply analog quantum error correction to the surface code. Furthermore, we develop a method to prevent the squeezing level from decreasing during the construction of the large-scale cluster states for the topologically protected, measurement-based quantum computation. We numerically show that the required squeezing level can be relaxed to less than 10 dB, which is within the reach of current experimental technology. Hence, this work can considerably alleviate this experimental requirement and take a step closer to the realization of large-scale quantum computation.

  9. Protecting Information

    NASA Astrophysics Data System (ADS)

    Loepp, Susan; Wootters, William K.

    2006-09-01

    For many everyday transmissions, it is essential to protect digital information from noise or eavesdropping. This undergraduate introduction to error correction and cryptography is unique in devoting several chapters to quantum cryptography and quantum computing, thus providing a context in which ideas from mathematics and physics meet. By covering such topics as Shor's quantum factoring algorithm, this text informs the reader about current thinking in quantum information theory and encourages an appreciation of the connections between mathematics and science. Of particular interest are the potential impacts of quantum physics: (i) a quantum computer, if built, could crack our currently used public-key cryptosystems; and (ii) quantum cryptography promises to provide an alternative to these cryptosystems, basing its security on the laws of nature rather than on computational complexity. No prior knowledge of quantum mechanics is assumed, but students should have a basic knowledge of complex numbers, vectors, and matrices. The book is accessible to readers familiar with matrix algebra, vector spaces and complex numbers; it is the first undergraduate text to cover cryptography, error correction, and quantum computation together, and it features exercises designed to enhance understanding, including a number of computational problems, available from www.cambridge.org/9780521534765.

  10. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    PubMed Central

    Clark, Kevin B.

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987
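
    The classical repetition error-correction code referenced above has a very small core, shown here in its three-bit form: each message bit is transmitted three times and decoded by majority vote, which corrects any single bit flip per triplet.

    ```python
    def rep3_encode(bits):
        """Repeat each bit three times."""
        return [b for bit in bits for b in (bit, bit, bit)]

    def rep3_decode(coded):
        """Majority vote over each consecutive triplet."""
        return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

    msg = [1, 0, 1]
    tx = rep3_encode(msg)           # [1,1,1, 0,0,0, 1,1,1]
    tx[4] ^= 1                      # single bit-flip error in the second triplet
    assert rep3_decode(tx) == msg   # the majority vote corrects it
    ```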

  11. Strain transfer analysis of optical fiber based sensors embedded in an asphalt pavement structure

    NASA Astrophysics Data System (ADS)

    Wang, Huaping; Xiang, Ping

    2016-07-01

    Asphalt pavement is vulnerable to random damage, such as cracking and rutting, which can be proactively identified by distributed optical fiber sensing technology. However, due to the material nature of optical fibers, a bare fiber is apt to be damaged during the construction process of pavements. Thus, a protective layer is needed for this application. Unfortunately, part of the strain of the host material is absorbed by the protective layer when transferring the strain to the sensing fiber. To account for the strain transfer error, in this paper a theoretical analysis of the strain transfer of a three-layered general model has been carried out by introducing Goodman’s hypothesis to describe the interfacial shear stress relationship. The model considers the viscoelastic behavior of the host material and protective layer. The effects of one crack in the host material and of the sensing length on the strain transfer relationship are discussed. To validate the effectiveness of the strain transfer analysis, a flexible asphalt-mastic packaged distributed optical fiber sensor was designed and tested in a laboratory environment to monitor the distributed strain and appearance of cracks in an asphalt concrete beam at two different temperatures. The experimental results indicated that the developed strain transfer formula can significantly reduce the strain transfer error, and that the asphalt-mastic packaged optical fiber sensor can successfully monitor the distributed strain and identify local cracks.
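
    The record does not reproduce the paper's strain transfer formula. As a hedged stand-in, the classical elastic shear-lag result for a coated fiber gives an average strain-transfer ratio of 1 − tanh(kL)/(kL) over a gauge of half-length L, where k is a shear-lag parameter set by the interlayer stiffness. The sketch below uses that simplified model, not the paper's viscoelastic three-layer analysis, and the numbers are invented:

```python
import numpy as np

# Elastic shear-lag sketch (NOT the paper's viscoelastic three-layer
# model): for gauge half-length L (m) and shear-lag parameter k (1/m),
# a common result for the average strain-transfer ratio is
#     alpha_avg = 1 - tanh(k*L) / (k*L),
# i.e., the fraction of host strain actually seen by the fiber. The host
# strain can then be recovered as measured_strain / alpha_avg.

def average_transfer_ratio(k, L):
    return 1.0 - np.tanh(k * L) / (k * L)

L = 0.05                                   # 5 cm gauge half-length (invented)
for k in (50.0, 200.0, 1000.0):            # stiffer interlayer -> larger k
    print(f"k = {k:6.0f} 1/m  alpha_avg = {average_transfer_ratio(k, L):.4f}")
```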

  12. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, David D.; Napolitano, Jr., Leonard M.; Redinbo, G. Robert; Shreeve, William O.

    1994-01-01

    An internally fault-tolerant data error detection and correction integrated circuit device (10) and a method of operating same. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system, and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor, the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor, the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and enabled to both detect and indicate an occurrence of an internal failure. A generalization of the device to protect 64-bit data with 16-bit parity against byte-wide errors is also presented.
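
    The patented dual-orthogonal-basis circuit itself is not reproduced in this record. As a code-level illustration of the same protection (eight data nibbles plus two parity nibbles over GF(16), any single erroneous nibble correctable and two erroneous nibbles detectable), the following sketch uses a textbook Reed-Solomon-style construction; the field polynomial and symbol layout are assumptions for illustration, not the patented hardware:

```python
# GF(16) arithmetic, primitive polynomial x^4 + x + 1 (an assumption).
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gdiv(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

def encode(data):
    """Append two parity nibbles to 8 data nibbles (positions 8 and 9)."""
    A = B = 0
    for i, d in enumerate(data):
        A ^= d                          # syndrome-0 contribution
        B ^= gmul(d, EXP[i])            # syndrome-1 contribution
    p9 = gdiv(B ^ gmul(A, EXP[8]), EXP[8] ^ EXP[9])
    p8 = A ^ p9
    return data + [p8, p9]

def correct(word):
    """Correct any single erroneous nibble in a 10-nibble word."""
    S0 = S1 = 0
    for i, c in enumerate(word):
        S0 ^= c
        S1 ^= gmul(c, EXP[i])
    if S0 == 0 and S1 == 0:
        return word                     # no error
    if S0 == 0 or S1 == 0:
        raise ValueError("multi-nibble error detected")
    pos = LOG[gdiv(S1, S0)]             # error locator: alpha**pos
    if pos >= 10:
        raise ValueError("multi-nibble error detected")
    word[pos] ^= S0                     # S0 is the error magnitude
    return word

data = [0x3, 0xA, 0x7, 0x0, 0xF, 0x1, 0xC, 0x5]
cw = encode(data)
cw[6] ^= 0xB                            # corrupt one nibble
assert correct(cw)[:8] == data
```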

  13. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, D.D.; Napolitano, L.M. Jr.; Redinbo, G.R.; Shreeve, W.O.

    1994-03-01

    An internally fault-tolerant data error detection and correction integrated circuit device and a method of operating same is described. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system, and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor, the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor, the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and enabled to both detect and indicate an occurrence of an internal failure. A generalization of the device to protect 64-bit data with 16-bit parity against byte-wide errors is also presented. 8 figures.

  14. Method for reducing measurement errors of a Langmuir probe with a protective RF shield

    NASA Astrophysics Data System (ADS)

    Riaby, V.; Masherov, P.; Savinov, V.; Yakunin, V.

    2018-04-01

    Probe measurements were conducted in the middle cross-section of an inductive, low-pressure xenon plasma using a straight cylindrical Langmuir probe with a bare metal shield that protected the probe from radio frequency interference. As a result, reliable radial distributions of the plasma parameters were obtained. Subsequent analyses of these measurements revealed that the electron energy distribution function (EEDF) deviated substantially from Maxwellian functions and that this deviation depended on the length of the probe shield. To evaluate the shield's influence on the measurement results, in addition to the first probe (which was moved radially as its shield length was varied over the range lsh1 = lmax to 0), an additional L-shaped probe was inserted at a different location. This probe was moved differently from the first probe and provided confirmatory measurements in the common special position where lsh1 = 0 and lsh2 ≠ 0. In this position, the second shield decreased all the plasma parameters. A comparison of the probe datasets revealed how measurement errors relate to the EEDF distortions caused by the bare probe shields. This dependence was used to correct the measurements performed with the first probe by eliminating the influence of its shield. Physical analyses based on earlier studies showed that these peculiarities are caused by a short-circuited double-probe effect that occurs in bare metal probe protective shields.

  15. Error suppression and correction for quantum annealing

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel

    While adiabatic quantum computing and quantum annealing enjoy a certain degree of inherent robustness against excitations and control errors, there is no escaping the need for error correction or suppression. In this talk I will give an overview of our work on the development of such error correction and suppression methods. We have experimentally tested one such method combining encoding, energy penalties and decoding, on a D-Wave Two processor, with encouraging results. Mean field theory shows that this can be explained in terms of a softening of the closing of the gap due to the energy penalty, resulting in protection against excitations that occur near the quantum critical point. Decoding recovers population from excited states and enhances the success probability of quantum annealing. Moreover, we have demonstrated that using repetition codes with increasing code distance can lower the effective temperature of the annealer. References: K.L. Pudenz, T. Albash, D.A. Lidar, ``Error corrected quantum annealing with hundreds of qubits'', Nature Commun. 5, 3243 (2014). K.L. Pudenz, T. Albash, D.A. Lidar, ``Quantum annealing correction for random Ising problems'', Phys. Rev. A. 91, 042302 (2015). S. Matsuura, H. Nishimori, T. Albash, D.A. Lidar, ``Mean Field Analysis of Quantum Annealing Correction''. arXiv:1510.07709. W. Vinci et al., in preparation.

  16. The analysis and compensation of errors of precise simple harmonic motion control under high speed and large load conditions based on servo electric cylinder

    NASA Astrophysics Data System (ADS)

    Ma, Chen-xi; Ding, Guo-qing

    2017-10-01

    Simple harmonic waves and synthesized simple harmonic waves are widely used in instrument testing. However, because of errors caused by gear clearance and the time-delay error of the FPGA, it is difficult to control a servo electric cylinder in precise simple harmonic motion under high-speed, high-frequency, and large-load conditions. To solve this problem, a method of error compensation is proposed in this paper. In the method, a displacement sensor is fitted on the piston rod of the electric cylinder. The real-time displacement of the piston rod obtained by the sensor is fed back to the input of the servo motor, realizing closed-loop control; compensation pulses are then issued in the next period of the synthesized wave, as sketched below. This paper uses an FPGA as the processing core. The software mainly comprises a waveform generator, an Ethernet module, a memory module, a pulse generator, a pulse selector, a protection module, and an error compensation module. A shock-absorber durability test rig is used as the testing platform; it mainly comprises a single electric cylinder, the servo motor driving the cylinder, and the servo motor driver.
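
    A minimal sketch of the period-to-period compensation idea, with a hypothetical plant model standing in for the electric cylinder (gain droop plus a small lag) and an invented compensation gain:

```python
import numpy as np

# Period-to-period compensation sketch: track a commanded simple harmonic
# displacement, measure the actual displacement each period, and add a
# scaled copy of the previous period's error to the next period's command.
# The "plant" below and the gain 0.8 are hypothetical stand-ins, not the
# paper's servo model.

N = 100                                     # samples per period
t = np.arange(N) / N
target = 5.0 * np.sin(2 * np.pi * t)        # commanded wave, mm

def plant(command):
    """Hypothetical cylinder response: 8% gain droop and a 2-sample lag."""
    return 0.92 * np.roll(command, 2)

command = target.copy()
for period in range(5):
    actual = plant(command)
    error = target - actual                 # per-sample tracking error
    command = command + 0.8 * error         # compensate in the next period
    print(f"period {period}: max |error| = {np.max(np.abs(error)):.3f} mm")
```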

  17. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

  18. Tirilazad mesylate protects stored erythrocytes against osmotic fragility.

    PubMed

    Epps, D E; Knechtel, T J; Bacznskyj, O; Decker, D; Guido, D M; Buxser, S E; Mathews, W R; Buffenbarger, S L; Lutzke, B S; McCall, J M

    1994-12-01

    The hypoosmotic lysis curve of freshly collected human erythrocytes is consistent with a single Gaussian error function with a mean of 46.5 +/- 0.25 mM NaCl and a standard deviation of 5.0 +/- 0.4 mM NaCl. After extended storage of RBCs under standard blood bank conditions, the lysis curve conforms to the sum of two error functions, rather than to a single error function with a shifted mean and broadened width. Thus, two distinct sub-populations with different fragilities are present instead of a single, broadly distributed population. One population is identical to the freshly collected erythrocytes, whereas the other population consists of osmotically fragile cells. The rate of generation of the new, osmotically fragile population of cells was used to probe the hypothesis that lipid peroxidation is responsible for the induction of membrane fragility. If so, then the antioxidant tirilazad mesylate (U-74,006f) should protect against this degradation of stored erythrocytes. We found that tirilazad mesylate, at 17 microM (1.5 mol% with respect to membrane lecithin), significantly retards the formation of the osmotically fragile RBCs. Concomitantly, the concentration of free hemoglobin which accumulates during storage is markedly reduced by the drug. Since the presence of the drug also decreases the amount of F2-isoprostanes formed during the storage period, an antioxidant mechanism must be operative. These results demonstrate that tirilazad mesylate significantly decreases the number of fragile erythrocytes formed during storage in the blood bank.
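
    The two-population model is straightforward to fit numerically. The sketch below, using synthetic data with parameters only loosely following the abstract, expresses the lysis curve as a weighted sum of two cumulative normals (Gaussian error functions) and recovers the two sub-populations with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Fraction lysed vs NaCl concentration as a weighted sum of two cumulative
# normals, one per sub-population (lysis increases as concentration drops).
# Data and parameters are synthetic, for illustration only.

def two_pop_lysis(c, w, mu1, sd1, mu2, sd2):
    return w * norm.cdf((mu1 - c) / sd1) + (1 - w) * norm.cdf((mu2 - c) / sd2)

c = np.linspace(20, 80, 61)                          # mM NaCl
true = two_pop_lysis(c, 0.7, 46.5, 5.0, 62.0, 6.0)   # normal + fragile pool
rng = np.random.default_rng(0)
data = np.clip(true + rng.normal(0, 0.01, c.size), 0, 1)

popt, _ = curve_fit(two_pop_lysis, c, data, p0=[0.5, 45, 5, 60, 5])
w, mu1, sd1, mu2, sd2 = popt
print(f"normal pool: {w:.2f} at {mu1:.1f}±{sd1:.1f} mM; "
      f"fragile pool: {1 - w:.2f} at {mu2:.1f}±{sd2:.1f} mM")
```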

  19. The issue of multiple univariate comparisons in the context of neuroelectric brain mapping: an application in a neuromarketing experiment.

    PubMed

    Vecchiato, G; De Vico Fallani, F; Astolfi, L; Toppi, J; Cincotti, F; Mattia, D; Salinari, S; Babiloni, F

    2010-08-30

    This paper presents some considerations about the use of adequate statistical techniques in the framework of neuroelectromagnetic brain mapping. With the use of advanced EEG/MEG recording setups involving hundreds of sensors, the issue of protection against the type I errors that can occur during the execution of hundreds of univariate statistical tests has gained interest. In the present experiment, we investigated the EEG signals from a mannequin acting as an experimental subject. Data were collected while performing a neuromarketing experiment and analyzed with state-of-the-art computational tools adopted in the specialized literature. Results showed that electric data from the mannequin's head present statistically significant differences in power spectra during the visualization of a commercial advertisement, when compared to the power spectra gathered during a documentary, when no adjustments were made to the alpha level of the multiple univariate tests performed. The use of the Bonferroni or Bonferroni-Holm adjustments correctly returned no differences between the signals gathered from the mannequin in the two experimental conditions. A partial sample of recently published literature in different neuroscience journals suggested that at least 30% of the papers do not use statistical protection against type I errors. While the occurrence of type I errors can easily be managed with appropriate statistical techniques, the use of such techniques is still not largely adopted in the literature. Copyright (c) 2010 Elsevier B.V. All rights reserved.
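
    The effect is easy to reproduce: testing many channels of pure noise at an unadjusted alpha level yields spurious "significant" channels, while a Bonferroni or Bonferroni-Holm adjustment removes them. A small demonstration with invented channel and trial counts:

```python
import numpy as np
from scipy import stats

# One t-test per channel on pure noise (the "mannequin"): spurious hits
# appear unless the alpha level is adjusted for multiplicity.

rng = np.random.default_rng(1)
n_channels, n_trials, alpha = 128, 20, 0.05
cond_a = rng.normal(size=(n_channels, n_trials))   # "advertisement" spectra
cond_b = rng.normal(size=(n_channels, n_trials))   # "documentary" spectra

p = np.array([stats.ttest_ind(cond_a[i], cond_b[i]).pvalue
              for i in range(n_channels)])

print("uncorrected hits:", int(np.sum(p < alpha)))           # false positives
print("Bonferroni hits: ", int(np.sum(p < alpha / n_channels)))

# Bonferroni-Holm: step-down variant; stop at the first non-rejection.
holm_hits = 0
for i, pv in enumerate(np.sort(p)):
    if pv < alpha / (n_channels - i):
        holm_hits += 1
    else:
        break
print("Holm hits:       ", holm_hits)
```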

  20. Algorithm-Based Fault Tolerance Integrated with Replication

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2008-01-01

    In a proposed approach to programming and utilization of commercial off-the-shelf computing equipment, a combination of algorithm-based fault tolerance (ABFT) and replication would be utilized to obtain high degrees of fault tolerance without incurring excessive costs. The basic idea of the proposed approach is to integrate ABFT with replication such that the algorithmic portions of computations would be protected by ABFT, and the logical portions by replication. ABFT is an extremely efficient, inexpensive, high-coverage technique for detecting and mitigating faults in computer systems used for algorithmic computations, but does not protect against errors in logical operations surrounding algorithms.
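
    A minimal sketch of the ABFT idea for the algorithmic portion, using the classic Huang-Abraham checksum scheme for matrix multiplication (an illustration of the general technique, not the specific JPL scheme proposed here):

```python
import numpy as np

# ABFT checksum sketch: extend A with a column-sum row and B with a
# row-sum column; after C = A @ B, the checksum row/column of C must equal
# the sums of its data part, so a single corrupted element can be detected
# and located cheaply.

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))

A_ext = np.vstack([A, A.sum(axis=0)])                  # column-checksum row
B_ext = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum column
C_ext = A_ext @ B_ext

C_ext[1, 2] += 0.5                                     # inject a fault

row_err = C_ext[:-1, :-1].sum(axis=0) - C_ext[-1, :-1]
col_err = C_ext[:-1, :-1].sum(axis=1) - C_ext[:-1, -1]
bad_rows = np.flatnonzero(np.abs(col_err) > 1e-9)
bad_cols = np.flatnonzero(np.abs(row_err) > 1e-9)
print("fault located at", (bad_rows[0], bad_cols[0]))  # -> (1, 2)
```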

  1. Learning from adverse incidents involving medical devices.

    PubMed

    Amoore, John; Ingram, Paula

    While an adverse event involving a medical device is often ascribed to either user error or device failure, the causes are typically multifactorial. A number of incidents involving medical devices are explored using this approach to investigate the various causes of the incident and the protective barriers that minimised or prevented adverse consequences. User factors, including mistakes, omissions and lack of training, conspired with background factors--device controls and device design, storage conditions, hidden device damage and physical layout of equipment when in use--to cause the adverse events. Protective barriers that prevented or minimised the consequences included staff vigilance, operating procedures and alarms.

  2. 21st Century Skin Findings Response.

    PubMed

    Reese, V; Croley, J A; Ryan, M P; Wagner, R F

    2018-04-28

    We read with interest the letter by Ishida et al, "Skin Findings of 21st Century Movie Characters."1 The authors conclude that the prevalence of movie villains with cutaneous lesions in cinema since 2000 is lower than in films released in the 20th century. Reviewing their examples, we note some frank errors in the data presented. Immortan Joe from "Mad Max: Fury Road" is listed as having a "lip deficit." This is due to trauma, and under his breathing apparatus there is marked scarring. This article is protected by copyright. All rights reserved.

  3. Challenges and Plans for Injection and Beam Dump

    NASA Astrophysics Data System (ADS)

    Barnes, M.; Goddard, B.; Mertens, V.; Uythoven, J.

    The injection and beam dumping systems of the LHC will need to be upgraded to comply with the requirements of operation with the HL-LHC beams. The elements of the injection system concerned are the fixed and movable absorbers which protect the LHC in case of an injection kicker error and the injection kickers themselves. The beam dumping system elements under study are the absorbers which protect the aperture in case of an asynchronous beam dump and the beam absorber block. The operational limits of these elements and the new developments in the context of the HL-LHC project are described.

  4. Second-order shaped pulses for solid-state quantum computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Pinaki

    2008-01-01

    We present the construction and detailed analysis of highly optimized self-refocusing pulse shapes for several rotation angles. We characterize the constructed pulses by the coefficients appearing in the Magnus expansion up to second order. This allows a semianalytical analysis of the performance of the constructed shapes in sequences and composite pulses by computing the corresponding leading-order error operators. Higher orders can be analyzed with the numerical technique suggested by us previously. We illustrate the technique by analyzing several composite pulses designed to protect against pulse amplitude errors, and on decoupling sequences for potentially long chains of qubits with on-site and nearest-neighbor couplings.
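
    As a hedged illustration of a composite pulse protecting against amplitude errors (using the classic BB1 sequence rather than the authors' optimized shapes), the following simulation compares the gate fidelity of a naive rotation and its BB1-protected version under a systematic amplitude error:

```python
import numpy as np

# BB1 composite pulse (Wimperis): a target x-rotation theta is replaced by
#   R_phi(pi) R_3phi(2*pi) R_phi(pi) R_0(theta),  phi = arccos(-theta/(4*pi)),
# which cancels systematic amplitude errors to second order.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(theta, phi):
    """Rotation by theta about the axis cos(phi)*x + sin(phi)*y."""
    n = np.cos(phi) * X + np.sin(phi) * Y
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n

def fidelity(U, V):
    return np.abs(np.trace(U.conj().T @ V)) / 2

theta = np.pi / 2
phi = np.arccos(-theta / (4 * np.pi))
ideal = rot(theta, 0)

for eps in (0.01, 0.05, 0.10):          # fractional amplitude error
    naive = rot(theta * (1 + eps), 0)
    bb1 = (rot(np.pi * (1 + eps), phi)
           @ rot(2 * np.pi * (1 + eps), 3 * phi)
           @ rot(np.pi * (1 + eps), phi)
           @ rot(theta * (1 + eps), 0))
    print(f"eps={eps:.2f}  naive F={fidelity(ideal, naive):.6f}  "
          f"BB1 F={fidelity(ideal, bb1):.6f}")
```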

  5. Holonomic surface codes for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco

    2018-02-01

    Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.

  6. A fingerprint key binding algorithm based on vector quantization and error correction

    NASA Astrophysics Data System (ADS)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g., fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template with a cryptographic key, so that the key is protected and can be accessed only through fingerprint verification. In order to accommodate the intrinsic fuzziness of variant fingerprints, vector quantization and an error correction technique are introduced to transform the fingerprint template before binding it with the key, after fingerprint registration and extraction of the fingerprint's global ridge pattern. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
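
    A minimal sketch of this style of key binding, in the spirit of a fuzzy commitment: a repetition code stands in for the paper's vector-quantization front end, and only a hash of the key plus an XOR mask are stored; all parameters are invented:

```python
import hashlib
import secrets

# Fuzzy-commitment-style binding: the key is expanded with a 3x repetition
# code, XOR-masked by the bit-encoded template, and only hash(key) plus the
# mask are stored. A verification template differing in a few bits still
# releases the key after majority decoding.

def repeat3(bits):
    return [b for b in bits for _ in range(3)]

def majority3(bits):
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

def bind(key_bits, template_bits):
    codeword = repeat3(key_bits)
    helper = [c ^ t for c, t in zip(codeword, template_bits)]
    digest = hashlib.sha256(bytes(key_bits)).hexdigest()
    return helper, digest                     # safe to store

def release(helper, digest, template_bits):
    noisy = [h ^ t for h, t in zip(helper, template_bits)]
    key_bits = majority3(noisy)
    if hashlib.sha256(bytes(key_bits)).hexdigest() != digest:
        raise ValueError("verification failed")
    return key_bits

key = [secrets.randbelow(2) for _ in range(16)]
enrolled = [secrets.randbelow(2) for _ in range(48)]   # template as bits
helper, digest = bind(key, enrolled)

query = enrolled.copy()
query[5] ^= 1                                  # small template variation
assert release(helper, digest, query) == key
```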

  7. Robust symmetry-protected metrology with the Haldane phase

    NASA Astrophysics Data System (ADS)

    Bartlett, Stephen D.; Brennen, Gavin K.; Miyake, Akimasa

    2018-01-01

    We propose a metrology scheme that is made robust to a wide range of noise processes by using the passive, error-preventing properties of symmetry-protected topological phases. The so-called fractionalized edge mode of an antiferromagnetic Heisenberg spin-1 chain in a rotationally-symmetric Haldane phase can be used to measure the direction of an unknown electric field, by exploiting the way in which the field direction reduces the symmetry of the chain. Specifically, the direction (and, when supplemented with a known background field, also the strength) of the field is registered in the holonomy under an adiabatic sensing protocol, and the degenerate fractionalized edge mode is protected through this process by the remaining reduced symmetry. We illustrate the scheme with respect to a potential realization by Rydberg dressed atoms.

  8. Task errors by emergency physicians are associated with interruptions, multitasking, fatigue and working memory capacity: a prospective, direct observation study.

    PubMed

    Westbrook, Johanna I; Raban, Magdalena Z; Walter, Scott R; Douglas, Heather

    2018-01-09

    Interruptions and multitasking have been demonstrated in experimental studies to reduce individuals' task performance. These behaviours are frequently used by clinicians in high-workload, dynamic clinical environments, yet their effects have rarely been studied. To assess the relative contributions of interruptions and multitasking by emergency physicians to prescribing errors. 36 emergency physicians were shadowed over 120 hours. All tasks, interruptions and instances of multitasking were recorded. Physicians' working memory capacity (WMC) and preference for multitasking were assessed using the Operation Span Task (OSPAN) and Inventory of Polychronic Values. Following observation, physicians were asked about their sleep in the previous 24 hours. Prescribing errors were used as a measure of task performance. We performed multivariate analysis of prescribing error rates to determine associations with interruptions and multitasking, also considering physician seniority, age, psychometric measures, workload and sleep. Physicians experienced 7.9 interruptions/hour. 28 clinicians were observed prescribing 239 medication orders which contained 208 prescribing errors. While prescribing, clinicians were interrupted 9.4 times/hour. Error rates increased significantly if physicians were interrupted (rate ratio (RR) 2.82; 95% CI 1.23 to 6.49) or multitasked (RR 1.86; 95% CI 1.35 to 2.56) while prescribing. Having below-average sleep showed a >15-fold increase in clinical error rate (RR 16.44; 95% CI 4.84 to 55.81). WMC was protective against errors; for every 10-point increase on the 75-point OSPAN, a 19% decrease in prescribing errors was observed. There was no effect of polychronicity, workload, physician gender or above-average sleep on error rates. Interruptions, multitasking and poor sleep were associated with significantly increased rates of prescribing errors among emergency physicians. WMC mitigated the negative influence of these factors to an extent. These results confirm experimental findings in other fields and raise questions about the acceptability of the high rates of multitasking and interruption in clinical environments. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  9. 78 FR 60379 - Board of Veterans Appeals, Voice of the Veteran Appellant Surveys; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-01

    ... collection notice in a Federal Register on September 6, 2013 (78 FR 54956), that contained errors. VA... 20420, at (202) 632-7492 or [email protected] . Correction In FR Doc. 2013-21699, published on September 6, 2013, at 78 FR 54956, make the following corrections. On page 54956, in the third column, at...

  10. 76 FR 4601 - Determinations Concerning Need for Error Correction, Partial Approval and Partial Disapproval...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-26

    ... submitting comments. E-mail: [email protected]. Fax: (202) 566-9744. Mail: Attention Docket..., Room 3334, Washington, DC 20004, Attention Docket ID No. EPA-HQ-OAR-2010-1033. Such deliveries are only..., EPA West Building, Room 3334, 1301 Constitution Ave., NW., Washington, DC. The Public Reading Room is...

  11. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array

    PubMed Central

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Tao, Yuan

    2018-01-01

    Classic core-based instrument transformers are prone to magnetic saturation, which affects their measurement accuracy and limits their application to measuring large direct currents (DC). Moreover, protection and control systems may malfunction due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed approach utilizes multiple Hall sensors evenly distributed on a circle, and the average value of all Hall sensors is taken as the final measurement. A calculation model is established for the case of magnetic field interference from a parallel wire, and simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study of an off-center primary conductor is conducted, and a Hall-sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%. PMID:29734742
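
    The principle is a discrete Ampère's law: the average tangential field over n equally spaced sensors, times 2πr/μ0, estimates the enclosed current, and an external wire's contribution cancels increasingly well as n grows. A simulation sketch with invented geometry (note the error drop beyond n = 8 and the 2.5-radius spacing echoed from the abstract):

```python
import numpy as np

# n Hall sensors on a circle of radius r sample the tangential field; their
# average times 2*pi*r/mu0 estimates the enclosed current. An interfering
# parallel wire outside the circle contributes ~zero to the true loop
# integral, and the discretization error shrinks rapidly with n.

MU0 = 4 * np.pi * 1e-7

def tangential_field(sensor_xy, wire_xy, current):
    """Tangential component (w.r.t. the sensing circle, centered at the
    origin) of an infinite straight wire's field at each sensor."""
    dx = sensor_xy[:, 0] - wire_xy[0]
    dy = sensor_xy[:, 1] - wire_xy[1]
    r2 = dx**2 + dy**2
    bx = -MU0 * current * dy / (2 * np.pi * r2)
    by = MU0 * current * dx / (2 * np.pi * r2)
    norm = np.hypot(sensor_xy[:, 0], sensor_xy[:, 1])
    tx, ty = -sensor_xy[:, 1] / norm, sensor_xy[:, 0] / norm
    return bx * tx + by * ty

r, I, I_ext = 0.05, 1000.0, 1000.0           # m, A, A (invented values)
for n in (4, 8, 16, 32):
    ang = 2 * np.pi * np.arange(n) / n
    sensors = r * np.column_stack([np.cos(ang), np.sin(ang)])
    b = (tangential_field(sensors, (0.0, 0.0), I)             # measured wire
         + tangential_field(sensors, (2.5 * r, 0.0), I_ext))  # interferer
    I_est = b.mean() * 2 * np.pi * r / MU0
    print(f"n={n:2d}  error = {abs(I_est - I) / I * 100:.4f} %")
```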

  12. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array.

    PubMed

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Abu-Siada, Ahmed; Tao, Yuan

    2018-05-05

    Classic core-based instrument transformers are prone to magnetic saturation, which affects their measurement accuracy and limits their application to measuring large direct currents (DC). Moreover, protection and control systems may malfunction due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed approach utilizes multiple Hall sensors evenly distributed on a circle, and the average value of all Hall sensors is taken as the final measurement. A calculation model is established for the case of magnetic field interference from a parallel wire, and simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study of an off-center primary conductor is conducted, and a Hall-sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%.

  13. Comparison of estimated and observed stormwater runoff for fifteen watersheds in west-central Florida, using five common design techniques

    USGS Publications Warehouse

    Trommer, J.T.; Loper, J.E.; Hammett, K.M.; Bowman, Georgia

    1996-01-01

    Hydrologists use several traditional techniques for estimating peak discharges and runoff volumes from ungaged watersheds. However, applying these techniques to watersheds in west-central Florida requires that empirical relationships be extrapolated beyond tested ranges, so there is some uncertainty as to their accuracy. Sixty-six storms in 15 west-central Florida watersheds were modeled using (1) the rational method, (2) the U.S. Geological Survey regional regression equations, (3) the Natural Resources Conservation Service (formerly the Soil Conservation Service) TR-20 model, (4) the Army Corps of Engineers HEC-1 model, and (5) the Environmental Protection Agency SWMM model. The watersheds ranged between fully developed urban and undeveloped natural watersheds. Peak discharges and runoff volumes were estimated using standard or recommended methods for determining input parameters. All model runs were uncalibrated, and the selection of input parameters was not influenced by observed data. The rational method, used only to calculate peak discharges, overestimated 45 storms, underestimated 20 storms, and estimated the same discharge for 1 storm. The mean estimation error for all storms indicates the method overestimates peak discharges. Estimation errors were generally smaller in the urban watersheds and larger in the natural watersheds. The U.S. Geological Survey regression equations provide peak discharges for storms of specific recurrence intervals, so direct comparison with observed data was limited to sixteen observed storms that had precipitation equivalent to specific recurrence intervals. The mean estimation error for all storms indicates the method overestimates both peak discharges and runoff volumes. Estimation errors were smallest for the larger natural watersheds in Sarasota County, and largest for the small watersheds located in the eastern part of the study area. The Natural Resources Conservation Service TR-20 model overestimated peak discharges for 45 storms and underestimated 21 storms, and overestimated runoff volumes for 44 storms and underestimated 22 storms. The mean estimation error for all storms modeled indicates that the model overestimates peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. The Army Corps of Engineers HEC-1 model overestimated peak discharge rates for 55 storms and underestimated 11 storms; runoff volumes were overestimated for 44 storms and underestimated for 22 storms. The mean estimation error for all the storms modeled indicates that the model overestimates peak discharge rates and runoff volumes. Generally, the smaller estimation errors in peak discharges were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. Estimation errors in runoff volumes, however, were smallest for the 3 natural watersheds located in the southernmost part of Sarasota County. The Environmental Protection Agency Storm Water Management Model produced similar peak discharges and runoff volumes when using both the Green-Ampt and Horton infiltration methods; estimates calculated with the Horton method were only slightly higher than those calculated with the Green-Ampt method. The mean estimation error for all the storms modeled indicates the model using the Green-Ampt infiltration method overestimates peak discharges and slightly underestimates runoff volumes. Using the Horton infiltration method, the model overestimates both peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the five natural watersheds in Sarasota County with the least amount of impervious cover and the lowest slopes. The largest er

  14. Inferential false memories of events: negative consequences protect from distortions when the events are free from further elaboration.

    PubMed

    Mirandola, Chiara; Toffalini, Enrico; Grassano, Massimo; Cornoldi, Cesare; Melinder, Annika

    2014-01-01

    The present experiment was conducted to investigate whether negative emotionally charged and arousing content of to-be-remembered scripted material would affect propensity towards memory distortions. We further investigated whether elaboration of the studied material through free recall would affect the magnitude of memory errors. In this study participants saw eight scripts. Each of the scripts included an effect of an action, the cause of which was not presented. Effects were either negatively emotional or neutral. Participants were assigned to either a yes/no recognition test group (recognition), or to a recall and yes/no recognition test group (elaboration + recognition). Results showed that participants in the recognition group produced fewer memory errors in the emotional condition. Conversely, elaboration + recognition participants had lower accuracy and produced more emotional memory errors than the other group, suggesting a mediating role of semantic elaboration on the generation of false memories. The role of emotions and semantic elaboration on the generation of false memories is discussed.

  15. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
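
    The flavor of the parity-allocation problem can be shown with a much-simplified model: embedded layers, i.i.d. packet loss, and a greedy spread of a parity budget. This mirrors the goal of the paper's optimal allocation, not its derivation; all weights and sizes below are invented:

```python
from math import comb

# N packets per block, i.i.d. loss probability p, embedded layers where
# layer j is useful only if layers 0..j all decode. Parity m_j means layer
# j survives up to m_j lost packets (an (N, N - m_j) erasure code).

def loss_le(n, m, p):
    """Probability that at most m of n packets are lost."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1))

def expected_gain(parity, weights, n, p):
    gain = 0.0
    m_eff = parity[0]
    for w, m in zip(weights, parity):
        m_eff = min(m_eff, m)          # embedded layers: nested protection
        gain += w * loss_le(n, m_eff, p)
    return gain

n, p = 20, 0.1
weights = [10.0, 5.0, 2.0, 1.0]        # importance of layers 0..3
parity = [0, 0, 0, 0]
for _ in range(12):                    # total parity budget: 12 symbols
    def gain_if(j):
        trial = list(parity)
        trial[j] += 1
        return expected_gain(trial, weights, n, p)
    parity[max(range(4), key=gain_if)] += 1

print("losses tolerated per layer:", parity)   # strongest for layer 0
```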

  16. Analytical N beam position monitor method

    NASA Astrophysics Data System (ADS)

    Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.

    2017-11-01

    Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
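
    For context, the underlying three-BPM relation (as used in the earlier established technique, not the paper's analytical N-BPM machinery) estimates the β-function at one BPM from measured and model phase advances to the next two BPMs; a sketch with invented numbers:

```python
import numpy as np

# Three-BPM relation: given measured phase advances phi12, phi13 from
# BPM 1 to BPMs 2 and 3, and the corresponding model values,
#     beta1 = beta1_model * (cot(phi12) - cot(phi13))
#                         / (cot(phi12_mdl) - cot(phi13_mdl)).

def cot(x):
    return np.cos(x) / np.sin(x)

def beta_from_phases(beta1_model, phi12, phi13, phi12_mdl, phi13_mdl):
    return beta1_model * (cot(phi12) - cot(phi13)) / (
        cot(phi12_mdl) - cot(phi13_mdl))

# Invented model optics and a slightly perturbed "measurement" (rad):
print(beta_from_phases(30.0, 0.83, 2.14, 0.80, 2.10))   # estimated beta (m)
```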

  17. Disinfection of reusable elastomeric respirators by health care workers: a feasibility study and development of standard operating procedures.

    PubMed

    Bessesen, Mary T; Adams, Jill C; Radonovich, Lewis; Anderson, Judith

    2015-06-01

    This was a feasibility study in a Department of Veterans Affairs Medical Center to develop a standard operating procedure (SOP) to be used by health care workers to disinfect reusable elastomeric respirators under pandemic conditions. Registered and licensed practical nurses, nurse practitioners, aides, clinical technicians, and physicians took part in the study. Health care worker volunteers were provided with manufacturers' cleaning and disinfection instructions and all necessary supplies. They were observed and filmed. SOPs were developed, based on these observations, and tested on naïve volunteer health care workers. Error rates using manufacturers' instructions and SOPs were compared. When using respirator manufacturers' cleaning and disinfection instructions, without specific training or supervision, all subjects made multiple errors. When using the SOPs developed in the study, without specific training or guidance, naïve health care workers disinfected respirators with zero errors. Reusable facial protective equipment may be disinfected by health care workers with minimal training using SOPs. Published by Elsevier Inc.

  18. Towards Holography via Quantum Source-Channel Codes.

    PubMed

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-14

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  19. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes

    PubMed Central

    Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał

    2015-01-01

    We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code, and the 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. Similar schemes can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in a large code size limit is of , where p is the probability of error on a single qubit per time step. Regarding the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905

  20. Towards Holography via Quantum Source-Channel Codes

    NASA Astrophysics Data System (ADS)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  1. Toward Biopredictive Dissolution for Enteric Coated Dosage Forms.

    PubMed

    Al-Gousous, J; Amidon, G L; Langguth, P

    2016-06-06

    The aim of this work was to develop a phosphate-buffer-based dissolution method for enteric-coated formulations with improved biopredictivity for fasted conditions. Two commercially available enteric-coated aspirin products were used as model formulations (Aspirin Protect 300 mg and Walgreens Aspirin 325 mg). The disintegration performance of these products in a physiological 8 mM pH 6.5 bicarbonate buffer (representing the conditions in the proximal small intestine) was used as a standard to optimize the employed phosphate buffer molarity. To account for the fact that a pH and buffer molarity gradient exists along the small intestine, the introduction of such a gradient was proposed for products with prolonged lag times in the proposed buffer (when the lag leads to a release lower than 75% in the first hour post acid stage), allowing the method also to predict the performance of later-disintegrating products. Dissolution performance using the accordingly developed method was compared to that observed with two well-established dissolution methods: the United States Pharmacopeia (USP) method and blank fasted state simulated intestinal fluid (FaSSIF). The resulting dissolution profiles were convoluted using GastroPlus software to obtain predicted pharmacokinetic profiles, and a pharmacokinetic study on healthy human volunteers was performed to evaluate the predictions made by the different dissolution setups. The novel method provided the best prediction, by a relatively wide margin, of the difference between the lag times of the two tested formulations, indicating that it can predict the post-gastric-emptying onset of drug release with reasonable accuracy. Both the new and the blank FaSSIF methods showed potential for establishing an in vitro-in vivo correlation (IVIVC) for the prediction of Cmax and AUC0-24 (prediction errors not more than 20%). However, these predictions are strongly affected by the highly variable first-pass metabolism, necessitating the evaluation of an absorption rate metric that is more independent of the first-pass effect; the Cmax/AUC0-24 ratio was selected for this purpose. On this metric, the new method predicted the two products' performances relative to each other very well (only 1.05% prediction error), while its predictions for the individual products' values in absolute terms were borderline, narrowly missing the regulatory 20% prediction error limits (21.51% for Aspirin Protect and 22.58% for Walgreens Aspirin). The blank FaSSIF-based method provided a good Cmax/AUC0-24 ratio prediction, in absolute terms, for Aspirin Protect (9.05% prediction error), but its prediction for Walgreens Aspirin (33.97% prediction error) was poor. It thus gave practically the same average but much higher maximum prediction errors compared to the new method, and it was strongly overdiscriminating in predicting the products' performances relative to one another. The USP method, despite not being overdiscriminating, provided poor predictions of the individual products' Cmax/AUC0-24 ratios. Overall, this indicates that the new method offers improved biopredictivity compared to established methods.

  2. Development of TPS flight test and operational instrumentation

    NASA Technical Reports Server (NTRS)

    Carnahan, K. R.; Hartman, G. J.; Neuner, G. J.

    1975-01-01

    Thermal and flow sensor instrumentation was developed for use as an integral part of the space shuttle orbiter reusable thermal protection system. The effort was performed in three tasks: a study to determine the optimum instruments and instrument installations for the space shuttle orbiter RSI and RCC TPS; tests and/or analyses to determine the instrument installations that minimize measurement errors; and analysis using data from the test program for comparison with analytical methods. A detailed review of existing state-of-the-art instrumentation in industry was performed to establish the baseline from which the research effort proceeded. From this information, detailed criteria for thermal protection system instrumentation were developed.

  3. False alarms and missed events: the impact and origins of perceived inaccuracy in tornado warning systems.

    PubMed

    Ripberger, Joseph T; Silva, Carol L; Jenkins-Smith, Hank C; Carlson, Deven E; James, Mark; Herron, Kerry G

    2015-01-01

    Theory and conventional wisdom suggest that errors undermine the credibility of tornado warning systems and thus decrease the probability that individuals will comply (i.e., engage in protective action) when future warnings are issued. Unfortunately, empirical research on the influence of warning system accuracy on public responses to tornado warnings is incomplete and inconclusive. This study adds to existing research by analyzing two sets of relationships. First, we assess the relationship between perceptions of accuracy, credibility, and warning response. Using data collected via a large regional survey, we find that trust in the National Weather Service (NWS; the agency responsible for issuing tornado warnings) increases the likelihood that an individual will opt for protective action when responding to a hypothetical warning. More importantly, we find that subjective perceptions of warning system accuracy are, as theory suggests, systematically related to trust in the NWS and (by extension) stated responses to future warnings. The second half of the study matches survey data against NWS warning and event archives to investigate a critical follow-up question: Why do some people perceive that their warning system is accurate, whereas others perceive that their system is error prone? We find that subjective perceptions are, in part, a function of objective experience, knowledge, and demographic characteristics. When considered in tandem, these findings support the proposition that errors influence perceptions about the accuracy of warning systems, which in turn impact the credibility that people assign to information provided by systems and, ultimately, public decisions about how to respond when warnings are issued. © 2014 Society for Risk Analysis.

  4. Aging and the Vulnerability of Speech to Dual Task Demands

    PubMed Central

    Kemper, Susan; Schmalzried, RaLynn; Hoffman, Lesa; Herman, Ruth

    2010-01-01

    Tracking a digital pursuit rotor task was used to measure dual task costs of language production by young and older adults. Tracking performance by both groups was affected by dual task demands: time on target declined and tracking error increased as dual task demands increased from the baseline condition to a moderately demanding dual task condition to a more demanding dual task condition. When dual task demands were moderate, older adults’ speech rate declined but their fluency, grammatical complexity, and content were unaffected. When the dual task was more demanding, older adults’ speech, like young adults’ speech, became highly fragmented, ungrammatical, and incoherent. Vocabulary, working memory, processing speed, and inhibition affected vulnerability to dual task costs: vocabulary provided some protection for sentence length and grammaticality, working memory conferred some protection for grammatical complexity, and processing speed provided some protection for speech rate, propositional density, coherence, and lexical diversity. Further, vocabulary and working memory capacity provided more protection for older adults than for young adults although the protective effect of processing speed was somewhat reduced for older adults as compared to the young adults. PMID:21186917

  5. 19 CFR 177.12 - Modification or revocation of interpretive rulings, protest review decisions, and previous...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 2 2011-04-01 2011-04-01 false Modification or revocation of interpretive rulings... 177.12 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... 174 of this chapter, if found to be in error or not in accord with the current views of Customs, may...

  6. 19 CFR 177.12 - Modification or revocation of interpretive rulings, protest review decisions, and previous...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 2 2013-04-01 2013-04-01 false Modification or revocation of interpretive rulings... 177.12 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... 174 of this chapter, if found to be in error or not in accord with the current views of Customs, may...

  7. 19 CFR 177.12 - Modification or revocation of interpretive rulings, protest review decisions, and previous...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 2 2012-04-01 2012-04-01 false Modification or revocation of interpretive rulings... 177.12 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... 174 of this chapter, if found to be in error or not in accord with the current views of Customs, may...

  8. 19 CFR 177.12 - Modification or revocation of interpretive rulings, protest review decisions, and previous...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 2 2014-04-01 2014-04-01 false Modification or revocation of interpretive rulings... 177.12 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... 174 of this chapter, if found to be in error or not in accord with the current views of Customs, may...

  9. 78 FR 12117 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Amending...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-21

    ... designed to protect investors and the public interest. Granting Market Makers more time to request a review... addresses errors in series with zero or no bid. Specifically, the Exchange proposes replacing reference to ``series quoted no bid on the Exchange'' with ``series where the NBBO bid is zero.'' This is being done to...

  10. 2011 drug packaging review: too many dangers and too many patients overlooked.

    PubMed

    2012-05-01

    Every year, Prescrire's analysis of drug packaging confirms the importance of taking packaging into account in assessing a drug's harm-benefit balance. Safe, tried and true options are available, yet the quality of most of the drug packaging Prescrire examined in 2011 left much to be desired. Few of the packaging items examined help prevent medication errors and many actually increase the risks: misleading and confusing labelling, dosing devices that create a risk of overdose, bottles without a child-proof cap, and inadequate or dangerous patient information leaflets. Umbrella brands continue to expand and are a potential source of medication errors. Some patients are at greater risk: the patient leaflets for NSAIDs endanger pregnant women and their unborn babies; children are insufficiently protected by paediatric packaging and are at risk due to the lack of child-proof caps on too many bottles. The raft of regulatory measures taken by the French drug regulatory agency (Afssaps) in the aftermath of the Mediator disaster overlooked the importance of packaging. Until drug regulatory agencies tackle the vast issue of drug packaging, it is up to healthcare professionals to protect patients from harm.

  11. Using lean to improve medication administration safety: in search of the "perfect dose".

    PubMed

    Ching, Joan M; Long, Christina; Williams, Barbara L; Blackmore, C Craig

    2013-05-01

    At Virginia Mason Medical Center (Seattle), the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study was used in combination with Lean quality improvement efforts to address medication administration safety. Lean interventions were targeted at improving the medication room layout, applying visual controls, and implementing nursing standard work. The interventions were designed to prevent medication administration errors by improving six safe practices: (1) comparing medication with the medication administration record, (2) labeling medication, (3) checking two forms of patient identification, (4) explaining medication to the patient, (5) charting medication immediately, and (6) protecting the process from distractions/interruptions. Trained nurse auditors observed 9,244 doses for 2,139 patients. Following the intervention, the number of safe-practice violations decreased from 83 violations/100 doses at baseline (January 2010-March 2010) to 42 violations/100 doses at final follow-up (July 2011-September 2011), an absolute risk reduction of 42 violations/100 doses (95% confidence interval [CI]: 35-48; p < .001). The number of medication administration errors decreased from 10.3 errors/100 doses at baseline to 2.8 errors/100 doses at final follow-up (absolute risk reduction: 7 errors/100 doses [95% CI: 5-10; p < .001]). The "perfect dose" score, reflecting compliance with all six safe practices and absence of any of the eight medication administration errors, improved from 37 in compliance/100 doses at baseline to 68 in compliance/100 doses at the final follow-up. Lean process improvements coupled with direct observation can contribute to substantial decreases in errors in nursing medication administration.

  12. Error suppression for Hamiltonian quantum computing in Markovian environments

    NASA Astrophysics Data System (ADS)

    Marvian, Milad; Lidar, Daniel A.

    2017-03-01

    Hamiltonian quantum computing, such as the adiabatic and holonomic models, can be protected against decoherence using an encoding into stabilizer subspace codes for error detection and the addition of energy penalty terms. This method has been widely studied since it was first introduced by Jordan, Farhi, and Shor (JFS) in the context of adiabatic quantum computing. Here, we extend the original result to general Markovian environments, not necessarily in Lindblad form. We show that the main conclusion of the original JFS study holds under these general circumstances: Assuming a physically reasonable bath model, it is possible to suppress the initial decay out of the encoded ground state with an energy penalty strength that grows only logarithmically in the system size, at a fixed temperature.

  13. Fuzzy risk analysis of a modern γ-ray industrial irradiator.

    PubMed

    Castiglia, F; Giardina, M

    2011-06-01

    Fuzzy fault tree analyses were used to investigate accident scenarios that involve radiological exposure to operators working in industrial γ-ray irradiation facilities. The HEART method, a first-generation human reliability analysis method, was used to evaluate the probability of adverse human error in these analyses. This technique was modified on the basis of fuzzy set theory to take more directly into account the uncertainties in the error-promoting factors on which the methodology is based. Moreover, for some identified accident scenarios, the fuzzy radiological exposure risk, expressed in terms of potential annual deaths, was evaluated. The calculated fuzzy risks for the examined plant were determined to be well below the reference risk suggested by the International Commission on Radiological Protection.
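
    For reference, the crisp HEART calculation that the fuzzy analysis generalizes multiplies a nominal task unreliability by one factor per error-producing condition (EPC); the example values below are hypothetical:

```python
# Crisp HEART sketch: assessed human error probability (HEP) equals a
# nominal task unreliability multiplied, for each error-producing
# condition (EPC), by (max_multiplier - 1) * assessed_proportion + 1.
# In the paper these quantities become fuzzy numbers; here they are crisp.

def heart_hep(nominal, epcs):
    hep = nominal
    for max_multiplier, proportion in epcs:
        hep *= (max_multiplier - 1.0) * proportion + 1.0
    return min(hep, 1.0)

# A routine task (nominal 0.003) under time shortage (x11, proportion 0.4)
# performed by an inexperienced operator (x3, proportion 0.5):
print(f"HEP = {heart_hep(0.003, [(11, 0.4), (3, 0.5)]):.4f}")   # -> 0.0300
```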

  14. A Theoretical Analysis: Physical Unclonable Functions and The Software Protection Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nithyanand, Rishab; Solis, John H.

    2011-09-01

    Physical Unclonable Functions (PUFs) or Physical One Way Functions (P-OWFs) are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. This property of unclonability is due to the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics and makes PUFs useful in solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection and show that traditional non-computational (black-box) PUFs cannot solve the problem against real world adversaries in offline settings. Our contributions are the following: We provide two real world adversary models (weak and strong variants) and present definitions for security against the adversaries. We continue by proposing schemes secure against the weak adversary and show that no scheme is secure against a strong adversary without the use of trusted hardware. Finally, we present a protection scheme secure against strong adversaries based on trusted hardware.

  15. PROTECTIVE VALUE OF TYPHOID VACCINATION AS SHOWN BY THE EXPERIENCE OF THE AMERICAN TROOPS IN THE WAR

    PubMed Central

    Soper, George A.

    1920-01-01

    It has recently been publicly argued that the Army reports demonstrate the failure of inoculation against typhoid fever. Major Soper distinctly contradicts this claim, states plainly how efficient it really was and explains some so-called failures. These were due to infection before vaccination, errors of haste in insufficient dosage or wrong counting, or worn immunity. PMID:18010282

  16. Comparison of base flows to selected streamflow statistics representative of 1930-2002 in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2012-01-01

    Base flows were compared with published streamflow statistics to assess climate variability and to determine the published statistics that can be substituted for annual and seasonal base flows of unregulated streams in West Virginia. The comparison study was done by the U.S. Geological Survey, in cooperation with the West Virginia Department of Environmental Protection, Division of Water and Waste Management. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Differences in mean annual base flows for five record sub-periods (1930-42, 1943-62, 1963-69, 1970-79, and 1980-2002) range from -14.9 to 14.6 percent when compared to the values for the period 1930-2002. Differences between mean seasonal base flows and values for the period 1930-2002 are less variable for winter and spring, -11.2 to 11.0 percent, than for summer and fall, -47.0 to 43.6 percent. Mean summer base flows (July-September) and mean monthly base flows for July, August, September, and October are approximately equal, within 7.4 percentage points of mean annual base flow. The mean annual, spring, summer, fall, and winter base flows are approximately equal to the annual 50-percent (standard error of 10.3 percent), 45-percent (error of 14.6 percent), 75-percent (error of 11.8 percent), 55-percent (error of 11.2 percent), and 35-percent duration flows (error of 11.1 percent), respectively. The mean seasonal base flows for spring, summer, fall, and winter are approximately equal to the spring 50- to 55-percent (standard error of 6.8 percent), summer 45- to 50-percent (error of 6.7 percent), fall 45-percent (error of 15.2 percent), and winter 60-percent duration flows (error of 8.5 percent), respectively. Annual and seasonal base flows representative of the period 1930-2002 at unregulated streamflow-gaging stations and ungaged locations in West Virginia can be estimated using the previously published statistics and procedures.

  17. A comparison of endoscopic localization error rate between operating surgeons and referring endoscopists in colorectal cancer.

    PubMed

    Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A

    2017-03-01

    Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21%. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or a non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) surgeon who also performed the definitive operation (operating surgeon group); and (2) referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference in at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon compared to the referring endoscopist demonstrated a significantly lower intraoperative localization error rate (1.2 vs. 9.0%, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0%, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8%, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95% CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95% CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate. Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.

  18. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate parameter estimation to describe battery dynamic behaviors. This paper therefore focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model and therefore contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
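
    A compact sketch of the estimator structure described above, under assumed parameters: a first-order RC equivalent circuit, an EKF whose state is [SOC, polarization voltage], and a proportional-integral correction applied on the voltage innovation. The OCV curve, gains and noise levels are illustrative, not the paper's values, and the PI term shown is one simple way to realize the adjustment.

    ```python
    # Minimal sketch: EKF SOC estimation on a first-order RC battery model
    # with a PI-style correction on the innovation. Parameters are assumed.
    import numpy as np

    dt, Q = 1.0, 3600.0 * 2.0            # time step [s], capacity [As] (2 Ah)
    R0, Rp, Cp = 0.05, 0.02, 1000.0      # ohmic and polarization parameters
    a = np.exp(-dt / (Rp * Cp))

    def ocv(soc):                        # assumed linear open-circuit voltage
        return 3.2 + 0.9 * soc

    x = np.array([0.5, 0.0])             # state: [SOC, polarization voltage]
    P = np.diag([0.1, 0.01])             # state covariance
    Qn = np.diag([1e-7, 1e-6])           # process noise
    Rn = 1e-3                            # measurement (voltage) noise
    kp, ki, integ = 0.05, 0.005, 0.0     # PI gains on the innovation

    def ekf_step(x, P, integ, i_k, v_meas):
        # Predict: coulomb counting + RC relaxation (linear, so exact Jacobian)
        F = np.array([[1.0, 0.0], [0.0, a]])
        x = np.array([x[0] - i_k * dt / Q, a * x[1] + Rp * (1 - a) * i_k])
        P = F @ P @ F.T + Qn
        # Update: v = OCV(SOC) - Vp - R0*i, Jacobian H = [dOCV/dSOC, -1]
        H = np.array([[0.9, -1.0]])
        e = v_meas - (ocv(x[0]) - x[1] - R0 * i_k)   # innovation
        S = H @ P @ H.T + Rn
        K = (P @ H.T) / S
        integ += e * dt                  # PI adjustment on top of the EKF gain
        x = x + K.flatten() * e + np.array([kp * e + ki * integ, 0.0])
        P = (np.eye(2) - K @ H) @ P
        return x, P, integ

    x, P, integ = ekf_step(x, P, integ, i_k=1.0, v_meas=3.6)
    print("SOC estimate:", x[0])
    ```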

  19. Scalable effective-temperature reduction for quantum annealers via nested quantum annealing correction

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2018-02-01

    Nested quantum annealing correction (NQAC) is an error-correcting scheme for quantum annealing that allows for the encoding of a logical qubit into an arbitrarily large number of physical qubits. The encoding replaces each logical qubit by a complete graph of degree C. The nesting level C represents the distance of the error-correcting code and controls the amount of protection against thermal and control errors. Theoretical mean-field analyses and empirical data obtained with a D-Wave Two quantum annealer (supporting up to 512 qubits) showed that NQAC has the potential to achieve a scalable effective-temperature reduction, T_eff ∼ C^(-η), with 0 < η ≤ 2. We confirm that this scaling is preserved when NQAC is tested on a D-Wave 2000Q device (supporting up to 2048 qubits). In addition, we show that NQAC can also be used in sampling problems to lower the effective temperature of a quantum annealer. Such effective-temperature reduction is relevant for machine-learning applications. Since we demonstrate that NQAC achieves error correction via a reduction of the effective temperature of the quantum annealing device, our results address the problem of the "temperature scaling law for quantum annealers," which requires the temperature of quantum annealers to be reduced as problems of larger size are attempted.

  20. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
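
    The overdispersion at issue is easy to reproduce. The sketch below draws pure-noise data (true covariance equal to the identity) and shows that the leading sample eigenvalue still sits well above 1; simulating its null spread plays the role that the scaled Tracy-Widom critical value plays in the paper. This is an illustration of the phenomenon, not the authors' REML procedure.

    ```python
    # Leading eigenvalues of sample covariance matrices overshoot the truth
    # even when there is no structure at all (true covariance = identity).
    import numpy as np

    rng = np.random.default_rng(1)
    n, p, reps = 100, 20, 500
    lead = []
    for _ in range(reps):
        X = rng.standard_normal((n, p))          # pure noise, variance 1
        eig = np.linalg.eigvalsh(np.cov(X.T))    # sample covariance spectrum
        lead.append(eig[-1])                     # largest eigenvalue

    print("mean leading eigenvalue under the null:", np.mean(lead))   # >> 1
    print("95th percentile (null critical value):", np.quantile(lead, 0.95))
    ```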

  1. Using a Commercial Ethernet PHY Device in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Parks, Jeremy; Arani, Michael; Arroyo, Roberto

    2014-01-01

    This work involved placing a commercial Ethernet PHY on its own power boundary, with limited current supply, and providing detection methods to determine when the device is not operating and when it needs either a reset or power-cycle. The device must be radiation-tested and free of destructive latchup errors. The commercial Ethernet PHY's own power boundary must be supplied by a current-limited power regulator that must have an enable (for power cycling), and its maximum power output must not exceed the PHY's input requirements, thus preventing damage to the device. A regulator with configurable output limits and short-circuit protection (such as the RHFL4913, rad hard positive voltage regulator family) is ideal. This will prevent a catastrophic failure due to radiation (such as a short between the commercial device's power and ground) from taking down the board's main power. Logic provided on the board will detect errors in the PHY. An FPGA (field-programmable gate array) with embedded Ethernet MAC (Media Access Control) will work well. The error detection includes monitoring the PHY's interrupt line, and the status of the Ethernet's switched power. When the PHY is determined to be non-functional, the logic device resets the PHY, which will often clear radiation-induced errors. If this doesn't work, the logic device power-cycles the PHY by toggling the regulator's enable input. This should clear almost all radiation-induced errors provided the device is not latched up.
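
    The recovery escalation described above amounts to a small supervisory loop. The sketch below expresses it in Python for clarity; the three hardware accessors are hypothetical placeholders for what would be board-specific FPGA logic.

    ```python
    # Sketch of the detect/reset/power-cycle escalation. The accessors
    # (read_phy_status, pulse_reset, toggle_regulator_enable) are
    # hypothetical stand-ins for board-specific hardware interfaces.
    import time

    def recover_phy(read_phy_status, pulse_reset, toggle_regulator_enable,
                    max_resets=3):
        """Try soft resets first; escalate to a power cycle via the
        regulator's enable pin if the PHY stays non-functional."""
        for _ in range(max_resets):
            if read_phy_status() == "ok":
                return "ok"
            pulse_reset()             # often clears radiation-induced errors
            time.sleep(0.1)
        # Power cycle: the current-limited regulator bounds fault current,
        # so a latched or shorted PHY cannot drag down the main supply.
        toggle_regulator_enable(on=False)
        time.sleep(0.5)
        toggle_regulator_enable(on=True)
        return read_phy_status()
    ```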

  2. Safe prescribing: a titanic challenge

    PubMed Central

    Routledge, Philip A

    2012-01-01

    The challenge to achieve safe prescribing merits the adjective ‘titanic’. The organisational and human errors leading to poor prescribing (e.g. underprescribing, overprescribing, misprescribing or medication errors) have parallels in the organisational and human errors that led to the loss of the Titanic 100 years ago this year. Prescribing can be adversely affected by communication failures, critical conditions, complacency, corner cutting, callowness and a lack of courage of conviction, all of which were also factors leading to the Titanic tragedy. These issues need to be addressed by a commitment to excellence, the final component of the ‘Seven C's’. Optimal prescribing is dependent upon close communication and collaborative working between highly trained health professionals, whose role is to ensure maximum clinical effectiveness, whilst also protecting their patients from avoidable harm. Since humans are prone to error, and the environments in which they work are imperfect, it is not surprising that medication errors are common, occurring more often during the prescribing stage than during dispensing or administration. A commitment to excellence in prescribing includes a continued focus on lifelong learning (including interprofessional learning) in pharmacology and therapeutics. This should be accompanied by improvements in the clinical working environment of prescribers, and the encouragement of a strong safety culture (including reporting of adverse incidents as well as suspected adverse drug reactions whenever appropriate). Finally, members of the clinical team must be prepared to challenge each other, when necessary, to ensure that prescribing combines the highest likelihood of benefit with the lowest potential for harm. PMID:22738396

  3. Assessing the accuracy of the International Classification of Diseases codes to identify abusive head trauma: a feasibility study.

    PubMed

    Berger, Rachel P; Parks, Sharyn; Fromkin, Janet; Rubin, Pamela; Pecora, Peter J

    2015-04-01

    To assess the accuracy of an International Classification of Diseases (ICD) code-based operational case definition for abusive head trauma (AHT). Subjects were children <5 years of age evaluated for AHT by a hospital-based Child Protection Team (CPT) at a tertiary care paediatric hospital with a completely electronic medical record (EMR) system. Subjects were designated as non-AHT traumatic brain injury (TBI) or AHT based on whether the CPT determined that the injuries were due to AHT. The sensitivity and specificity of the ICD-based definition were calculated. There were 223 children evaluated for AHT: 117 AHT and 106 non-AHT TBI. The sensitivity and specificity of the ICD-based operational case definition were 92% (95% CI 85.8 to 96.2) and 96% (95% CI 92.3 to 99.7), respectively. All errors in sensitivity and three of the four specificity errors were due to coder error; one specificity error was a physician error. In a paediatric tertiary care hospital with an EMR system, the accuracy of an ICD-based case definition for AHT was high. Additional studies are needed to assess the accuracy of this definition in all types of hospitals in which children with AHT are cared for. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  4. Medical professional liability insurance and its relation to medical error and healthcare risk management for the practicing physician.

    PubMed

    Abbott, Richard L; Weber, Paul; Kelley, Betsy

    2005-12-01

    To review the history and current issues surrounding medical professional liability insurance and its relationship to medical error and healthcare risk management. Focused literature review and authors' experience. Medical professional liability insurance issues are reviewed in association with the occurrence of medical error and the role of healthcare risk management. The rising frequency and severity of claims and lawsuits incurred by physicians, as well as escalating defense costs, have dramatically increased over the past several years and have resulted in accelerated efforts to reduce medical errors and control practice risk for physicians. Medical error reduction and improved patient outcomes are closely linked to the goals of the medical risk manager by reducing exposure to adverse medical events. Management of professional liability risk by the physician-led malpractice insurance company not only protects the economic viability of physicians, but also addresses patient safety concerns. Physician-owned malpractice liability insurance companies will continue to be the dominant providers of insurance for practicing physicians and will serve as the primary source for loss prevention and risk management services. To succeed in the marketplace, the emergence and importance of the risk manager and incorporation of risk management principles throughout the professional liability company has become crucial to the financial stability and success of the insurance company. The risk manager provides the necessary advice and support requested by physicians to minimize medical liability risk in their daily practice.

  5. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  6. Interruptions in emergency medicine: things are not always what they seem.

    PubMed

    Walter, Scott R

    2018-06-20

    We have all felt the cognitive disjuncture of being interrupted during an important task. Most ED physicians will readily proffer the high frequency and/or burden of interruptions during their work, and of the many observational studies of interruptions in healthcare, EDs do indeed have high interruption rates [2]. In experimental psychology, where many of these ideas originated, there is plenty of evidence that interruptions negatively affect performance. Interruptions have been associated with reduced performance on complex tasks [3,4], increased sequence errors [5], increased task completion time and augmented annoyance and anxiety [6]. This article is protected by copyright. All rights reserved.

  7. Patient and nurse safety: how information technology makes a difference.

    PubMed

    Simpson, Roy L

    2005-01-01

    The Institute of Medicine's landmark report asserted medical error is seldom the fault of individuals, but the result of faulty healthcare policy/procedure systems. Numerous studies have shown that information technology can shore up weak systems. For nursing, information technology plays a key role in protecting patients by eliminating nursing mistakes and protecting nurses by reducing their negative exposure. However, managing information technology is a function of managing the people who use it. This article examines critical issues that impact patient and nurse safety, both physical and professional. It discusses the importance of eliminating the culture of blame, the requirements of process change, how to implement technology in harmony with the organization and the significance of vision.

  8. Legal issues of the electronic dental record: security and confidentiality.

    PubMed

    Szekely, D G; Milam, S; Khademi, J A

    1996-01-01

    Computer-based, electronic dental record keeping involves complex issues of patient privacy and the dental practitioner's ethical duty of confidentiality. Federal and state law is responding to the new legal issues presented by computer technology. Authenticating the electronic record in terms of ensuring its reliability and accuracy is essential in order to protect its admissibility as evidence in legal actions. Security systems must be carefully planned to limit access and provide for back-up and storage of dental records. Carefully planned security systems protect the patient from disclosure without the patient's consent and also protect the practitioner from the liability that would arise from such disclosure. Human errors account for the majority of data security problems. Personnel security is assured through pre-employment screening, employment contracts, policies, and staff education. Contracts for health information systems should include provisions for indemnification and ensure the confidentiality of the system by the vendor.

  9. A review of environmental risk factors for myopia during early life, childhood and adolescence.

    PubMed

    Ramamurthy, Dharani; Lin Chua, Sharon Yu; Saw, Seang-Mei

    2015-11-01

    Myopia is a significant public health problem worldwide, particularly in East Asian countries. The increasing prevalence of myopia poses a huge socio-economic burden and progressive high myopia can lead to sight-threatening ocular complications. Hence, the prevention of early-onset myopia progressing to pathological high myopia is important. Recent epidemiological studies suggest that increased outdoor time is an important modifiable environmental factor that protects young children from myopia. This protective effect may be due to high light intensity outdoors, the chromaticity of daylight or increased vitamin D levels. This review summarises the possible underlying biological mechanisms for the protective association between time outdoors and myopia, including the potential role of nicotinic acetylcholine receptors in refractive error development. Recent evidence for the role of other environmental risk factors such as near work, birth seasons, parental smoking and birth order are also summarised. © 2015 Optometry Australia.

  10. Benefit of adaptive FEC in shared backup path protected elastic optical network.

    PubMed

    Guo, Hong; Dai, Hua; Wang, Chao; Li, Yongcheng; Bose, Sanjay K; Shen, Gangxiang

    2015-07-27

    We apply an adaptive forward error correction (FEC) allocation strategy to an Elastic Optical Network (EON) operated with shared backup path protection (SBPP). To maximize the protected network capacity that can be carried, an Integer Linear Programming (ILP) model and a spectrum window plane (SWP)-based heuristic algorithm are developed. Simulation results show that the FEC coding overhead required by the adaptive FEC scheme is significantly lower than that needed by a fixed FEC allocation strategy, resulting in higher network capacity for the adaptive strategy. The adaptive FEC allocation strategy also significantly outperforms the fixed FEC allocation strategy in terms of both the spare capacity redundancy and the average FEC coding overhead needed per optical channel. The proposed heuristic algorithm is efficient: it not only performs close to the ILP model but also does much better than the shortest-path algorithm.
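
    The intuition behind adaptive FEC allocation can be shown with a toy calculation: each lightpath gets the weakest (cheapest) FEC whose reach covers it, rather than worst-case coding everywhere. The reach/overhead table and path lengths below are illustrative assumptions, not values from the paper.

    ```python
    # Toy comparison of fixed vs adaptive FEC overhead. Numbers are assumed.

    # (max reach in km, FEC overhead fraction): stronger FEC reaches farther.
    FEC_OPTIONS = [(800, 0.07), (1500, 0.15), (3000, 0.25)]

    def adaptive_overhead(path_km):
        """Pick the lowest-overhead FEC whose reach covers the path."""
        for reach, overhead in FEC_OPTIONS:
            if path_km <= reach:
                return overhead
        raise ValueError("path exceeds maximum reach")

    paths = [300, 700, 1200, 2800]                  # working + backup paths
    fixed = len(paths) * FEC_OPTIONS[-1][1]         # worst-case FEC everywhere
    adaptive = sum(adaptive_overhead(d) for d in paths)
    print(f"total overhead: fixed={fixed:.2f}, adaptive={adaptive:.2f}")
    ```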

  11. Comparison of usual and alternative methods to measure height in mechanically ventilated patients: potential impact on protective ventilation.

    PubMed

    Bojmehrani, Azadeh; Bergeron-Duchesne, Maude; Bouchard, Carmelle; Simard, Serge; Bouchard, Pierre-Alexandre; Vanderschuren, Abel; L'Her, Erwan; Lellouche, François

    2014-07-01

    Protective ventilation implementation requires the calculation of predicted body weight (PBW), determined by a formula based on gender and height. Consequently, height inaccuracy may be a limiting factor to correctly set tidal volumes. The objective of this study was to evaluate the accuracy of different methods in measuring heights in mechanically ventilated patients. Before cardiac surgery, actual height was measured with a height gauge while subjects were standing upright (reference method); the height was also estimated by alternative methods based on lower leg and forearm measurements. After cardiac surgery, upon ICU admission, a subject's height was visually estimated by a clinician and then measured with a tape measure while the subject was supine and undergoing mechanical ventilation. One hundred subjects (75 men, 25 women) were prospectively included. Mean PBW was 61.0 ± 9.7 kg, and mean actual weight was 30.3% higher. In comparison with the reference method, estimating the height visually and using the tape measure were less accurate than both lower leg and forearm measurements. Errors above 10% in calculating the PBW were present in 25 and 40 subjects when the tape measure or visual estimation of height was used in the formula, respectively. With lower leg and forearm measurements, 15 subjects had errors above 10% (P < .001). Our results demonstrate that significant variability exists between the different methods used to measure height in bedridden patients on mechanical ventilation. Alternative methods based on lower leg and forearm measurements are potentially interesting solutions to facilitate the accurate application of protective ventilation. Copyright © 2014 by Daedalus Enterprises.
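
    To see why height errors matter clinically, consider the arithmetic linking height to tidal volume. The sketch below uses the widely cited ARDSNet predicted-body-weight equations with a 6 mL/kg target; the example heights are illustrative.

    ```python
    # How a height error propagates into the tidal-volume setting.
    # PBW formulas are the standard ARDSNet equations; heights are examples.

    def pbw_kg(height_cm, male):
        base = 50.0 if male else 45.5
        return base + 0.91 * (height_cm - 152.4)

    true_h, measured_h = 175.0, 183.0   # e.g. a visual estimate 8 cm too tall
    vt_true = 6.0 * pbw_kg(true_h, male=True)      # target 6 mL/kg PBW
    vt_meas = 6.0 * pbw_kg(measured_h, male=True)
    print(f"tidal volume: {vt_true:.0f} mL vs {vt_meas:.0f} mL "
          f"({100 * (vt_meas / vt_true - 1):+.1f}%)")
    ```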

  12. Protective effects of prescription n-3 fatty acids against impairment of spatial cognitive learning ability in amyloid β-infused rats.

    PubMed

    Hashimoto, Michio; Tozawa, Ryuichi; Katakura, Masanori; Shahdat, Hossain; Haque, Abdul Md; Tanabe, Yoko; Gamoh, Shuji; Shido, Osamu

    2011-07-01

    Deposition of amyloid β peptide (Aβ) into the brain causes cognitive impairment. We investigated whether prescription pre-administration of n-3 fatty acids improves cognitive learning ability in young rats and whether it protects against learning ability impairments in an animal model of Alzheimer's disease that was prepared by infusion of Aβ(1-40) into the cerebral ventricles of rats. Pre-administration of TAK-085 (highly purified and concentrated n-3 fatty acids containing eicosapentaenoic acid ethyl ester and docosahexaenoic acid ethyl ester) at 300 mg kg(-1) day(-1) for 12 weeks significantly reduced the number of reference memory errors in an 8-arm radial maze, suggesting that long-term administration of TAK-085 improves cognitive leaning ability in rats. After pre-administration, the control group was divided into the vehicle and Aβ-infused groups, whereas the TAK-085 pre-administration group was divided into the TAK-085 and TAK-085 + Aβ groups (TAK-085-pre-administered Aβ-infused rats). Aβ(1-40) or vehicle was infused into the cerebral ventricle using a mini osmotic pump. Pre-administration of TAK-085 to the Aβ-infused rats significantly suppressed the number of reference and working memory errors and decreased the levels of lipid peroxide and reactive oxygen species in the cerebral cortex and hippocampus of Aβ-infused rats, suggesting that TAK-085 increases antioxidative defenses. The present study suggests that long-term administration of TAK-085 is a possible therapeutic agent for protecting against Alzheimer's disease-induced learning deficiencies. This journal is © The Royal Society of Chemistry 2011

  13. Burnout syndrome among non-consultant hospital doctors in Ireland: relationship with self-reported patient care.

    PubMed

    Sulaiman, Che Fatehah Che; Henn, Patrick; Smith, Simon; O'Tuathaigh, Colm M P

    2017-10-01

    Intensive workload and limited training opportunities for Irish non-consultant hospital doctors (NCHDs) has a negative effect on their health and well-being, and can result in burnout. Burnout affects physician performance and can lead to medical errors. This study examined the prevalence of burnout syndrome among Irish NCHDs and its association with self-reported medical error and poor quality of patient care. A cross-sectional quantitative survey-based design. All teaching hospitals affiliated with University College Cork. NCHDs of all grades and specialties. The following instruments were completed by all participants: Maslach Burnout Inventory-Human Service Survey (MBI-HSS), assessing three categories of burnout syndrome: Emotional exhaustion (EE), Personal Achievement (PA) and Depersonalization (DP); questions related to self-reported medical errors/poor patient care quality and socio-demographic information. Self-reported measures of burnout and poor quality of patient care. Prevalence of burnout among physicians (n = 265) was 26.4%. There was a significant gender difference for EE and DP, but none for PA. A positive weak correlation was observed between EE and DP with medical error or poor patient care. A negative association was reported between PA and medical error and reduced quality of patient care. Burnout is prevalent among NCHDs in Ireland. Burnout syndrome is associated with self-reported medical error and quality of care in this sample population. Measures need to be taken to address this issue, with a view to protecting health of NCHDs and maintaining quality of patient care. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  14. Exponential error reduction in pretransfusion testing with automation.

    PubMed

    South, Susan F; Casina, Tony S; Li, Lily

    2012-08-01

    Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps ranging from 22 to 39, while automated G&S methods only contained six to eight steps. Corresponding to the number of the process steps that required human interactions, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436 and also demonstrated a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.
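
    The RPN arithmetic behind the comparison is simple to reproduce. In a typical FMEA, each process step is scored for severity, occurrence and detectability, the per-step product is the risk priority number, and methods are compared on the summed RPN. The scores below are illustrative assumptions chosen only to land in the ranges quoted above, not the study's data.

    ```python
    # Toy FMEA bookkeeping: RPN = severity * occurrence * detection per step,
    # summed per method. All scores (1-10 scales) are illustrative.

    def total_rpn(steps):
        return sum(sev * occ * det for sev, occ, det in steps)

    manual = [(8, 5, 6)] * 30      # many human-touch steps, harder to detect
    automated = [(8, 2, 2)] * 7    # few steps, machine-checked
    print("manual RPN:", total_rpn(manual),
          "automated RPN:", total_rpn(automated))
    ```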

  15. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    PubMed Central

    Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco

    2014-01-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs, a priority for sustaining the investments needed to modernize local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for sufficiently large ratios of correlation distance to Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
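
    The estimator under study is the textbook LMMSE filter applied to spatially correlated corrections. A minimal sketch, assuming an exponential (Gauss-Markov) covariance and illustrative station positions, variances and noise levels:

    ```python
    # LMMSE denoising of differential corrections under a Gauss-Markov
    # spatial covariance. Distances and variances are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    pos = np.array([0.0, 20.0, 45.0, 80.0, 120.0])   # station positions [km]
    d_corr, sigma2, noise2 = 60.0, 1.0, 0.2          # correlation dist., vars

    # Exponential (Gauss-Markov) covariance of the true DC field
    D = np.abs(pos[:, None] - pos[None, :])
    Cxx = sigma2 * np.exp(-D / d_corr)

    # Draw a true DC vector and noisy measurements y = x + n
    x = rng.multivariate_normal(np.zeros(len(pos)), Cxx)
    y = x + rng.normal(0.0, np.sqrt(noise2), len(pos))

    # LMMSE: x_hat = Cxy Cyy^{-1} y, with Cxy = Cxx and Cyy = Cxx + noise2*I
    x_hat = Cxx @ np.linalg.solve(Cxx + noise2 * np.eye(len(pos)), y)
    print("raw error:  ", np.linalg.norm(y - x))
    print("LMMSE error:", np.linalg.norm(x_hat - x))
    ```

    Replacing d_corr in the estimator with a mismatched value while keeping the true field unchanged reproduces the paper's sensitivity question: the estimate degrades, and below some correlation-to-separation ratio it can end up worse than the raw measurements.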

  16. A method for data‐driven exploration to pinpoint key features in medical data and facilitate expert review

    PubMed Central

    Juhlin, Kristina; Norén, G. Niklas

    2017-01-01

    Purpose: To develop a method for data-driven exploration in pharmacovigilance and illustrate its use by identifying the key features of individual case safety reports related to medication errors. Methods: We propose vigiPoint, a method that contrasts the relative frequency of covariate values in a data subset of interest to those within one or more comparators, utilizing odds ratios with adaptive statistical shrinkage. Nested analyses identify higher order patterns, and permutation analysis is employed to protect against chance findings. For illustration, a total of 164 000 adverse event reports related to medication errors were characterized and contrasted to the other 7 833 000 reports in VigiBase, the WHO global database of individual case safety reports, as of May 2013. The initial scope included 2000 features, such as patient age groups, reporter qualifications, and countries of origin. Results: vigiPoint highlighted 109 key features of medication error reports. The most prominent were that the vast majority of medication error reports were from the United States (89% compared with 49% for other reports in VigiBase); that the majority of reports were sent by consumers (53% vs 17% for other reports); that pharmacists (12% vs 5.3%) and lawyers (2.9% vs 1.5%) were overrepresented; and that there were more medication error reports than expected for patients aged 2-11 years (10% vs 5.7%), particularly in Germany (16%). Conclusions: vigiPoint effectively identified key features of medication error reports in VigiBase. More generally, it reduces lead times for analysis and ensures reproducibility and transparency. An important next step is to evaluate its use in other data. PMID:28815800
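
    The core statistic is a frequency contrast stabilized by shrinkage. Below is a minimal sketch of a shrinkage odds ratio of this general kind; the shrinkage constant, the counts and the base-2 logarithm are assumptions for illustration, and the published method adds nested analyses and permutation-based thresholds on top.

    ```python
    # Shrinkage-stabilized log odds ratio contrasting a feature's frequency
    # in a subset of interest against a comparator. k damps ratios that are
    # backed by little data. Counts and k are illustrative.
    import math

    def shrunk_log2_or(a, b, c, d, k=0.5):
        """a/b: feature present/absent in subset; c/d: same in comparator."""
        return math.log2(((a + k) / (b + k)) / ((c + k) / (d + k)))

    # e.g. consumer-reported: roughly 53% of 164,000 medication error reports
    # vs roughly 17% of 7,833,000 other reports (figures from the abstract).
    a, b = 86_920, 77_080
    c, d = 1_331_610, 6_501_390
    print("shrunk log2 OR:", round(shrunk_log2_or(a, b, c, d), 2))
    ```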

  17. Experimental Demonstration of Fault-Tolerant State Preparation with Superconducting Qubits.

    PubMed

    Takita, Maika; Cross, Andrew W; Córcoles, A D; Chow, Jerry M; Gambetta, Jay M

    2017-11-03

    Robust quantum computation requires encoding delicate quantum information into degrees of freedom that are hard for the environment to change. Quantum encodings have been demonstrated in many physical systems by observing and correcting storage errors, but applications require not just storing information; we must accurately compute even with faulty operations. The theory of fault-tolerant quantum computing illuminates a way forward by providing a foundation and collection of techniques for limiting the spread of errors. Here we implement one of the smallest quantum codes in a five-qubit superconducting transmon device and demonstrate fault-tolerant state preparation. We characterize the resulting code words through quantum process tomography and study the free evolution of the logical observables. Our results are consistent with fault-tolerant state preparation in a protected qubit subspace.

  18. Synthesis of Arbitrary Quantum Circuits to Topological Assembly: Systematic, Online and Compact.

    PubMed

    Paler, Alexandru; Fowler, Austin G; Wille, Robert

    2017-09-05

    It is challenging to transform an arbitrary quantum circuit into a form protected by surface code quantum error correcting codes (a variant of topological quantum error correction), especially if the goal is to minimise overhead. One of the issues is the efficient placement of magic state distillation subcircuits, so-called distillation boxes, in the space-time volume that abstracts the computation's required resources. This work presents a general, systematic, online method for the synthesis of such circuits. Distillation box placement is controlled by so-called schedulers. The work introduces a greedy scheduler generating compact box placements. The implemented software, whose source code is available at www.github.com/alexandrupaler/tqec, is used to illustrate and discuss synthesis examples. Synthesis and optimisation improvements are proposed.

  19. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. Robustly good trellis codes for use with sequential decoding were constructed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large constraint length, low rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
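
    Free distance, the quantity mentioned twice above, can be computed mechanically for small codes as a shortest path on the trellis. A sketch for the classic rate-1/2 (7,5) code follows (the memo's own formula for rate 1/n codes is not reproduced here); the expected answer is 5.

    ```python
    # Free distance of a rate-1/2 convolutional code by Dijkstra on the
    # trellis: minimum output weight of a path that diverges from and
    # remerges with the all-zero path. Generators octal 7, 5 (memory 2).
    import heapq

    GENS, MEM = (0b111, 0b101), 2

    def step(state, bit):
        reg = (bit << MEM) | state                 # newest bit + memory bits
        out = sum(bin(g & reg).count("1") % 2 for g in GENS)  # output weight
        return (reg >> 1), out

    def free_distance():
        # Diverge from state 0 with input 1, then find the minimum
        # accumulated output weight back to state 0.
        start, w0 = step(0, 1)
        pq, best = [(w0, start)], {}
        while pq:
            w, s = heapq.heappop(pq)
            if s == 0:
                return w                           # remerged: done
            if best.get(s, 1 << 30) <= w:
                continue
            best[s] = w
            for bit in (0, 1):
                ns, dw = step(s, bit)
                heapq.heappush(pq, (w + dw, ns))

    print("free distance of the (7,5) code:", free_distance())  # expect 5
    ```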

  20. Schrodinger's catapult II: entanglement between stationary and flying fields

    NASA Astrophysics Data System (ADS)

    Pfaff, W.; Axline, C.; Burkhart, L.; Vool, U.; Reinhold, P.; Frunzio, L.; Jiang, L.; Devoret, M.; Schoelkopf, R.

    Entanglement between nodes is an elementary resource in a quantum network. An important step towards its realization is entanglement between stationary and flying states. Here we experimentally demonstrate entanglement generation between a long-lived cavity memory and traveling mode in circuit QED. A large on/off ratio and fast control over a parametric mixing process allow us to realize conversion with tunable magnitude and duration between standing and flying mode. In the case of half-conversion, we observe correlations between the standing and flying state that confirm the generation of entangled states. We show this for both single-photon and multi-photon states, paving the way for error-correctable remote entanglement. Our system could serve as an essential component in a modular architecture for error-protected quantum information processing.

  1. Spectrum Orbit Utilization Program Documentation: SOUP5 Version 3.8 User's Manual, Volume 2, Appendices a Through G

    NASA Technical Reports Server (NTRS)

    Davidson, J.; Ottey, H. R.; Sawitz, P.; Zusman, F. S.

    1985-01-01

    The appendices of the user's manual are presented. Input forms that may be used to prepare data for SOUP5V3.4 runs of the R2BCSAT-83 data base are given. The IBM job control language that can be used to run the SOUP5 system from a magnetic tape is described. Copies of a run using the delivered tape and an IBM OS/MVS Job Control Language card deck are illustrated. Numerical limits on scenario data requests are listed, as are error handling, error messages, and editing procedures. Instructions on how to enter a protection ratio template are given, and the relations between PARC parameters, channelization, channel families, and interference categories are also listed.

  2. Performance on a strategy set shifting task in rats following adult or adolescent cocaine exposure

    PubMed Central

    Kantak, Kathleen M.; Barlow, Nicole; Tassin, David H.; Brisotti, Madeline F.; Jordan, Chloe J

    2014-01-01

    Rationale: Neuropsychological testing is widespread in adult cocaine abusers, but lacking in teens. Animal models may provide insight into age-related neuropsychological consequences of cocaine exposure. Objectives: Determine whether developmental plasticity protects or hinders behavioral flexibility after cocaine exposure in adolescent vs. adult rats. Methods: Using a yoked-triad design, one rat controlled cocaine delivery and the other two passively received cocaine or saline. Rats controlling cocaine delivery (1.0 mg/kg) self-administered for 18 sessions (starting P37 or P77), followed by 18 drug-free days. Rats next were tested in a strategy set shifting task, lasting 11–13 sessions. Results: Cocaine self-administration did not differ between age groups. During initial set formation, adolescent-onset groups required more trials to reach criterion and made more errors than adult-onset groups. During the set shift phase, rats with adult-onset cocaine self-administration experience had higher proportions of correct trials and fewer perseverative + regressive errors than age-matched yoked-controls or rats with adolescent-onset cocaine self-administration experience. During reversal learning, rats with adult-onset cocaine experience (self-administered or passive) required fewer trials to reach criterion, and the self-administering rats made fewer perseverative + regressive errors than yoked-saline rats. Rats receiving adolescent-onset yoked-cocaine had more trial omissions and longer lever press reaction times than age-matched rats self-administering cocaine or receiving yoked-saline. Conclusions: Prior cocaine self-administration may impair memory to reduce proactive interference during set shifting and reversal learning in adult-onset but not adolescent-onset rats (developmental plasticity protective). Passive cocaine may disrupt aspects of executive function in adolescent-onset but not adult-onset rats (developmental plasticity hinders). PMID:24800898

  3. The 'Soil Cover App' - a new tool for fast determination of dead and living biomass on soil

    NASA Astrophysics Data System (ADS)

    Bauer, Thomas; Strauss, Peter; Riegler-Nurscher, Peter; Prankl, Johann; Prankl, Heinrich

    2017-04-01

    Worldwide, many agricultural practices rely on soil protection strategies that use living or dead biomass as soil cover. Especially when management practices focus on soil erosion mitigation, their effectiveness is directly driven by the amount of soil cover left on the soil surface. Hence there is a need for quick and reliable methods of soil cover estimation, not only for living biomass but particularly for dead biomass (mulch). Available methods for soil cover measurement are either subjective, depending on an educated guess, or time consuming, e.g., when the image is analysed manually at grid points. We therefore developed a mobile application using an algorithm based on entangled forest classification. The final output of the algorithm gives classified labels for each pixel of the input image as well as the percentage of each class: living biomass, dead biomass, stones and soil. Our training dataset consisted of more than 250 different images and their annotated class information. Images were taken under different environmental conditions such as lighting, soil coverages from 0% to 100%, and different materials such as living plants, residues, straw material and stones. We compared the results provided by our mobile application with a data set of 180 images that had been manually annotated. A comparison between both methods revealed a regression slope of 0.964 with a coefficient of determination R2 = 0.92, corresponding to an average error of about 4%. While the average error of living plant classification was about 3%, dead residue classification resulted in an 8% error. Thus the new mobile application tool offers a fast and easy way to obtain information on the protective potential of a particular agricultural management site.
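
    Once each pixel is labelled, the reported cover percentages are simple bookkeeping. A minimal sketch, with a random label array standing in for the classifier's per-pixel output:

    ```python
    # Per-class pixel counts turned into percent soil cover. The label array
    # is a mock stand-in for the classifier's output, not real app data.
    import numpy as np

    CLASSES = ["living biomass", "dead biomass", "stones", "soil"]
    labels = np.random.default_rng(0).integers(0, 4, size=(480, 640))

    counts = np.bincount(labels.ravel(), minlength=len(CLASSES))
    for name, share in zip(CLASSES, counts / labels.size):
        print(f"{name}: {100 * share:.1f}%")
    ```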

  4. PROTECTED-UK - Clinical pharmacist interventions in the UK critical care unit: exploration of relationship between intervention, service characteristics and experience level.

    PubMed

    Rudall, Nicola; McKenzie, Catherine; Landa, June; Bourne, Richard S; Bates, Ian; Shulman, Rob

    2017-08-01

    Clinical pharmacist (CP) interventions from the PROTECTED-UK cohort, a multi-site critical care interventions study, were further analysed to assess effects of: time on critical care, number of interventions, CP expertise and days of week, on impact of intervention and ultimately contribution to patient care. Intervention data were collected from 21 adult critical care units over 14 days. Interventions could be error, optimisation or consults, and were blind-coded to ensure consistency, prior to bivariate analysis. Pharmacy service demographics were further collated by investigator survey. Of the 20 758 prescriptions reviewed, 3375 interventions were made (intervention rate 16.1%). CPs spent 3.5 h per day (mean, ±SD 1.7) on direct patient care, reviewed 10.3 patients per day (±SD 4.2) and required 22.5 min (±SD 9.5) per review. Intervention rate had a moderate inverse correlation with the time the pharmacist spent on critical care (P = 0.05; r = 0.4). Optimisation rate had a strong inverse association with total number of prescriptions reviewed per day (P = 0.001; r = 0.7). A consultant CP had a moderate inverse correlation with number of errors identified (P = 0.008; r = 0.6). No correlation existed between the presence of electronic prescribing in critical care and any intervention rate. Few centres provided weekend services, although the intervention rate was significantly higher on weekends than weekdays. A CP is essential for safe and optimised patient medication therapy; an extended and developed pharmacy service is expected to reduce errors. CP services should be adequately staffed to enable adequate time for prescription review and maximal therapy optimisation. © 2016 Royal Pharmaceutical Society.

  5. "Cheating under pressure: A self-protection model of workplace cheating behavior": Correction to Mitchel et al. (2017).

    PubMed

    2018-01-01

    Reports an error in "Cheating Under Pressure: A Self-Protection Model of Workplace Cheating Behavior" by Marie S. Mitchell, Michael D. Baer, Maureen L. Ambrose, Robert Folger and Noel F. Palmer ( Journal of Applied Psychology , Advanced Online Publication, Aug 14, 2017, np). In the article, the fit statistics in Study 3 were reported in error. The fit of the measurement model is: Χ²(362) = 563.66, p = .001; CFI = .94; SRMR = .05; RMSEA = .04. The fit of the SEM model is: Χ²(362) = 563.66, p = .001; CFI = .94; SRMR = .05; RMSEA = .04. (The following abstract of the original article appeared in record 2017-34937-001.) Workplace cheating behavior is unethical behavior that seeks to create an unfair advantage and enhance benefits for the actor. Although cheating is clearly unwanted behavior within organizations, organizations may unknowingly increase cheating as a byproduct of their pursuit of high performance. We theorize that as organizations place a strong emphasis on high levels of performance, they may also enhance employees' self-interested motives and need for self-protection. We suggest that demands for high performance may elicit performance pressure-the subjective experience that employees must raise their performance efforts or face significant consequences. Employees' perception of the need to raise performance paired with the potential for negative consequences is threatening and heightens self-protection needs. Driven by self-protection, employees experience anger and heightened self-serving cognitions, which motivate cheating behavior. A multistudy approach was used to test our predictions. Study 1 developed and provided validity evidence for a measure of cheating behavior. Studies 2 and 3 tested our predictions in time-separated field studies. Results from Study 2 demonstrated that anger mediates the effects of performance pressure on cheating behavior. Study 3 replicated the Study 2 findings, and extended them to show that self-serving cognitions also mediate the effects of performance pressure on cheating behavior. Implications of our findings for theory and practice are provided. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Polarization entanglement purification for concatenated Greenberger-Horne-Zeilinger state

    NASA Astrophysics Data System (ADS)

    Zhou, Lan; Sheng, Yu-Bo

    2017-10-01

    Entanglement purification plays a fundamental role in long-distance quantum communication. In this paper, we put forward the first polarization entanglement purification protocol (EPP) for one type of nonlocal logic-qubit entanglement, the concatenated Greenberger-Horne-Zeilinger (C-GHZ) state, resorting to the photon-atom interaction in a low-quality-factor (low-Q) cavity. In contrast to existing EPPs, this protocol can purify bit-flip and phase-flip errors at both the physical and logical levels. Instead of measuring the photons directly, the protocol only requires measuring the atomic states to judge whether it has succeeded. In this way, the purified logic entangled states can be preserved for further application. Moreover, this makes the EPP repeatable, so that a higher fidelity of the logic entangled states can be obtained. As logic-qubit entanglement utilizes quantum error correction (QEC) codes, which have inherent stability against noise and decoherence, this EPP combined with QEC codes may provide double protection for the entanglement against channel noise and may have potential applications in long-distance quantum communication.

  7. A Novel Hybrid Error Criterion-Based Active Control Method for on-Line Milling Vibration Suppression with Piezoelectric Actuators and Sensors

    PubMed Central

    Zhang, Xingwu; Wang, Chenxi; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Wang, Shibin

    2016-01-01

    Milling vibration is one of the most serious factors affecting machining quality and precision. In this paper, a novel hybrid error criterion-based frequency-domain LMS active control method is constructed and used for vibration suppression of milling processes by piezoelectric actuators and sensors; only one Fast Fourier Transform (FFT) is used and no Inverse Fast Fourier Transform (IFFT) is involved. The correction formulas are derived by a steepest descent procedure and the control parameters are analyzed and optimized. A novel hybrid error criterion is then constructed to improve the adaptability, reliability and anti-interference ability of the control algorithm. Finally, based on piezoelectric actuators and acceleration sensors, a simulation of a spindle and a milling process experiment are presented to verify the proposed method. In addition, a protection routine is added to the control flow to enhance the reliability of the control method in applications. The simulation and experiment results indicate that the proposed method is an effective and reliable way of achieving on-line vibration suppression, and that machining quality can be noticeably improved. PMID:26751448
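
    As background for readers unfamiliar with LMS control, the sketch below shows a plain time-domain LMS canceller driving a residual toward zero; the paper's single-FFT frequency-domain variant and its hybrid error criterion are not reproduced here. Signals, path and gains are synthetic.

    ```python
    # Plain time-domain LMS adaptive canceller (background illustration, not
    # the paper's frequency-domain hybrid-criterion method). Data synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n, taps, mu = 4000, 16, 0.01
    x = rng.standard_normal(n)                  # reference vibration signal
    d = np.convolve(x, [0.6, -0.3, 0.1])[:n]    # disturbance via unknown path

    w = np.zeros(taps)                          # adaptive filter weights
    buf = np.zeros(taps)                        # recent reference samples
    errs = np.empty(n)
    for k in range(n):
        buf = np.roll(buf, 1)
        buf[0] = x[k]                           # shift in the newest sample
        y = w @ buf                             # anti-vibration output
        e = d[k] - y                            # residual at the sensor
        errs[k] = e
        w += mu * e * buf                       # steepest-descent LMS update

    print("residual power, first vs last 500 samples:",
          np.mean(errs[:500] ** 2), np.mean(errs[-500:] ** 2))
    ```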

  8. Time series forecasting of future claims amount of SOCSO's employment injury scheme (EIS)

    NASA Astrophysics Data System (ADS)

    Zulkifli, Faiz; Ismail, Isma Liana; Chek, Mohd Zaki Awang; Jamal, Nur Faezah; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md; Noor, Syamsul Ikram Mohd; Ahmad, Abu Bakar

    2012-09-01

    The Employment Injury Scheme (EIS) provides protection to employees who are injured in accidents whilst working, commuting between home and the workplace, taking a break during an authorized recess, or travelling on work-related business. The main purpose of this study is to forecast the EIS claims amount for the years 2011 to 2015 using appropriate models. The models were tested on actual EIS data from 1972 to 2010. Three different forecasting models were chosen for comparison: the Naïve with Trend Model, the Average Percent Change Model and the Double Exponential Smoothing Model. The best model was selected based on the smallest values of the error measures, the Mean Squared Error (MSE) and the Mean Absolute Percentage Error (MAPE). From the results, the model that best fits the EIS data is the Average Percent Change Model, and the forecast claims amount for 2011 to 2015 continues to trend upwards from 2010.
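
    The model-selection loop is easy to restate in code: fit each candidate to the history, score it with MAPE (or MSE), and forecast with the winner. The series below is synthetic, Brown's double exponential smoothing is assumed for the smoothing model, and the other two candidates follow their usual textbook forms.

    ```python
    # Compare naive-with-trend, average percent change, and Brown's double
    # exponential smoothing on a synthetic annual claims series via MAPE.
    import numpy as np

    y = np.array([120, 135, 150, 170, 185, 210, 240, 265, 300, 340], float)

    def naive_trend(y):                    # last value plus last step
        return 2 * y[1:-1] - y[:-2], 2 * y[-1] - y[-2]

    def avg_pct_change(y):
        g = np.mean(y[1:] / y[:-1])        # average growth factor
        return y[:-1] * g, y[-1] * g

    def double_exp(y, alpha=0.5):          # Brown's double exp. smoothing
        s1 = s2 = y[0]
        fits = []
        for v in y:
            fits.append(2 * s1 - s2 + (alpha / (1 - alpha)) * (s1 - s2))
            s1 = alpha * v + (1 - alpha) * s1
            s2 = alpha * s1 + (1 - alpha) * s2
        nxt = 2 * s1 - s2 + (alpha / (1 - alpha)) * (s1 - s2)
        return np.array(fits[1:]), nxt

    def mape(actual, fitted):
        return 100 * np.mean(np.abs((actual - fitted) / actual))

    models = {"naive+trend": (naive_trend, y[2:]),
              "avg % change": (avg_pct_change, y[1:]),
              "double exp": (double_exp, y[1:])}
    for name, (fn, actual) in models.items():
        fitted, nxt = fn(y)
        print(f"{name}: MAPE={mape(actual, fitted):.1f}%  next={nxt:.0f}")
    ```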

  9. Error Analysis of CM Data Products Sources of Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, Brian D.; Eckert-Gallup, Aubrey Celia; Cochran, Lainy Dromgoole

    The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE's Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.

  10. Optics measurement algorithms and error analysis for the proton energy frontier

    NASA Astrophysics Data System (ADS)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision achieved in 2012 (4 TeV) was insufficient to understand beam size measurements and to determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; owing to the improved algorithms, the derived optical parameters are significantly more precise, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.

  11. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    NASA Astrophysics Data System (ADS)

    Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.

    2016-09-01

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h-1 to 250 mm·h-1) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a statistically significant deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R2 > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R2 > 0.7) and a T vs. 1/Q model (R2 > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better-protected tipping bucket mechanism and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging, frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs and may have major implications for field- and watershed-scale hydrologic studies.
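
    As an illustration of the regression-based correction described above, the sketch below fits a linear model mapping gauge-reported intensity back to reference intensity and applies it to a field reading. All calibration values are hypothetical, not the paper's coefficients.

```python
import numpy as np

# Hypothetical calibration data for one tipping bucket rain gauge:
# simulated reference intensity (mm/h) vs. gauge-reported intensity (mm/h),
# with underestimation growing at higher intensities.
actual = np.array([5, 25, 50, 100, 150, 200, 250], dtype=float)
reported = np.array([5.0, 24.4, 47.8, 93.0, 136.5, 178.0, 219.0])

# Fit a simple linear correction, actual = a * reported + b, analogous to
# the per-gauge regression equations developed in the study.
a, b = np.polyfit(reported, actual, 1)

def correct(reading):
    """Apply the regression-based correction to a field reading (mm/h)."""
    return a * reading + b

field_reading = 120.0
print(f"raw {field_reading:.1f} mm/h -> corrected {correct(field_reading):.1f} mm/h")
```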

  12. International Conference on Environmental Ergonomics (4th) Held in Austin, Texas on 1-5 October 1990

    DTIC Science & Technology

    1991-01-15

    Measurement error in relation to clothing and tissue insulation -- a simplified view (Ducharme, Michel B.) ... Breathability measurements ... PROTECTIVE CLOTHING (Uwe Reischl, Francis N. Dukes-Dobos, Thomas E. Bernard and Kai Buller, Department of Environmental and Occupational Health, College of ...) ... BREATHING APPARATUS AND VENTILATION (William P. Morgan, Biodynamics Laboratory, University of Wisconsin-Madison, Madison, Wisconsin USA)

  13. Engineering Guide for Fire Protection and Detection Systems at Army Ammunition Plants. Volume 1. Selection and Design

    DTIC Science & Technology

    1980-12-01

    type of personnel likely to be using them, (3) the physical environment, (4) health and operational safety considerations. Carefully selected portable...operated apparatus must have the battery and energy-limiting components located outside the hazardous environment, and be so constructed that a direct...designate effect on equipment or personnel), based upon the most severe result of personnel error, procedural deficiencies, environment, design

  14. Optimal diabatic dynamics of Majorana-based quantum gates

    NASA Astrophysics Data System (ADS)

    Rahmani, Armin; Seradjeh, Babak; Franz, Marcel

    2017-08-01

    In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles, such as Majorana zero modes, and are protected from local environmental perturbations. In the adiabatic regime, with timescales set by the inverse gap of the system, the errors can be made arbitrarily small by performing the process more slowly. To enhance the performance of quantum information processing with Majorana zero modes, we apply the theory of optimal control to the diabatic dynamics of Majorana-based qubits. While we sacrifice complete topological protection, we impose constraints on the optimal protocol to take advantage of the nonlocal nature of topological information and increase the robustness of our gates. By using Pontryagin's maximum principle, we show that gates robustly equivalent to perfect adiabatic braiding can be implemented in finite times through optimal pulses. In our implementation, modifications to the device Hamiltonian are avoided. Focusing on thermally isolated systems, we study the effects of calibration errors and external white and 1/f (pink) noise on Majorana-based gates. While a noise-induced antiadiabatic behavior, where a slower process creates more diabatic excitations, prohibits indefinite enhancement of the robustness of the adiabatic scheme, our fast optimal protocols exhibit remarkable stability to noise and have the potential to significantly enhance the practical performance of Majorana-based information processing.

  15. Theoretical model for design and analysis of protectional eyewear.

    PubMed

    Zelzer, B; Speck, A; Langenbucher, A; Eppig, T

    2013-05-01

    Protectional eyewear has to fulfill both mechanical and optical stress tests. To pass those optical tests, the surfaces of safety spectacles have to be optimized to minimize optical aberrations. Starting with the surface data of three measured safety spectacles, a theoretical spectacle model (four spherical surfaces) is recalculated first and then optimized while keeping the front surface unchanged. In addition to spherical power, astigmatic power and prism imbalance, we used the wavefront error (five different viewing directions) to simulate the optical performance and to optimize the safety spectacle geometries. All surfaces were spherical (maximum global 'peak-to-valley' deviation between the measured surface and the best-fit sphere: 0.132 mm). Except for the spherical power of the Axcont model (-0.07 m(-1)), all simulated optical performance before optimization was better than the limits defined by standards. The optimization reduced the wavefront error by 1% to 0.150 λ (Windor/Infield), by 63% to 0.194 λ (Axcont/Bolle) and by 55% to 0.199 λ (2720/3M) without dropping below the measured thickness. The simulated optical performance of spectacle designs could be improved by smart optimization. A good optical design counteracts degradation caused by parameter variation throughout the manufacturing process. Copyright © 2013. Published by Elsevier GmbH.

  16. Measuring in-use ship emissions with international and U.S. federal methods.

    PubMed

    Khan, M Yusuf; Ranganathan, Sindhuja; Agrawal, Harshit; Welch, William A; Laroo, Christopher; Miller, J Wayne; Cocker, David R

    2013-03-01

    Regulatory agencies have shifted their emphasis from measuring emissions during certification cycles to measuring emissions during actual use. Emission measurements in this research were made from two different large ships at sea to compare the Simplified Measurement Method (SMM), compliant with the International Maritime Organization (IMO) NOx Technical Code, to the Portable Emission Measurement Systems (PEMS), compliant with the U.S. Environmental Protection Agency (EPA) 40 Code of Federal Regulations (CFR) Part 1065 for on-road emission testing. Emissions of nitrogen oxides (NOx), carbon dioxide (CO2), and carbon monoxide (CO) were measured at load points specified by the International Organization for Standardization (ISO) to compare the two measurement methods. The average percentage errors calculated for PEMS measurements were 6.5%, 0.6%, and 357% for NOx, CO2, and CO, respectively. The NOx percentage error of 6.5% corresponds to a 0.22 to 1.11 g/kW-hr error in moving from Tier III (3.4 g/kW-hr) to Tier I (17.0 g/kW-hr) emission limits. Emission factors (EFs) of NOx and CO2 measured via SMM were comparable to other studies and regulatory agencies' estimates. However, the EF(PM2.5) for this study was up to 26% higher than that currently used by regulatory agencies. The PM2.5 was comprised predominantly of hydrated sulfate (70-95%), followed by organic carbon (11-14%), ash (6-11%), and elemental carbon (0.4-0.8%). This research provides a direct comparison between the International Maritime Organization and U.S. Environmental Protection Agency reference methods for quantifying in-use emissions from ships. It provides correlations for NOx, CO2, and CO measured by a PEMS unit (certified by the U.S. EPA for on-road testing) against IMO's Simplified Measurement Method for on-board certification. It substantiates the measurements of NOx by PEMS and quantifies measurement error. This study also provides in-use modal and overall weighted emission factors of gaseous (NOx, CO, CO2, total hydrocarbons [THC], and SO2) and particulate pollutants from the main engine of a container ship, which are helpful in the development of emission inventories.

  17. Safe prescribing: a titanic challenge.

    PubMed

    Routledge, Philip A

    2012-10-01

    The challenge to achieve safe prescribing merits the adjective 'titanic'. The organisational and human errors leading to poor prescribing (e.g. underprescribing, overprescribing, misprescribing or medication errors) have parallels in the organisational and human errors that led to the loss of the Titanic 100 years ago this year. Prescribing can be adversely affected by communication failures, critical conditions, complacency, corner cutting, callowness and a lack of courage of conviction, all of which were also factors leading to the Titanic tragedy. These issues need to be addressed by a commitment to excellence, the final component of the 'Seven C's'. Optimal prescribing is dependent upon close communication and collaborative working between highly trained health professionals, whose role is to ensure maximum clinical effectiveness, whilst also protecting their patients from avoidable harm. Since humans are prone to error, and the environments in which they work are imperfect, it is not surprising that medication errors are common, occurring more often during the prescribing stage than during dispensing or administration. A commitment to excellence in prescribing includes a continued focus on lifelong learning (including interprofessional learning) in pharmacology and therapeutics. This should be accompanied by improvements in the clinical working environment of prescribers, and the encouragement of a strong safety culture (including reporting of adverse incidents as well as suspected adverse drug reactions whenever appropriate). Finally, members of the clinical team must be prepared to challenge each other, when necessary, to ensure that prescribing combines the highest likelihood of benefit with the lowest potential for harm. © 2012 The Author. British Journal of Clinical Pharmacology © 2012 The British Pharmacological Society.

  18. Protection of HEVC Video Delivery in Vehicular Networks with RaptorQ Codes

    PubMed Central

    Martínez-Rach, Miguel; López, Otoniel; Malumbres, Manuel Pérez

    2014-01-01

    With future vehicles equipped with processing capability, storage, and communications, vehicular networks will become a reality. A vast number of applications will arise that make use of this connectivity, some of them based on video streaming. In this paper we focus on streaming HEVC-coded video in vehicular networks and on how packet losses are handled with the aid of RaptorQ, a forward error correction (FEC) scheme. As vehicular networks are prone to packet loss, protection mechanisms are necessary to guarantee a minimum level of quality of experience to the final user. We have run simulations to evaluate which configurations fit best in this type of scenario. PMID:25136675
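
    The repair idea behind an FEC scheme such as RaptorQ can be illustrated with the simplest possible erasure code: a single bytewise XOR parity packet that recovers one lost source packet. The sketch below is a stand-in under that simplification, not the RaptorQ algorithm, and the packet contents are hypothetical.

```python
from functools import reduce

def xor_parity(packets):
    # Bytewise XOR of equal-length packets; RaptorQ generates many such
    # repair symbols from more elaborate linear combinations.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_one(received, parity):
    # Rebuild a single missing packet (the None entry) from the parity packet.
    missing = [i for i, p in enumerate(received) if p is None]
    assert len(missing) == 1, "one XOR parity packet repairs at most one loss"
    present = [p for p in received if p is not None] + [parity]
    return missing[0], xor_parity(present)

src = [b"NALU-01.", b"NALU-02.", b"NALU-03."]   # equal-size video packets (hypothetical)
parity = xor_parity(src)
idx, rebuilt = recover_one([src[0], None, src[2]], parity)
assert idx == 1 and rebuilt == src[1]
```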

  19. E-recruitment based clinical research: notes for Research Ethics Committees/Institutional Review Boards.

    PubMed

    Refolo, P; Sacchini, D; Minacori, R; Daloiso, V; Spagnolo, A G

    2015-01-01

    Patient recruitment is a critical point in today's clinical research. Several proposals have been made for improving it, but the effectiveness of these measures is uncertain. The use of the Internet (e-recruitment) could represent a great opportunity to improve patient enrolment, even though the effectiveness of this approach is not yet evident. E-recruitment could bring some advantages, such as better interaction between clinical research demand and clinical research supply, optimization of time and resources, and reduction of data entry errors. It raises some issues too, such as sampling errors, validity of informed consent, and protection of privacy. Research Ethics Committees/Institutional Review Boards should consider these critical points. The paper deals with Internet recruitment for clinical research. It also attempts to provide Research Ethics Committees/Institutional Review Boards with notes for assessing e-recruitment based clinical protocols.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachman, Daniel; Chen, Zhijiang; Wang, Christopher

    Phase errors caused by fabrication variations in silicon photonic integrated circuits are an important problem, which negatively impacts device yield and performance. This study reports our recent progress in the development of a method for permanent, postfabrication phase error correction of silicon photonic circuits based on femtosecond laser irradiation. Using a beam shaping technique, we achieve a 14-fold enhancement in the phase tuning resolution of the method with a Gaussian-shaped beam compared to a top-hat beam. The large improvement in the tuning resolution makes the femtosecond laser method potentially useful for very fine phase trimming of silicon photonic circuits. Finally, we also show that femtosecond laser pulses can directly modify silicon photonic devices through a SiO2 cladding layer, making it the only permanent postfabrication method that can tune silicon photonic circuits protected by an oxide cladding.

  1. Thermocouple Errors when Mounted on Cylindrical Surfaces in Abnormal Thermal Environments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakos, James T.; Suo-Anttila, Jill M.; Zepper, Ethan T.

    Mineral-insulated, metal-sheathed, Type-K thermocouples are used to measure the temperature of various items in high-temperature environments, often exceeding 1000 °C (1273 K). The thermocouple wires (chromel and alumel) are protected from the harsh environments by an Inconel sheath and magnesium oxide (MgO) insulation. The sheath and insulation are required for reliable measurements. Due to the sheath and MgO insulation, the temperature registered by the thermocouple is not the temperature of the surface of interest. In some cases, the error incurred is large enough to be of concern because these data are used for model validation, and thus the uncertainties of the data need to be well documented. This report documents the error using 0.062" and 0.040" diameter Inconel-sheathed, Type-K thermocouples mounted on cylindrical surfaces (inside of a shroud, outside and inside of a mock test unit). After an initial transient, the thermocouple bias errors typically range only about ±1-2% of the reading in K. After all of the uncertainty sources have been included, the total uncertainty to 95% confidence, for shroud or test unit TCs in abnormal thermal environments, is about ±2% of the reading in K, lower than the ±3% typically used for flat shrouds. Recommendations are provided in Section 6 to facilitate interpretation and use of the results.

  2. The forecast for RAC extrapolation: mostly cloudy.

    PubMed

    Goldman, Elizabeth; Jacobs, Robert; Scott, Ellen; Scott, Bonnie

    2011-09-01

    The current statutory and regulatory guidance for recovery audit contractor (RAC) extrapolation leaves providers with minimal protection against the process and a limited ability to challenge overpayment demands. Providers should not only understand the statutory and regulatory basis for extrapolation, but also be able to assess their extrapolation risk and their recourse through regulatory safeguards against contractor error. Providers also should aggressively appeal all incorrect RAC denials to minimize the potential impact of extrapolation.

  3. Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty

    DTIC Science & Technology

    2016-03-01

    cyclone THORPEX The Observing System Research and Predictability Experiment TIGGE THORPEX Interactive Grand Global Ensemble TS tropical storm ...forecast possible, but also relay the level of uncertainty unique to a given storm. This will better inform decision makers to help protect all assets at...for any given storm. Thus, the probabilities may increase or decrease (and the probability swath may widen or narrow) to provide a more

  4. Fractal Point Process and Queueing Theory and Application to Communication Networks

    DTIC Science & Technology

    1999-12-31

    use of nonlinear dynamics and chaos in the design of innovative analog error-protection codes for communications applications. In the chaos...the following theses, patent, and papers. 1. A. Narula, M. D. Trott, and G. W. Wornell, "Information-Theoretic Analysis of Multiple-Antenna...Bounds," in Proc. Int. Conf. Dec. Control, (Japan), Dec. 1996. 5. G. W. Wornell and M. D. Trott, "Efficient Signal Processing Techniques for

  5. FIASCO II failure to achieve a satisfactory cardiac outcome study: the elimination of system errors.

    PubMed

    Farid, Shakil; Page, Aravinda; Jenkins, David; Jones, Mark T; Freed, Darren; Nashef, Samer A M

    2013-07-01

    Death in low-risk cardiac surgical patients provides a simple and accessible method by which modifiable causes of death can be identified. In the first FIASCO study published in 2009, local potentially modifiable causes of preventable death in low-risk patients with a logistic EuroSCORE of 0-2 undergoing cardiac surgery were inadequate myocardial protection and lack of clarity in the chain of responsibility. As a result, myocardial protection was improved, and a formalized system introduced to ensure clarity of the chain of responsibility in the care of all cardiac surgical patients. The purpose of the current study was to re-audit outcomes in low-risk patients to see if improvements have been achieved. Patients with a logistic EuroSCORE of 0-2 who had cardiac surgery from January 2006 to August 2012 were included. Data were prospectively collected and retrospectively analysed. The case notes of patients who died in hospital were subject to internal and external review and classified according to preventability. Two thousand five hundred and forty-nine patients with a logistic EuroSCORE of 0-2 underwent cardiac surgery during the study period. Seven deaths occurred in truly low-risk patients, giving a mortality of 0.27%. Of the seven, three were considered preventable and four non-preventable. Mortality was marginally lower than in our previous study (0.37%), and no death occurred as a result of inadequate myocardial protection or communication failures. We postulate that the regular study of such events in all institutions may unmask systemic errors that can be remedied to prevent or reduce future occurrences. We encourage all units to use this methodology to detect any similarly modifiable factors in their practice.

  6. Paradigms for parasite conservation.

    PubMed

    Dougherty, Eric R; Carlson, Colin J; Bueno, Veronica M; Burgio, Kevin R; Cizauskas, Carrie A; Clements, Christopher F; Seidel, Dana P; Harris, Nyeema C

    2016-08-01

    Parasitic species, which depend directly on host species for their survival, represent a major regulatory force in ecosystems and a significant component of Earth's biodiversity. Yet the negative impacts of parasites observed at the host level have motivated a conservation paradigm of eradication, moving us farther from attainment of taxonomically unbiased conservation goals. Despite a growing body of literature highlighting the importance of parasite-inclusive conservation, most parasite species remain understudied, underfunded, and underappreciated. We argue the protection of parasitic biodiversity requires a paradigm shift in the perception and valuation of their role as consumer species, similar to that of apex predators in the mid-20th century. Beyond recognizing parasites as vital trophic regulators, existing tools available to conservation practitioners should explicitly account for the unique threats facing dependent species. We built upon concepts from epidemiology and economics (e.g., host-density threshold and cost-benefit analysis) to devise novel metrics of margin of error and minimum investment for parasite conservation. We define margin of error as the risk of accidental host extinction from misestimating equilibrium population sizes and predicted oscillations, while minimum investment represents the cost associated with conserving the additional hosts required to maintain viable parasite populations. This framework will aid in the identification of readily conserved parasites that present minimal health risks. To establish parasite conservation, we propose an extension of population viability analysis for host-parasite assemblages to assess extinction risk. In the direst cases, ex situ breeding programs for parasites should be evaluated to maximize success without undermining host protection. Though parasitic species pose a considerable conservation challenge, adaptations to conservation tools will help protect parasite biodiversity in the face of an uncertain environmental future. © 2015 Society for Conservation Biology.

  7. The rate of repeating X-rays in the medical centers of Jenin District/Palestine and how to reduce patient exposure to radiation

    NASA Astrophysics Data System (ADS)

    Assi, Abed Al Nasser

    2018-03-01

    Reduction of the patient's received radiation dose to as low as reasonably achievable (ALARA) is based on recommendations of radiation protection organizations such as the International Commission on Radiological Protection (ICRP) and the National Radiological Protection Board (NRPB). The aim of this study was to explore the frequency and characteristics of rejected/repeated radiographic films in governmental and private centers in Jenin city. The radiological centers were chosen based on their high volume of radiographic studies. The evaluation was carried out over a period of four months. The collected data were compiled at the end of each week and entered into a computer for analysis at the end of the study. Overall, 5000 films (images) were performed in four months. The average repeat rate of radiographic images was 10% (500 films). The repetition rate was the same for both thoracic and abdominal images (42%). The main reasons for repeating imaging were inadequate imaging quality (58.2%) and poor film processing (38%). Human error was the most likely reason necessitating the repetition of the radiographs (48%). Infant and children groups comprised 85% of the patient population that required repetition of the radiographic studies. In conclusion, we have a higher repetition rate of imaging studies compared to the international standards (10% vs. 4-6%, respectively). This is especially noticeable in infants and children, and mainly attributed to human error in obtaining and processing images. This is an important issue that needs to be addressed on a national level due to the ill effects associated with excessive exposure to radiation, especially in children, and to reduce the cost of the care delivered.

  8. Physicians involved in the care of patients with high risk of skin cancer should be trained regarding sun protection measures: evidence from a cross sectional study.

    PubMed

    Thomas, M; Rioual, E; Adamski, H; Roguedas, A-M; Misery, L; Michel, M; Chastel, F; Schmutz, J-L; Aubin, F; Marguery, M-C; Meyer, N

    2011-01-01

    Knowledge regarding sun protection is essential to change behaviour and to reduce sun exposure of patients at risk for skin cancer. Patient education regarding appropriate sun protection measures is a priority to reduce skin cancer incidence. The aim of this study was to evaluate the knowledge about sun protection and the recommendations given in a population of non-dermatologist physicians involved in the care of patients at high risk of skin cancer. This is a cross-sectional study. Physicians were e-mailed an anonymous questionnaire evaluating their knowledge about risk factors for skin cancer and sun protection, and about the role of the physician in providing sun protection recommendations. Of the responders, 71.4% considered that the risk of skin cancer of their patients was increased when compared with the general population. All the responders knew that UV radiation can contribute to inducing skin cancers, and 71.4% of them declared having adequate knowledge about sun protection measures. A proportion of 64.2% of them declared that they were able to give sun protection advice: using sunscreens (97.8%), wearing covering clothes (95.5%), performing regular medical skin examination (91.1%), avoiding direct sunlight exposure (77.8%), avoiding outdoor activities in the hottest midday hours (73.3%) and practising progressive exposure (44.4%). Non-dermatologist physicians reported a correct knowledge of UV-induced skin cancer risk factors. The majority of responders displayed adequate knowledge of sun protection measures and declared providing patients with sun protection recommendations on a regular basis. Several errors persisted. © 2010 The Authors. Journal of the European Academy of Dermatology and Venereology © 2010 European Academy of Dermatology and Venereology.

  9. Evaluation of the accuracy of the CyberKnife Synchrony™ Respiratory Tracking System using a plastic scintillator.

    PubMed

    Akino, Yuichi; Sumida, Iori; Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshiichi; Hayashida, Miori; Mabuchi, Nobuhisa; Ogawa, Kazuhiko

    2018-06-01

    The Synchrony™ Respiratory Tracking System of the CyberKnife® Robotic Radiosurgery System (Accuray, Inc., Sunnyvale, CA) enables real-time tracking of moving targets such as lung and liver tumors during radiotherapy. Although film measurements have been used for quality assurance of the tracking system, they cannot evaluate the temporal tracking accuracy. We have developed a verification system using a plastic scintillator that can evaluate the temporal accuracy of the CyberKnife Synchrony system. A phantom consisting of a U-shaped plastic frame with three fiducial markers was used. The phantom was moved on a plastic scintillator plate. To identify the phantom position on the recorded video in darkness, four pieces of fluorescent tape representing the corners of a 10 cm × 10 cm square around an 8 cm × 8 cm window were attached to the phantom. For a stable respiration model, the phantom was moved with the fourth power of a sinusoidal wave with breathing cycles of 4, 3, and 2 s and an amplitude of 1 cm. To simulate irregular breathing, the respiratory cycle was varied with Gaussian random numbers. A virtual target was generated at the center of the fluorescent markers using the MultiPlan™ treatment planning system. Photon beams were delivered using a fiducial tracking technique. In a dark room, the fluorescent light of the markers and the scintillation light at the beam position were recorded using a camera. For each video frame, a homography matrix was calculated from the four fluorescent marker positions, and the beam position derived from the scintillation light was corrected. To correct the displacement of the beam position due to oblique irradiation angles and other systematic measurement errors, offset values were derived from measurements with the phantom held stationary. The average SDs of the beam position measured without phantom motion were 0.16 mm and 0.20 mm for the lateral and longitudinal directions, respectively. For the stable respiration model, the tracking errors (mean ± SD) were 0.40 ± 0.64 mm, -0.07 ± 0.79 mm, and 0.45 ± 1.14 mm for breathing cycles of 4, 3, and 2 s, respectively. The tracking errors showed significant linear correlation with the phantom velocity. The correlation coefficients were 0.897, 0.913, and 0.957 for breathing cycles of 4, 3, and 2 s, respectively. The unstable respiration model also showed linear correlation between tracking errors and phantom velocity. The probability of tracking error incidents increased with decreasing length of the respiratory cycles. Although the tracking error incidents increased with larger variations in respiratory cycle, the effect on the cumulative probability was insignificant. For a respiratory cycle of 4 s, the maximum tracking error was 1.10 mm and 1.43 mm at probabilities of 10% and 5%, respectively. Large tracking errors were observed when there was phase shift between the tumor and the LED marker. This technique allows evaluation of the motion tracking accuracy of the Synchrony™ system over time by measurement of the photon beam. The velocity of the target and phase shift have significant effects on accuracy. This article is protected by copyright. All rights reserved.
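
    A minimal sketch of the frame-by-frame homography correction described above, assuming OpenCV is available; the marker pixel coordinates and beam centroid are hypothetical values, and the real system also subtracts stationary-phantom offsets.

```python
import numpy as np
import cv2

# Pixel positions of the four fluorescent corner markers in one video frame
# (hypothetical), and their known layout: a 10 cm x 10 cm square on the phantom.
marker_px = np.float32([[102, 98], [518, 110], [530, 522], [95, 508]])
marker_cm = np.float32([[0, 0], [10, 0], [10, 10], [0, 10]])

# Homography mapping image pixels to phantom-plane coordinates in cm.
H, _ = cv2.findHomography(marker_px, marker_cm)

# Map the scintillation-spot centroid from pixels to phantom coordinates.
beam_px = np.float32([[[310, 305]]])
beam_cm = cv2.perspectiveTransform(beam_px, H)
print("beam position on phantom plane (cm):", beam_cm.ravel())
```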

  10. Design Techniques for Power-Aware Combinational Logic SER Mitigation

    NASA Astrophysics Data System (ADS)

    Mahatme, Nihaar N.

    The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. Along with improved performance, speed and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits due to ionizing radiation particle strikes on reverse-biased semiconductor junctions. At the terrestrial level, these radiation-induced errors are caused by (1) alpha particles emitted as decay products of packaging material, (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88], and more recently muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment radiation-induced errors are a much bigger threat and are mainly caused by cosmic heavy ions, protons, etc. The effects of radiation exposure on circuits and measures to protect against them have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically, when these particles strike semiconductor junctions of transistors that are part of feedback structures such as SRAM memory cells or flip-flops, they can invert the cell content. Such a failure is formally called a bit-flip or single-event upset (SEU). When such particles strike sensitive junctions that are part of combinational logic gates, they produce transient voltage spikes or glitches called single-event transients (SETs) that can be latched by receiving flip-flops. As circuits are clocked faster, there are more clocking edges, which increases the likelihood of latching these transients. In older technology generations the probability of errors in flip-flops due to SETs being latched was much lower compared to direct strikes on flip-flops or SRAMs leading to SEUs, mainly because operating frequencies were much lower. The Intel Pentium II, for example, was fabricated in 0.35 μm technology and operated between 200-330 MHz. With technology scaling, however, operating frequencies have increased tremendously, and the contribution of soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02][Bu97]. Therefore there is a need to systematically characterize the problem of combinational logic single-event effects (SEE) and understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's Law loftily promises the blessing of smaller and faster transistors with each technology generation, it fails to highlight that the power density increases exponentially with every generation. The power density problem was partially solved in the 1970s and 1980s by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Following this, however, the technology miniaturization that enabled high-speed, multicore and parallel computing has steadily worsened the power density and power consumption problem.
Today minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, pervasive sensor networks and future eco-bio-sensors. Low power consumption is now regularly part of design philosophies for digital products with diverse applications, from computing to communication to healthcare. Thus designers today are left grappling with both a "power wall" and a "reliability wall". Unfortunately, when it comes to improving reliability through soft error mitigation, most approaches are invariably saddled with overheads in terms of area or speed and, more importantly, power. Thus, the cost of protecting combinational logic through the use of power-hungry mitigation approaches can disrupt the power budget significantly. Therefore there is a strong need to develop techniques that provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.

  11. Improving health care workers' protection against infection of Ebola hemorrhagic fever through video surveillance.

    PubMed

    Xi, Huijun; Cao, Jie; Liu, Jingjing; Li, Zhaoshen; Kong, Xiangyu; Wang, Yonghua; Chen, Jing; Ma, Su; Zhang, Lingjuan

    2016-08-01

    The purpose of this study was to investigate the importance of supervision through video surveillance in improving the quality of personal protection among health care workers working in Ebola treatment units. Wardens supervised, reminded, and guided health care workers' behavior through onsite voice and video systems when workers were in the suspected-patient observation ward and the diagnosed-patient ward of the Ebola treatment center. The observation results were recorded, and timely feedback was given to the health care workers. After 2 months of supervision, 1,797 cases of incorrect personal protection behaviors were identified and corrected. The error rate declined continuously; the rate of decline in the first 2 weeks was statistically different from that in other time periods. Through reminding and supervising, nonstandard personal protective behaviors can be discovered and corrected, which can help health care workers standardize personal protection. The timely feedback from video surveillance can also offer psychologic support and encouragement promptly to ease psychologic pressure. Finally, this can help ensure health care workers maintain a zero infection rate during patient treatment. A personal protective equipment protocol supervised by wardens through a video monitoring process can be used as an effective complement to conventional mutual supervision methods and can help health care workers avoid Ebola infection during treatment. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  12. Psychometric assessment of the processes of change scale for sun protection.

    PubMed

    Sillice, Marie A; Babbin, Steven F; Redding, Colleen A; Rossi, Joseph S; Paiva, Andrea L; Velicer, Wayne F

    2018-01-01

    The fourteen-factor Processes of Change Scale for Sun Protection assesses behavioral and experiential strategies that underlie the process of sun protection acquisition and maintenance. Variations of this measure have been used effectively in several randomized sun protection trials, both for evaluation and as a basis for intervention. However, there are no published studies, to date, that evaluate the psychometric properties of the scale. The present study evaluated factorial invariance and scale reliability in a national sample (N = 1360) of adults involved in a Transtheoretical model tailored intervention for exercise and sun protection, at baseline. Invariance testing ranged from least to most restrictive: Configural Invariance (constrains only the factor structure and zero loadings); Pattern Identity Invariance (equal factor loadings across target groups); and Strong Factorial Invariance (equal factor loadings and measurement errors). Multi-sample structural equation modeling tested the invariance of the measurement model across seven subgroups: age, education, ethnicity, gender, race, skin tone, and Stage of Change for Sun Protection. Strong factorial invariance was found across all subgroups. Internal consistency coefficient alpha and factor rho reliability, respectively, were .83 and .80 for the behavioral processes, .91 and .89 for the experiential processes, and .93 and .91 for the global scale. These results provide strong empirical evidence that the scale is consistent, has internal validity and can be used in research interventions with population-based adult samples.

  13. Smart Grid Privacy through Distributed Trust

    NASA Astrophysics Data System (ADS)

    Lipton, Benjamin

    Though the smart electrical grid promises many advantages in efficiency and reliability, the risks to consumer privacy have impeded its deployment. Researchers have proposed protecting privacy by aggregating user data before it reaches the utility, using techniques of homomorphic encryption to prevent exposure of unaggregated values. However, such schemes generally require users to trust in the correct operation of a single aggregation server. We propose two alternative systems based on secret sharing techniques that distribute this trust among multiple service providers, protecting user privacy against a misbehaving server. We also provide an extensive evaluation of the systems considered, comparing their robustness to privacy compromise, error handling, computational performance, and data transmission costs. We conclude that while all the systems should be computationally feasible on smart meters, the two methods based on secret sharing require much less computation while also providing better protection against corrupted aggregators. Building systems using these techniques could help defend the privacy of electricity customers, as well as customers of other utilities as they move to a more data-driven architecture.
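
    A minimal sketch of the additive secret-sharing idea underlying such schemes: each meter splits its reading into random shares, one per aggregation server, so no single server learns the reading while the shares still sum to the neighborhood total. The field size and reading values are hypothetical.

```python
import random

PRIME = 2**61 - 1   # field modulus, comfortably larger than any meter reading

def share(reading, n_servers):
    # Split a reading into n additive shares that sum to it modulo PRIME;
    # any n-1 shares together reveal nothing about the reading.
    shares = [random.randrange(PRIME) for _ in range(n_servers - 1)]
    shares.append((reading - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    # Each server sums the shares it received from every household; combining
    # the per-server subtotals yields only the neighborhood total.
    per_server = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(per_server) % PRIME

readings = [512, 430, 788]                        # hypothetical Wh readings
all_shares = [share(r, n_servers=3) for r in readings]
assert aggregate(all_shares) == sum(readings)
```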

  14. Prioritized packet video transmission over time-varying wireless channel using proactive FEC

    NASA Astrophysics Data System (ADS)

    Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay

    2000-12-01

    Quality of video transmitted over time-varying wireless channels relies heavily on a coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC using Reed-Solomon (RS)/rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets to instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the SNR averaged over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria, with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of end-to-end wireless video performance.
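
    A minimal sketch of the rate-selection step, assuming a hypothetical table mapping windowed average SNR and packet priority to an RCPC code rate; the thresholds, rates and class names are illustrative, not the paper's.

```python
from collections import deque

# Hypothetical mapping from windowed average SNR (dB) to RCPC code rates:
# stronger protection (lower rate) for worse channels and higher priority.
RATE_TABLE = [(15.0, {"high": "2/3", "low": "4/5"}),
              (10.0, {"high": "1/2", "low": "2/3"}),
              (0.0,  {"high": "1/3", "low": "1/2"})]

class ProactiveFec:
    def __init__(self, window=32):
        self.snr_window = deque(maxlen=window)   # sliding window of SNR estimates

    def observe(self, snr_db):
        self.snr_window.append(snr_db)

    def code_rate(self, priority):
        # Pick the RCPC rate for a packet from its priority and the windowed
        # average SNR, a proxy for the long-term fading state.
        avg = sum(self.snr_window) / len(self.snr_window)
        for threshold, rates in RATE_TABLE:
            if avg >= threshold:
                return rates[priority]
        return RATE_TABLE[-1][1][priority]

fec = ProactiveFec()
for snr in [12.1, 11.4, 9.8, 8.9]:               # fed-back SNR estimates (hypothetical)
    fec.observe(snr)
print(fec.code_rate("high"), fec.code_rate("low"))   # -> 1/2 2/3
```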

  15. Optimization of Fish Protection System to Increase Technosphere Safety

    NASA Astrophysics Data System (ADS)

    Khetsuriani, E. D.; Fesenko, L. N.; Larin, D. S.

    2017-11-01

    The article is concerned with field study data. Drawing upon prior information and considering the structural features of fish protection devices, we decided to conduct experimental research while varying three parameters: process pressure PCT, stream velocity Vp and washer nozzle inclination angle αc. The variability intervals of the examined factors are shown in Table 1. The conicity angle was assumed constant. The Box design B3 was chosen as a baseline, being close to D-optimal designs in its statistical characteristics. The number of device rotations and its fish fry protection efficiency were taken as the output functions of the optimization. The numerical values of the regression coefficients of the quadratic equations describing the behavior of the optimization functions Y1 and Y2, and their estimated errors, were calculated from the test results in accordance with the planning matrix. The adequacy or inadequacy of the obtained quadratic regression model is judged by checking whether Fexp ≤ Ftheor.
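
    A sketch of fitting such a quadratic regression surface by least squares; the coded factor levels and responses below are hypothetical stand-ins for the planning-matrix data, not the study's measurements.

```python
import numpy as np
from itertools import combinations

def quadratic_design(X):
    # Expand coded factors (pressure, velocity, nozzle angle) into a full
    # quadratic model matrix: intercept, linear, interaction and square terms.
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Hypothetical coded runs (factorial corners plus center points) and a
# measured fish-protection efficiency response Y2 for each run.
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y2 = np.array([71, 78, 74, 83, 70, 80, 76, 86, 82, 81, 80, 83], dtype=float)

beta, *_ = np.linalg.lstsq(quadratic_design(X), y2, rcond=None)
print("fitted quadratic coefficients:", np.round(beta, 2))
```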

  16. Elasto-plastic bond mechanics of embedded fiber optic sensors in concrete under uniaxial tension with strain localization

    NASA Astrophysics Data System (ADS)

    Li, Qingbin; Li, Guang; Wang, Guanglun

    2003-12-01

    Brittleness of the glass core inside fiber optic sensors limits their practical usage, and therefore they are coated with low-modulus, softer protective materials. Protective coatings absorb a portion of the strain, and hence only part of the structural strain is sensed. The study reported here corrects for this error through the development of a theoretical model that accounts for the loss of strain in the protective coating of optical fibers. The model treats the coating as an elasto-plastic material and formulates strain transfer coefficients for the elastic, elasto-plastic and strain localization phases of coating deformation during strain localization in concrete. The theoretical findings were verified through laboratory experimentation. The experimental program involved fabrication of interferometric optical fiber sensors, embedding them within mortar samples, and tensile tests in a closed-loop servo-hydraulic testing machine. The elasto-plastic strain transfer coefficients were employed to correct the optical fiber sensor data, and the results were compared with those of conventional extensometers.

  17. Fault tolerance in space-based digital signal processing and switching systems: Protecting up-link processing resources, demultiplexer, demodulator, and decoder

    NASA Technical Reports Server (NTRS)

    Redinbo, Robert

    1994-01-01

    Fault tolerance features in the first three major subsystems appearing in the next generation of communications satellites are described. These satellites will contain extensive but efficient high-speed processing and switching capabilities to support the low signal strengths associated with very small aperture terminals. The terminals' numerous data channels are combined through frequency division multiplexing (FDM) on the up-links and are protected individually by forward error-correcting (FEC) binary convolutional codes. The front-end processing resources, demultiplexer, demodulators, and FEC decoders extract all data channels which are then switched individually, multiplexed, and remodulated before retransmission to earth terminals through narrow beam spot antennas. Algorithm based fault tolerance (ABFT) techniques, which relate real number parity values with data flows and operations, are used to protect the data processing operations. The additional checking features utilize resources that can be substituted for normal processing elements when resource reconfiguration is required to replace a failed unit.
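
    The real-number parity idea of ABFT can be sketched for a matrix product: append a column checksum to one operand and a row checksum to the other, then verify that the checksums of the result still agree with its data block. This is a generic textbook instance of the technique, not the satellite subsystem design; the matrices and tolerance are illustrative.

```python
import numpy as np

def abft_matmul(A, B, tol=1e-6):
    # Algorithm-based fault tolerance for C = A @ B: append a real-number
    # column checksum to A and a row checksum to B, then verify that the
    # checksums of the product still agree with its data block.
    Ac = np.vstack([A, A.sum(axis=0)])                  # column-checksum matrix
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum matrix
    Cf = Ac @ Br
    C = Cf[:-1, :-1]
    col_ok = np.allclose(Cf[-1, :-1], C.sum(axis=0), atol=tol)
    row_ok = np.allclose(Cf[:-1, -1], C.sum(axis=1), atol=tol)
    return C, (col_ok and row_ok)

A, B = np.random.rand(4, 3), np.random.rand(3, 5)
C, ok = abft_matmul(A, B)
assert ok and np.allclose(C, A @ B)   # a corrupted element would flip `ok`
```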

  18. Development of a model protection and dynamic response monitoring system for the national transonic facility

    NASA Technical Reports Server (NTRS)

    Young, Clarence P., Jr.; Balakrishna, S.; Kilgore, W. Allen

    1995-01-01

    A state-of-the-art, computerized model protection and dynamic response monitoring system has been developed for the NASA Langley Research Center National Transonic Facility (NTF). This report describes the development of the model protection and shutdown system (MPSS). A technical description of the system is given along with discussions of the operation and capabilities of the system. Applications of the system to vibration problems are presented to demonstrate the system's capabilities, typical applications, versatility, and the research return on investment derived from the system to date. The system was custom designed for the NTF but can be used at other facilities or for other dynamic measurement/diagnostic applications. Potential commercial uses of the system are described. System capability has been demonstrated for forced response testing and for characterizing and quantifying bias errors for onboard inertial model attitude measurement devices. The system is installed in the NTF control room and has been used successfully for monitoring, recording and analyzing the dynamic response of several model systems tested in the NTF.

  19. Uses and biases of volunteer water quality data

    USGS Publications Warehouse

    Loperfido, J.V.; Beyer, P.; Just, C.L.; Schnoor, J.L.

    2010-01-01

    State water quality monitoring has been augmented by volunteer monitoring programs throughout the United States. Although a significant effort has been put forth by volunteers, questions remain as to whether volunteer data are accurate and can be used by regulators. In this study, typical volunteer water quality measurements from laboratory and environmental samples in Iowa were analyzed for error and bias. Volunteer measurements of nitrate+nitrite were significantly lower (about 2-fold) than concentrations determined via standard methods in both laboratory-prepared and environmental samples. Total reactive phosphorus concentrations analyzed by volunteers were similar to measurements determined via standard methods in laboratory-prepared samples and environmental samples, but were statistically lower than the actual concentration in four of the five laboratory-prepared samples. Volunteer water quality measurements were successful in identifying and classifying most of the waters which violate United States Environmental Protection Agency recommended water quality criteria for total nitrogen (66%) and for total phosphorus (52%), with the accuracy improving when accounting for error and biases in the volunteer data. An understanding of the error and bias in volunteer water quality measurements can allow regulators to incorporate volunteer water quality data into total maximum daily load planning or state water quality reporting. © 2010 American Chemical Society.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  1. Gridded National Inventory of U.S. Methane Emissions

    NASA Technical Reports Server (NTRS)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel

    2016-01-01

    We present a gridded inventory of US anthropogenic methane emissions with 0.1 deg x 0.1 deg spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  2. Gridded national inventory of U.S. methane emissions

    DOE PAGES

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...

    2016-11-16

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  3. Gridded National Inventory of U.S. Methane Emissions.

    PubMed

    Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L

    2016-12-06

    We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
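
    The disaggregation step common to the versions of this record above can be sketched as proportional allocation of a national total onto a grid using a spatial proxy. The proxy field and total below are hypothetical, and the actual inventory combines many such proxies per source type.

```python
import numpy as np

def allocate_national_total(total, proxy):
    # Spread a national emission total onto grid cells in proportion to a
    # spatial proxy (e.g., well counts or road density per 0.1-degree cell).
    return total * proxy / proxy.sum()

# Hypothetical 4 x 5 proxy field for one source type.
proxy = np.array([[0, 2, 5, 1, 0],
                  [1, 8, 9, 2, 0],
                  [0, 3, 4, 1, 1],
                  [0, 0, 1, 0, 0]], dtype=float)
grid = allocate_national_total(total=1200.0, proxy=proxy)  # units, e.g., Gg CH4/yr
assert np.isclose(grid.sum(), 1200.0)   # allocation conserves the national total
```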

  4. 76 FR 54932 - Revisions and Additions to Motor Vehicle Fuel Economy Label; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-06

    ...The Environmental Protection Agency and the Department of Transportation published a final rule regarding labeling of cars and trucks with fuel economy and environmental information in the Federal Register on July 6, 2011 (76 FR 39478). An error in the amendatory instruction for Sec. 86.1867-12 inadvertently calls for the removal of paragraph (a)(3)(iv)(A) of that section. This rule revises the amendatory language for consistency with the regulatory text.

  5. NSLS-II BPM System Protection from Rogue Mode Coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blednykh, A.; Bach, B.; Borrelli, A.

    2011-03-28

    Rogue mode RF shielding has been successfully designed and implemented in the production multipole vacuum chambers. In order to avoid systematic errors in the NSLS-II BPM system, we introduced a frequency shift of the HOMs by using RF metal shielding located in the antechamber slot of each multipole vacuum chamber. To satisfy the pumping requirement, the face of the shielding has been perforated with roughly 50 percent transparency. It stays clear of synchrotron radiation in each chamber.

  6. Securing insurance protection against fraud and abuse liability.

    PubMed

    Callison, S

    1999-07-01

    Healthcare organizations concerned about corporate compliance need to review securing appropriate insurance coverage as part of their corporate compliance program. Provider organizations often mistakenly expect that their directors and officers liability (D&O), malpractice, or standard errors and omissions (E&O) insurance policies will cover the cost of Medicare fraud and abuse fines. The insurance industry has developed a specific billing E&O insurance product to cover providers that run afoul of government fraud and abuse statutes.

  7. Subversion: The Neglected Aspect of Computer Security.

    DTIC Science & Technology

    1980-06-01

    fundamentally flawed. Recall from mathematics that it is sufficient to disprove a proposition (e.g., that a system is secure) by showing only one example where...made. This lack of protection is one of the fundamental reasons why the subversion of computer systems can be so effective. Later chapters will amplify...an area of code that will not be liable to revision. Operating system software, as pointed out earlier, is often riddled with design errors or subject

  8. DRMS World, Volume 33, Number 4, Fall 2008

    DTIC Science & Technology

    2008-01-01

    ...But what does she ask in return? What responsibility comes with that empowerment? "It's an age-old problem," ...said. ...a hunt without a map. By bringing all of the data sources, such as MIDAS, DAISY, FEDLOG and FLIS, together there was less chance for error... For example, slips and falls have always been a perpetual problem at... THE VOLUNTARY PROTECTION PROGRAM HAS BEEN IMPLEMENTED AT THE

  9. Reliable video transmission over fading channels via channel state estimation

    NASA Astrophysics Data System (ADS)

    Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay

    2000-04-01

    Transmission of continuous media such as video over time-varying wireless communication channels can benefit from the use of adaptation techniques in both source and channel coding. An adaptive feedback-based wireless video transmission scheme is investigated in this research with special emphasis on feedback-based adaptation. To be more specific, an interactive adaptive transmission scheme is developed by letting the receiver estimate the channel state information and send it back to the transmitter. By utilizing the feedback information, the transmitter is capable of adapting the level of protection by changing the flexible RCPC (rate-compatible punctured convolutional) code ratio depending on the instantaneous channel condition. The wireless channel is modeled as a fading channel, where the long-term and short-term fading effects are modeled as the log-normal fading and the Rayleigh flat fading, respectively. Then, its state (mainly the long-term fading portion) is tracked and predicted by using an adaptive LMS (least mean squares) algorithm. By utilizing the delayed feedback on the channel condition, the adaptation performance of the proposed scheme is first evaluated in terms of the error probability and the throughput. It is then extended to incorporate variable size packets of ITU-T H.263+ video with the error resilience option. Finally, the end-to-end performance of wireless video transmission is compared against several non-adaptive protection schemes.
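
    As a toy illustration of the feedback-driven rate adaptation described above, the sketch below selects an RCPC code rate from the fed-back long-term SNR estimate; the thresholds and the rate set are illustrative assumptions, not values from the paper.

        def select_rcpc_rate(estimated_snr_db):
            """Pick the highest-rate (least redundant) RCPC code whose SNR
            requirement is met by the receiver's channel estimate; fall back
            to the strongest code when the channel is poor.

            The (threshold, rate) pairs are illustrative assumptions.
            """
            table = [(12.0, "8/9"), (8.0, "4/5"), (4.0, "2/3")]
            for snr_needed_db, rate in table:
                if estimated_snr_db >= snr_needed_db:
                    return rate
            return "1/2"  # strongest protection for deep fades

        # As the estimated channel degrades, the selected protection grows.
        for snr in (15.0, 9.0, 5.0, 1.0):
            print(snr, "dB ->", select_rcpc_rate(snr))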

  10. Injury prophylaxis in paragliding

    PubMed Central

    Schulze, W; Richter, J; Schulze, B; Esenwein, S; Buttner-Janz, K

    2002-01-01

    Objectives: To show trends in paragliding injuries and derive recommendations for safety precautions for paraglider pilots on the basis of accident statistics, interviews, questionnaires, medical reports, and current stage of development of paragliding equipment. Methods: All paragliding accidents in Germany have to be reported. Information on 409 accidents was collected and analysed for the period 1997–1999. Results: There was a substantial decrease in reported accidents (166 in 1997; 127 in 1998; 116 in 1999). The number of accidents resulting in spinal injuries was 62 in 1997, 42 in 1998, and 38 in 1999. The most common cause of accident was deflation of the glider (32.5%), followed by oversteering (13.9%), collision with obstacles (12.0%), take off errors (10.3%), landing errors (13.7%), misjudgment of weather conditions (4.9%), unsatisfactory preflight checks (4.9%), mid-air collisions with other flyers (2.2%), accidents during winching (2.2%), and defective equipment (0.5%). Accidents predominantly occurred in mountain areas. Fewer than 100 flights had been logged for 40% of injured pilots. In a total of 39 accidents in which emergency parachutes were used, 10 pilots were seriously injured (26%) and an additional three were killed (8%). Conclusions: Injuries in paragliding caused by unpredictable situations can be minimised by (a) using safer gliders in the beginner or intermediate category, (b) improving protection systems, such as padded back protection, and (c) improving pilot skills through performance and safety training. PMID:12351336

  11. Prevalence of sunburn, sun protection, and indoor tanning behaviors among Americans: review from national surveys and case studies of 3 states.

    PubMed

    Buller, David B; Cokkinides, Vilma; Hall, H Irene; Hartman, Anne M; Saraiya, Mona; Miller, Eric; Paddock, Lisa; Glanz, Karen

    2011-11-01

    Exposure to ultraviolet radiation (from solar and nonsolar sources) is a risk factor for skin cancer. We sought to summarize recent estimates on sunburns, sun-protection behaviors, and indoor tanning available from national and selected statewide behavioral surveys. Estimates of the prevalence of sunburn, sun-protection behaviors, and indoor tanning by US adults, adolescents, and children collected in national surveys in 1992, 2004 to 2005, and 2007 to 2009 were identified and extracted from searches of computerized databases (ie, MEDLINE and PsychINFO), reference lists, and survey World Wide Web sites. Sunburn estimates from 3 state Behavioral Risk Factors Surveillance Systems were also analyzed. Latest published estimates (2005) showed that 34.4% of US adults were sunburned in the past year. Incidence of sunburns was highest among men, non-Hispanic whites, young adults, and high-income groups in national surveys. About 3 in 10 adults routinely practiced sun-protection behaviors, and women and older adults took the most precautions. Among adolescents, 69% were sunburned in the previous summer and less than 40% practiced sun protection. Approximately 60% of parents applied sunscreen and a quarter used shade to protect children. Indoor tanning was prevalent among younger adults and females. Limitations include potential recall errors and social desirability in self-report measures, and lack of current data on children. Many Americans experienced sunburns and a minority engaged in protective behaviors. Females and older adults were most vigilant about sun protection. Substantial proportions of young women and adolescents recently used indoor tanning. Future efforts should promote protective hats, clothing, and shade; motivate males and younger populations to take precautions; and convince women and adolescents to reduce indoor tanning. Copyright © 2011 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  12. Study on UKF based federal integrated navigation for high dynamic aviation

    NASA Astrophysics Data System (ADS)

    Zhao, Gang; Shao, Wei; Chen, Kai; Yan, Jie

    2011-08-01

    High dynamic aircraft, such as hypersonic vehicles, are an attractive new generation of vehicles that extend near-space aviation to a large flight envelope in both speed and altitude. The complex flight environment of high dynamic vehicles demands a navigation scheme of high accuracy and stability. The conventional federated integration of the Strapdown Inertial Navigation System (SINS) and the Global Positioning System (GPS) based on the Extended Kalman Filter (EKF) fails when the GPS signal blacks out during high-speed flight, so a new high-precision, high-stability integrated navigation approach is presented in this paper, in which SINS, GPS, and a Celestial Navigation System (CNS) are combined in a federated information-fusion configuration based on the nonlinear Unscented Kalman Filter (UKF) algorithm. First, the state error of the new integrated system is modeled. Following this error model, the SINS serves as the mathematical platform for the navigation solution. The SINS combined with GPS forms one UKF-based error-estimation subsystem that yields a local optimal estimate, and the SINS combined with CNS forms another. A non-reset federated filter based on partial information fuses the two local optimal estimates into a global optimal error estimate, which is used to correct the SINS navigation solution. The χ2 fault-detection method detects subsystem faults, and a faulty subsystem is isolated for the duration of the fault to protect the system from divergence. The integrated system exploits the complementary advantages of SINS, GPS, and CNS, giving an immense improvement in accuracy and reliability for high dynamic navigation applications. Simulation results show that the federated fusion of GPS and CNS corrections to the SINS solution is reasonable and effective, with estimation performance that satisfies the demands of high dynamic flight navigation. The UKF-based integrated scheme is superior to its EKF counterpart, with smaller estimation error and a faster convergence rate.
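
    A minimal sketch of the non-reset federated fusion step described above, assuming each local filter (SINS/GPS and SINS/CNS) already produces an error-state estimate with covariance; the information-weighted combination shown is the common federated master step, not necessarily the paper's exact formulation.

        import numpy as np

        def federated_fuse(x1, P1, x2, P2):
            """Fuse two local error-state estimates by information weighting:
            the global information matrix is the sum of the local ones, and
            the global estimate is the precision-weighted combination."""
            I1 = np.linalg.inv(P1)          # information of SINS/GPS subfilter
            I2 = np.linalg.inv(P2)          # information of SINS/CNS subfilter
            Pg = np.linalg.inv(I1 + I2)     # global (fused) covariance
            xg = Pg @ (I1 @ x1 + I2 @ x2)   # global (fused) error estimate
            return xg, Pg

        # Toy example: two 3-state local estimates of the same SINS error.
        x1 = np.array([0.10, -0.02, 0.05]); P1 = np.diag([0.04, 0.04, 0.09])
        x2 = np.array([0.08,  0.01, 0.03]); P2 = np.diag([0.09, 0.01, 0.04])
        xg, Pg = federated_fuse(x1, P1, x2, P2)
        print(xg)  # lies between the local estimates, weighted by precision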

  13. Validation of the firefighter WFI treadmill protocol for predicting VO2 max.

    PubMed

    Dolezal, B A; Barr, D; Boland, D M; Smith, D L; Cooper, C B

    2015-03-01

    The Wellness-Fitness Initiative submaximal treadmill exercise test (WFI-TM) is recommended by the US National Fire Protection Association to assess aerobic capacity (VO2 max) in firefighters. However, predicting VO2 max from submaximal tests can result in errors leading to erroneous conclusions about fitness. The aim was to investigate the level of agreement of VO2 max predicted from the WFI-TM with its direct measurement using exhaled gas analysis. The WFI-TM was performed to volitional fatigue. Differences between estimated VO2 max (derived from the WFI-TM equation) and direct measurement (exhaled gas analysis) were compared by paired t-test and agreement was determined using Pearson Product-Moment correlation and Bland-Altman analysis. Statistical significance was set at P < 0.05. Fifty-nine men performed the WFI-TM. Mean (standard deviation) values for estimated and measured VO2 max were 44.6 (3.4) and 43.6 (7.9) ml/kg/min, respectively (P < 0.01). The mean bias by which WFI-TM overestimated VO2 max was 0.9 ml/kg/min with a 95% prediction interval of ±13.1. Prediction errors for 22% of subjects were within ±5%; 36% had errors greater than or equal to ±15% and 7% had greater than ±30% errors. The correlation between predicted and measured VO2 max was r = 0.55 (standard error of the estimate = 2.8 ml/kg/min). WFI-TM predicts VO2 max with 11% error. There is a tendency to overestimate aerobic capacity in less fit individuals and to underestimate it in more fit individuals leading to a clustering of values around 42 ml/kg/min, a criterion used by some fire departments to assess fitness for duty. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
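
    A minimal sketch of the Bland-Altman statistics reported above (mean bias plus an agreement interval of bias ± 1.96 SD), assuming paired arrays of predicted and directly measured VO2 max; the sample values are hypothetical, not the study data.

        import numpy as np

        def bland_altman(predicted, measured):
            """Return mean bias and the 95% limits of agreement."""
            diff = np.asarray(predicted) - np.asarray(measured)
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)  # 95% agreement half-width
            return bias, (bias - half_width, bias + half_width)

        # Hypothetical paired values (ml/kg/min).
        pred = np.array([44.1, 45.0, 43.8, 46.2, 44.9])
        meas = np.array([40.5, 47.3, 38.9, 52.0, 43.1])
        bias, (lo, hi) = bland_altman(pred, meas)
        print(f"bias = {bias:+.1f} ml/kg/min, 95% LoA = [{lo:.1f}, {hi:.1f}]")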

  14. Breakdowns in communication of radiological findings: an ethical and medico-legal conundrum

    PubMed Central

    Murphy, Daniel R.; Singh, Hardeep

    2016-01-01

    Communication problems in diagnostic testing have increased in both number and importance in recent years. The medical and legal impact of failure of communication is dramatic. Over the past decades, the courts have expanded and strengthened the duty imposed on radiologists to timely communicate radiologic abnormalities to referring physicians and perhaps the patients themselves in certain situations. The need to communicate these findings goes beyond strict legal requirements: there is a moral imperative as well. The Code of Medical Ethics of the American Medical Association points out that “Ethical values and legal principles are usually closely related, but ethical obligations typically exceed legal duties.” Thus, from the perspective of the law, radiologists are required to communicate important unexpected findings to referring physicians in a timely fashion, or alternatively to the patients themselves. From a moral perspective, radiologists should want to effect such communications. Practice standards, moral values, and ethical statements from professional medical societies call for full disclosure of medical errors to patients affected by them. Surveys of radiologists and non-radiologic physicians reveal that only few would divulge all aspects of the error to the patient. In order to encourage physicians to disclose errors to patients and assist in protecting them in some manner if malpractice litigation follows, more than 35 states have passed laws that do not allow a physician’s admission of an error and apologetic statements to be revealed in the courtroom. Whether such disclosure increases or decreases the likelihood of a medical malpractice lawsuit is unclear, but ethical and moral considerations enjoin physicians to disclose errors and offer apologies. PMID:27006891

  15. Hydrologic Design in the Anthropocene

    NASA Astrophysics Data System (ADS)

    Vogel, R. M.; Farmer, W. H.; Read, L.

    2014-12-01

    In an era dubbed the Anthropocene, the natural world is being transformed by a myriad of human influences. As anthropogenic impacts permeate hydrologic systems, hydrologists are challenged to fully account for such changes and develop new methods of hydrologic design. Deterministic watershed models (DWMs), which can account for the impacts of changes in land use, climate and infrastructure, are becoming increasingly popular for the design of flood and/or drought protection measures. As with all models that are calibrated to existing datasets, DWMs are subject to model error or uncertainty. In practice, the model error component of DWM predictions is typically ignored, yet DWM simulations that ignore model error produce output that cannot reproduce the statistical properties of the observations they are intended to replicate. In the context of hydrologic design, we demonstrate how ignoring model error can lead to systematic downward bias in flood quantiles, upward bias in drought quantiles and upward bias in water supply yields. By reincorporating model error, we document how DWMs can be used to generate results that mimic actual observations and preserve their statistical behavior. In addition to use of DWMs for improved predictions in a changing world, improved communication of risk and reliability is also needed. Traditional statements of risk and reliability in hydrologic design have been characterized by return periods, but such statements often assume that the annual probability of experiencing a design event remains constant throughout the project horizon. We document the general impact of nonstationarity on the average return period and reliability in the context of hydrologic design. Our analyses reveal that return periods do not provide meaningful expressions of the likelihood of future hydrologic events. Instead, knowledge of system reliability over future planning horizons can more effectively prepare society and communicate the likelihood of future hydrologic events of interest.

  16. Delivery of tidal volume from four anaesthesia ventilators during volume-controlled ventilation: a bench study.

    PubMed

    Wallon, G; Bonnet, A; Guérin, C

    2013-06-01

    Tidal volume (V(T)) must be accurately delivered by anaesthesia ventilators in the volume-controlled ventilation mode in order for lung protective ventilation to be effective. However, the impact of fresh gas flow (FGF) and lung mechanics on delivery of V(T) by the newest anaesthesia ventilators has not been reported. We measured delivered V(T) (V(TI)) from four anaesthesia ventilators (Aisys™, Flow-i™, Primus™, and Zeus™) on a pneumatic test lung set with three combinations of lung compliance (C, ml cm H2O(-1)) and resistance (R, cm H2O litre(-1) s): C60R5, C30R5, C60R20. For each CR, three FGF rates (0.5, 3, 10 litre min(-1)) were investigated at three set V(T)s (300, 500, 800 ml) and two values of PEEP (0 and 10 cm H2O). The volume error = [(V(TI) - V(Tset))/V(Tset)] × 100 was computed under body temperature and pressure, saturated (BTPS) conditions and compared using analysis of variance. For each CR and each set V(T), the absolute value of the volume error significantly declined from Aisys™ to Flow-i™, Zeus™, and Primus™. For C60R5, these values were 12.5% for Aisys™, 5% for Flow-i™ and Zeus™, and 0% for Primus™. With an increase in FGF, absolute values of the volume error increased only for Aisys™ and Zeus™. However, in C30R5, the volume error was minimal at mid-FGF for Aisys™. The results were similar at PEEP 10 cm H2O. Under experimental conditions, the volume error differed significantly between the four new anaesthesia ventilators tested and was influenced by FGF, although this effect may not be clinically relevant.
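
    The volume error used above is a simple relative difference; a worked example with illustrative numbers:

        def volume_error(vt_delivered_ml, vt_set_ml):
            """Volume error (%) = (delivered - set) / set * 100."""
            return (vt_delivered_ml - vt_set_ml) / vt_set_ml * 100.0

        # A ventilator set to 500 ml that delivers 460 ml under-delivers by 8%.
        print(volume_error(460.0, 500.0))  # -> -8.0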

  17. Least reliable bits coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicates that LRBC using block codes is a desirable method for high data rate implementations.
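
    A minimal sketch of the rate bookkeeping behind a multilevel scheme such as LRBC: each modulated bit level gets its own component code, with stronger (lower-rate) codes on the less reliable levels, and spectral efficiency is the sum of the level rates. The rates below are illustrative assumptions, not the cited design.

        def multilevel_rates(level_rates):
            """Overall code rate and bits/symbol of a multilevel scheme.

            level_rates[i] is the component-code rate protecting bit level i,
            ordered from least reliable (strongest code) to most reliable.
            """
            overall_rate = sum(level_rates) / len(level_rates)
            bits_per_symbol = sum(level_rates)
            return overall_rate, bits_per_symbol

        # Illustrative 8-PSK-style example with three unequally protected levels.
        print(multilevel_rates([0.5, 0.75, 1.0]))  # -> (0.75, 2.25)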

  18. How do patients respond to violation of their information privacy?

    PubMed

    Kuo, Kuang-Ming; Ma, Chen-Chung; Alexander, Judith W

    The introduction of electronic medical records (EMRs) can expose patients to the risk of infringement of their privacy. The purpose of this study was to explore the relationship between patients' concerns about information privacy and their protective responses. A questionnaire survey conducted in a Taiwanese hospital revealed that, regarding information privacy, patients' concerns about the collection of information about themselves, the secondary use of this information and the possibility of errors in the recorded information were associated with their information privacy-protective responses, while concern for unauthorised access to their information by other staff in the medical facility was not. Medical facilities should devote every effort to alleviate patients' concerns about the invasion of their information privacy to avoid eroding the reputation of medical facilities and impeding the promotion of EMRs.

  19. Blind topological measurement-based quantum computation.

    PubMed

    Morimae, Tomoyuki; Fujii, Keisuke

    2012-01-01

    Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3 × 10(-3), which is comparable to that (7.5 × 10(-3)) of non-blind topological quantum computation. As the error per gate of the order 10(-3) was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.

  20. Quantifying light exposure patterns in young adult students

    NASA Astrophysics Data System (ADS)

    Alvarez, Amanda A.; Wildsoet, Christine F.

    2013-08-01

    Exposure to bright light appears to be protective against myopia in both animals (chicks, monkeys) and children, but quantitative data on human light exposure are limited. In this study, we report on a technique for quantifying light exposure using wearable sensors. Twenty-seven young adult subjects wore a light sensor continuously for two weeks during one of three seasons, and also completed questionnaires about their visual activities. Light data were analyzed with respect to refractive error and season, and the objective sensor data were compared with subjects' estimates of time spent indoors and outdoors. Subjects' estimates of time spent indoors and outdoors were in poor agreement with durations reported by the sensor data. The results of questionnaire-based studies of light exposure should thus be interpreted with caution. The role of light in refractive error development should be investigated using multiple methods such as sensors to complement questionnaires.

  1. A 'Quad-Disc' static pressure probe for measurement in adverse atmospheres - With a comparative review of static pressure probe designs

    NASA Astrophysics Data System (ADS)

    Nishiyama, Randall T.; Bedard, Alfred J., Jr.

    1991-09-01

    There are many areas of need for accurate measurements of atmospheric static pressure. These include observations of surface meteorology, airport altimeter settings, pressure distributions around buildings, moving measurement platforms, as well as basic measurements of fluctuating pressures in turbulence. Most of these observations require long-term observations in adverse environments (e.g., rain, dust, or snow). Currently, many pressure measurements are made, of necessity, within buildings, thus involving potential errors of several millibars in mean pressure during moderate winds, accompanied by large fluctuating pressures induced by the structure. In response to these needs, a 'Quad-Disk' pressure probe for continuous, outdoor monitoring purposes was designed which is inherently weather-protected. This Quad-Disk probe has the desirable features of omnidirectional response and small error in pitch. A review of past static pressure probes contrasts design approaches and capabilities.

  2. Two-Photon Laser-Induced Fluorescence of O and N Atoms for the Study of Heterogeneous Catalysis in a Diffusion Reactor

    NASA Technical Reports Server (NTRS)

    Pallix, Joan B.; Copeland, Richard A.; Arnold, James O. (Technical Monitor)

    1995-01-01

    Advanced laser-based diagnostics have been developed to examine catalytic effects and atom/surface interactions on thermal protection materials. This study establishes the feasibility of using laser-induced fluorescence for detection of O and N atom loss in a diffusion tube to measure surface catalytic activity. The experimental apparatus is versatile in that it allows fluorescence detection to be used for measuring species selective recombination coefficients as well as diffusion tube and microwave discharge diagnostics. Many of the potential sources of error in measuring atom recombination coefficients by this method have been identified and taken into account. These include scattered light, detector saturation, sample surface cleanliness, reactor design, gas pressure and composition, and selectivity of the laser probe. Recombination coefficients and their associated errors are reported for N and O atoms on a quartz surface at room temperature.

  3. Quantifying light exposure patterns in young adult students

    PubMed Central

    Alvarez, Amanda A.; Wildsoet, Christine F.

    2014-01-01

    Exposure to bright light appears to be protective against myopia in both animals (chicks, monkeys) and children, but quantitative data on human light exposure are limited. In this study, we report on a technique for quantifying light exposure using wearable sensors. Twenty-seven young adult subjects wore a light sensor continuously for two weeks during one of three seasons, and also completed questionnaires about their visual activities. Light data were analyzed with respect to refractive error and season, and the objective sensor data were compared with subjects’ estimates of time spent indoors and outdoors. Subjects’ estimates of time spent indoors and outdoors were in poor agreement with durations reported by the sensor data. The results of questionnaire-based studies of light exposure should thus be interpreted with caution. The role of light in refractive error development should be investigated using multiple methods such as sensors to complement questionnaires. PMID:25342873

  4. Blind topological measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Fujii, Keisuke

    2012-09-01

    Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3×10-3, which is comparable to that (7.5×10-3) of non-blind topological quantum computation. As the error per gate of the order 10-3 was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.

  5. Integrity modelling of tropospheric delay models

    NASA Astrophysics Data System (ADS)

    Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó

    2017-04-01

    The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies can be found in the literature investigating the accuracy of these models, for safety-of-life applications it is crucial to study and model the worst-case performance of these models at very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project funded by the ESA PECS programme is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimal Operational Standards has an associated residual error of 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data are present) and then taking the point where the cumulative distribution reaches an exceedance level of 10-7. While the resulting standard deviation is much higher than the standard deviation that best fits the data (0.05 meters), it is surely conservative for most applications. In the context of the INTRO project, some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM reanalysis numerical weather model data and the ray-tracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative, estimation of the residual error of tropospheric delays, the mathematical formulation of the overbounding models is currently under development. This study introduces the main findings of the residual error analysis of the studied tropospheric delay models and discusses the preliminary analysis of the integrity model development for safety-of-life applications.

  6. Getting the right blood to the right patient: the contribution of near-miss event reporting and barrier analysis.

    PubMed

    Kaplan, H S

    2005-11-01

    Safety and reliability in blood transfusion are not static, but are dynamic non-events. Since performance deviations continually occur in complex systems, their detection and correction must be accomplished over and over again. Non-conformance must be detected early enough to allow for recovery or mitigation. Near-miss events afford early detection of possible system weaknesses and provide an early chance at correction. National event reporting systems, both voluntary and involuntary, have begun to include near-miss reporting in their classification schemes, raising awareness for their detection. MERS-TM is a voluntary safety reporting initiative in transfusion. Currently 22 hospitals submit reports anonymously to a central database which supports analysis of a hospital's own data and that of an aggregate database. The system encourages reporting of near-miss events, where the patient is protected from receiving an unsuitable or incorrect blood component by a planned or unplanned recovery step. MERS-TM data suggest approximately 90% of events are near-misses, with 10% caught after issue but before transfusion. Near-miss reporting may increase total reports ten-fold. The ratio of near-misses to events with harm is 339:1, consistent with other industries' ratio of 300:1, which has been proposed as a measure of reporting in event reporting systems. Use of a risk matrix and an event's relation to protective barriers allows prioritization of these events. Near-misses recovered by planned barriers occur ten times more frequently than unplanned recoveries. A bedside check of the patient's identity against that on the blood component is an essential, final barrier. How the typical two-person check is performed is critical. Even properly done, this check is ineffective against sampling and testing errors. Blood testing at the bedside just prior to transfusion minimizes the risk of such upstream events. However, even with simple and well designed devices, training may be a critical issue. Sample errors account for more than half of reported events. The most dangerous miscollection is a blood sample passing acceptance with no previous patient results for comparison. Bar code labels or collection of a second sample may counter this upstream vulnerability. Further upstream barriers have been proposed to counter the precariousness of urgent blood sample collection in a changing, unstable situation. One, a linking device, allows safer labeling of tubes away from the bedside; the second, a forcing function, prevents omission of critical patient identification steps. Errors in the blood bank itself account for 15% of errors, with a high potential severity. In one such event, a component incorrectly issued but safely detected prior to transfusion focused attention on multitasking's contribution to laboratory error. In sum, use of near-miss information, by enhancing barriers supporting error prevention and mitigation, increases our capacity to get the right blood to the right patient.

  7. Assessing the statistical significance of the achieved classification error of classifiers constructed using serum peptide profiles, and a prescription for random sampling repeated studies for massive high-throughput genomic and proteomic studies.

    PubMed

    Lyons-Weiler, James; Pelikan, Richard; Zeh, Herbert J; Whitcomb, David C; Malehorn, David E; Bigbee, William L; Hauskrecht, Milos

    2005-01-01

    Peptide profiles generated using SELDI/MALDI time of flight mass spectrometry provide a promising source of patient-specific information with high potential impact on the early detection and classification of cancer and other diseases. The new profiling technology comes, however, with numerous challenges and concerns. Particularly important are concerns of reproducibility of classification results and their significance. In this work we describe a computational validation framework, called PACE (Permutation-Achieved Classification Error), that lets us assess, for a given classification model, the significance of the Achieved Classification Error (ACE) on the profile data. The framework compares the performance statistic of the classifier on true data samples and checks if these are consistent with the behavior of the classifier on the same data with randomly reassigned class labels. A statistically significant ACE increases our belief that a discriminative signal was found in the data. The advantage of PACE analysis is that it can be easily combined with any classification model and is relatively easy to interpret. PACE analysis does not protect researchers against confounding in the experimental design, or other sources of systematic or random error. We use PACE analysis to assess significance of classification results we have achieved on a number of published data sets. The results show that many of these datasets indeed possess a signal that leads to a statistically significant ACE.
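
    A minimal sketch of the permutation logic behind PACE, assuming a scikit-learn-style classifier and cross-validated accuracy as the performance statistic; the function name and parameters are illustrative.

        import numpy as np
        from sklearn.model_selection import cross_val_score

        def pace_pvalue(clf, X, y, n_permutations=200, seed=None):
            """Compare cross-validated accuracy on true labels against the
            null distribution obtained by refitting on shuffled labels."""
            rng = np.random.default_rng(seed)
            true_acc = cross_val_score(clf, X, y, cv=5).mean()
            null_accs = np.empty(n_permutations)
            for i in range(n_permutations):
                y_perm = rng.permutation(y)  # destroy any real class signal
                null_accs[i] = cross_val_score(clf, X, y_perm, cv=5).mean()
            # Fraction of permutations doing at least as well as true labels.
            p = (1 + np.sum(null_accs >= true_acc)) / (1 + n_permutations)
            return true_acc, p

        # Usage (hypothetical data):
        #   from sklearn.linear_model import LogisticRegression
        #   acc, p = pace_pvalue(LogisticRegression(max_iter=1000), X, y)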

  8. Prevalence and associated risk factors of undercorrected refractive errors among people with diabetes in Shanghai.

    PubMed

    Zhu, Mengjun; Tong, Xiaowei; Zhao, Rong; He, Xiangui; Zhao, Huijuan; Zhu, Jianfeng

    2017-11-28

    To investigate the prevalence and risk factors of undercorrected refractive error (URE) among people with diabetes in the Baoshan District of Shanghai, where data on undercorrected refractive error are limited. The study was a population-based survey of 649 persons (aged 60 years or older) with diabetes in Baoshan, Shanghai in 2009. One copy of the questionnaire was completed for each subject. Examinations included a standardized refraction and measurement of presenting and best-corrected visual acuity (BCVA), tonometry, slit lamp biomicroscopy, and fundus photography. The calculated age-standardized prevalence rate of URE was 16.63% (95% confidence interval [CI] 13.76-19.49). For visually impaired subjects (presenting vision worse than 20/40 in the better eye), the prevalence of URE was up to 61.11%, and 75.93% of subjects could achieve visual acuity improvement of at least one line using appropriate spectacles. Under multiple logistic regression analysis, older age, female gender, non-farmer occupation, increasing degree of myopia, lens opacity status, diabetic retinopathy (DR), body mass index (BMI) lower than normal, and poor glycaemic control were associated with higher URE levels. Wearing distance eyeglasses was a protective factor against URE. The prevalence of undercorrected refractive error among diabetic adults in Shanghai was high. Health education and regular refractive assessment are needed for diabetic adults. Persons with diabetes should be more aware that poor vision is often correctable, especially those with risk factors.

  9. Feedback-tuned, noise resilient gates for encoded spin qubits

    NASA Astrophysics Data System (ADS)

    Bluhm, Hendrik

    Spin 1/2 particles form native two level systems and thus lend themselves as a natural qubit implementation. However, encoding a single qubit in several spins entails benefits, such as reducing the resources necessary for qubit control and protection from certain decoherence channels. While several varieties of such encoded spin qubits have been implemented, accurate control remains challenging, and leakage out of the subspace of valid qubit states is a potential issue. Optimal performance typically requires large pulse amplitudes for fast control, which is prone to systematic errors and prohibits standard control approaches based on Rabi flopping. Furthermore, the exchange interaction typically used to electrically manipulate encoded spin qubits is inherently sensitive to charge noise. I will discuss all-electrical, high-fidelity single qubit operations for a spin qubit encoded in two electrons in a GaAs double quantum dot. Starting from a set of numerically optimized control pulses, we employ an iterative tuning procedure based on measured error syndromes to remove systematic errors. Randomized benchmarking yields an average gate fidelity exceeding 98% and a leakage rate into invalid states of 0.2%. These gates exhibit a certain degree of resilience to both slow charge and nuclear spin fluctuations due to dynamical correction analogous to a spin echo. Furthermore, the numerical optimization minimizes the impact of fast charge noise. Both types of noise make relevant contributions to gate errors. The general approach is also adaptable to other qubit encodings and exchange based two-qubit gates.

  10. Management under uncertainty: guide-lines for incorporating connectivity into the protection of coral reefs

    NASA Astrophysics Data System (ADS)

    McCook, L. J.; Almany, G. R.; Berumen, M. L.; Day, J. C.; Green, A. L.; Jones, G. P.; Leis, J. M.; Planes, S.; Russ, G. R.; Sale, P. F.; Thorrold, S. R.

    2009-06-01

    The global decline in coral reefs demands urgent management strategies to protect resilience. Protecting ecological connectivity, within and among reefs, and between reefs and other ecosystems is critical to resilience. However, connectivity science is not yet able to clearly identify the specific measures for effective protection of connectivity. This article aims to provide a set of principles or practical guidelines that can be applied currently to protect connectivity. These ‘rules of thumb’ are based on current knowledge and expert opinion, and on the philosophy that, given the urgency, it is better to act with incomplete knowledge than to wait for detailed understanding that may come too late. The principles, many of which are not unique to connectivity, include: (1) allow margins of error in extent and nature of protection, as insurance against unforeseen or incompletely understood threats or critical processes; (2) spread risks among areas; (3) aim for networks of protected areas which are: (a) comprehensive and spread—protect all biotypes, habitats and processes, etc., to capture as many possible connections, known and unknown; (b) adequate—maximise extent of protection for each habitat type, and for the entire region; (c) representative—maximise likelihood of protecting the full range of processes and spatial requirements; (d) replicated—multiple examples of biotypes or processes enhances risk spreading; (4) protect entire biological units where possible (e.g. whole reefs), including buffers around core areas. Otherwise, choose bigger rather than smaller areas; (5) provide for connectivity at a wide range of dispersal distances (within and between patches), emphasising distances <20-30 km; and (6) use a portfolio of approaches, including but not limited to MPAs. Three case studies illustrating the application of these principles to coral reef management in the Bohol Sea (Philippines), the Great Barrier Reef (Australia) and Kimbe Bay (Papua New Guinea) are described.

  11. Development of permissible exposure limits: the California experience.

    PubMed

    Cohen, Richard; Steinmaus, Craig; Quinlan, Patricia; Ku, Robert; Cooper, Michael; Roberts, Tim

    2006-01-01

    The California OSHA Airborne Contaminant Advisory Committee reviewed several hundred substances and recommended occupational exposure limits with the intent of worker and employer protection. The model used offers important benefits. First, by allowing open meetings, the process was transparent, and input could be offered by concerned stakeholders. Second, the process was data-driven and, therefore, less susceptible to bias and error. Third, by incorporating members with backgrounds in toxicology, epidemiology, risk assessment, occupational medicine, and industrial hygiene, the process fostered a thorough and diverse assessment of substances.

  12. Protecting quantum memories using coherent parity check codes

    NASA Astrophysics Data System (ADS)

    Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv

    2018-07-01

    Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware, by designing a [[4, 2, 2
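
    The [[4,2,2]] code referenced above is the standard four-qubit error-detecting code with stabilizers XXXX and ZZZZ (four physical qubits, two logical qubits, distance 2). A minimal sketch in the binary symplectic representation, checking that single-qubit errors anticommute with a stabilizer and are therefore detected; this illustrates the code itself, not the paper's CPC construction or hardware run.

        import numpy as np

        # Stabilizers as (x|z) bit vectors over GF(2).
        stabilizers = [
            (np.ones(4, dtype=int), np.zeros(4, dtype=int)),  # XXXX
            (np.zeros(4, dtype=int), np.ones(4, dtype=int)),  # ZZZZ
        ]

        def syndrome(err_x, err_z):
            """Symplectic product of the error with each stabilizer (mod 2);
            a 1 means the error anticommutes and is flagged."""
            return [int((sx @ err_z + sz @ err_x) % 2) for sx, sz in stabilizers]

        # Every single-qubit X error trips the ZZZZ check (and by symmetry
        # every Z error trips XXXX), so all weight-1 errors are detected.
        for q in range(4):
            ex = np.zeros(4, dtype=int); ez = np.zeros(4, dtype=int)
            ex[q] = 1  # X error on qubit q
            print(f"X on qubit {q}: syndrome {syndrome(ex, ez)}")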

  13. Low-cost enclosure for the sub-millimeter telescope

    NASA Astrophysics Data System (ADS)

    Ulich, B. L.; Hoffmann, W. F.; Davison, W. B.; Baars, J. W. M.; Mezger, P. G.

    1984-01-01

    The University of Arizona and the Max-Planck-Institut fuer Radioastronomie are collaborating to construct a submillimeter-wavelength radio telescope facility at the summit of Mt. Lemmon (2791 m above sea level) near Tucson, Arizona. A corotating building has been designed to protect the 10 m-diameter Submillimeter Telescope against storm damage, to provide large instrumentation rooms at the Nasmyth foci, and to minimize degradation of the reflector profile accuracy and pointing errors caused by wind forces and solar radiation.

  14. Low-Cost Enclosure For The Sub-Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Ulich, Bobby L.; Hoffmann, William F.; Davison, Warren B.; Baars, Jacob W. M.; Mezger, Peter G.

    1983-11-01

    The University of Arizona and the Max-Planck-Institut für Radioastronomie are collaborating to construct a sub-millimeter wavelength radio telescope facility at the summit of Mt. Lemmon (2791 m above sea level) near Tucson, Arizona. We have designed a corotating building to protect the 10 m diameter Sub-Millimeter Telescope (SMT) against storm damage, to provide large instrumentation rooms at the Nasmyth foci, and to minimize degradation of the reflector profile accuracy and pointing errors caused by wind forces and solar radiation.

  15. Ambient Air Issue from New Jersey Department of Environmental Protection

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  16. Intelligent, Self-Diagnostic Thermal Protection System for Future Spacecraft

    NASA Technical Reports Server (NTRS)

    Hyers, Robert W.; SanSoucie, Michael P.; Pepyne, David; Hanlon, Alaina B.; Deshmukh, Abhijit

    2005-01-01

    The goal of this project is to provide self-diagnostic capabilities to the thermal protection systems (TPS) of future spacecraft. Self-diagnosis is especially important in thermal protection systems (TPS), where large numbers of parts must survive extreme conditions after weeks or years in space. In-service inspections of these systems are difficult or impossible, yet their reliability must be ensured before atmospheric entry. In fact, TPS represents the greatest risk factor after propulsion for any transatmospheric mission. The concepts and much of the technology would be applicable not only to the Crew Exploration Vehicle (CEV), but also to ablative thermal protection for aerocapture and planetary exploration. Monitoring a thermal protection system on a Shuttle-sized vehicle is a daunting task: there are more than 26,000 components whose integrity must be verified with very low rates of both missed faults and false positives. The large number of monitored components precludes conventional approaches based on centralized data collection over separate wires; a distributed approach is necessary to limit the power, mass, and volume of the health monitoring system. Distributed intelligence with self-diagnosis further improves capability, scalability, robustness, and reliability of the monitoring subsystem. A distributed system of intelligent sensors can provide an assurance of the integrity of the system, diagnosis of faults, and condition-based maintenance, all with provable bounds on errors.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holliday, B.

    Information that would allow an assessment of the standard of radiological protection in the United Kingdom is very limited. The Radiological Protection Service (now the National Radiological Protection Board) has provided a monitoring and advisory service to dentists for many years but very limited use has been made of this service. In a recent survey a total of 114 dentists was visited in representative practices in South East England and it was established that only 6.5 per cent of dentists in general practice do not use radiography as an adjunct to their practice (Smith, 1969). Of the 88 x-ray sets which were examined, 24 per cent had less than the recommended thickness of aluminium filtration, while 25 per cent had a fixed field size which was larger than necessary for dental radiography; in addition, 27 per cent of the timers were found to have an error of greater than 20 per cent in repetition of the pre-set exposure time. These figures are consistent with the results of surveys performed by the National Radiological Protection Board at the request of dentists, but the sample has been small, i.e., a few per cent of dentists in general practice. It is also possible that these results are not typical as the dentist requesting such a survey may have a greater awareness than others of the need for protection.

  18. Security in Full-Force

    NASA Technical Reports Server (NTRS)

    2002-01-01

    When fully developed for NASA, Vanguard Enforcer(TM) software, which emulates the activities of highly technical security system programmers, auditors, and administrators, was among the first intrusion detection programs to keep human errors from affecting security, and to ensure the integrity of a computer's operating systems as well as the protection of mission-critical resources. Vanguard Enforcer was delivered in 1991 to Johnson Space Center and has been protecting systems and critical data there ever since. In August of 1999, NASA granted Vanguard exclusive rights to commercialize the Enforcer system for the private sector. In return, Vanguard continues to supply NASA with ongoing research, development, and support of Enforcer. The Vanguard Enforcer 4.2 is one of several surveillance technologies that make up the Vanguard Security Solutions line of products. Using a mainframe environment, Enforcer 4.2 achieves previously unattainable levels of automated security management.

  19. Quantum information is physical

    NASA Astrophysics Data System (ADS)

    DiVincenzo, D. P.; Loss, D.

    1998-03-01

    We discuss a few current developments in the use of quantum mechanically coherent systems for information processing. In each of these developments, Rolf Landauer has played a crucial role in nudging us, and other workers in the field, into asking the right questions, some of which we have been lucky enough to answer. A general overview of the key ideas of quantum error correction is given. We discuss how quantum entanglement is the key to protecting quantum states from decoherence in a manner which, in a theoretical sense, is as effective as the protection of digital data from bit noise. We also discuss five general criteria which must be satisfied to implement a quantum computer in the laboratory, and we illustrate the application of these criteria by discussing our ideas for creating a quantum computer out of the spin states of coupled quantum dots.

  20. Utilizing photon number parity measurements to demonstrate quantum computation with cat-states in a cavity

    NASA Astrophysics Data System (ADS)

    Petrenko, A.; Ofek, N.; Vlastakis, B.; Sun, L.; Leghtas, Z.; Heeres, R.; Sliwa, K. M.; Mirrahimi, M.; Jiang, L.; Devoret, M. H.; Schoelkopf, R. J.

    2015-03-01

    Realizing a working quantum computer requires overcoming the many challenges that come with coupling large numbers of qubits to perform logical operations. These include improving coherence times, achieving high gate fidelities, and correcting for the inevitable errors that will occur throughout the duration of an algorithm. While impressive progress has been made in all of these areas, the difficulty of combining these ingredients to demonstrate an error-protected logical qubit, comprised of many physical qubits, still remains formidable. With its large Hilbert space, superior coherence properties, and single dominant error channel (single photon loss), a superconducting 3D resonator acting as a resource for a quantum memory offers a hardware-efficient alternative to multi-qubit codes [Leghtas et al., PRL 2013]. Here we build upon recent work on cat-state encoding [Vlastakis et al., Science 2013] and photon-parity jumps [Sun et al., 2014] by exploring the effects of sequential measurements on a cavity state. Employing a transmon qubit dispersively coupled to two superconducting resonators in a cQED architecture, we explore further the application of parity measurements to characterizing such a hybrid qubit/cat state architecture. In so doing, we demonstrate the promise of integrating cat states as central constituents of future quantum codes.

  1. Microwave power transmission system studies. Volume 2: Introduction, organization, environmental and spaceborne systems analyses

    NASA Technical Reports Server (NTRS)

    Maynard, O. E.; Brown, W. C.; Edwards, A.; Haley, J. T.; Meltz, G.; Howell, J. M.; Nathan, A.

    1975-01-01

    Introduction, organization, analyses, conclusions, and recommendations for each of the spaceborne subsystems are presented. Environmental effects - propagation analyses are presented with appendices covering radio wave diffraction by random ionospheric irregularities, self-focusing plasma instabilities and ohmic heating of the D-region. Analyses of dc to rf conversion subsystems and system considerations for both the amplitron and the klystron are included with appendices for the klystron covering cavity circuit calculations, output power of the solenoid-focused klystron, thermal control system, and confined flow focusing of a relativistic beam. The photovoltaic power source characteristics are discussed as they apply to interfacing with the power distribution flow paths, magnetic field interaction, dc to rf converter protection, and power distribution, including estimates for the power budget, weights, and costs. Analyses for the transmitting antenna consider the aperture illumination and size, with associated efficiencies and ground power distributions. Analyses of subarray types and dimensions, attitude error, flatness, phase error, subarray layout, frequency tolerance, attenuation, waveguide dimensional tolerances, and mechanical (including thermal) considerations are included. Implications associated with transportation, assembly and packaging, attitude control and alignment are discussed. The phase front control subsystem, including both ground-based pilot-signal-driven adaptive and ground-command approaches with their associated phase errors, is analyzed.

  2. Review of Significant Incidents and Close Calls in Human Spaceflight from a Human Factors Perspective

    NASA Technical Reports Server (NTRS)

    Silva-Martinez, Jackelynne; Ellenberger, Richard; Dory, Jonathan

    2017-01-01

    This project aims to identify poor human factors design decisions that led to error-prone systems, or did not facilitate the flight crew making the right choices; and to verify that NASA is effectively preventing similar incidents from occurring again. This analysis was performed by reviewing significant incidents and close calls in human spaceflight identified by the NASA Johnson Space Center Safety and Mission Assurance Flight Safety Office. The review of incidents shows whether the identified human errors were due to the operational phase (flight crew and ground control) or if they initiated at the design phase (includes manufacturing and test). This classification was performed with the aid of the NASA Human Systems Integration domains. This in-depth analysis resulted in a tool that helps with the human factors classification of significant incidents and close calls in human spaceflight, which can be used to identify human errors at the operational level, and how they were or should be minimized. Current governing documents on human systems integration for both government and commercial crew were reviewed to see if current requirements, processes, training, and standard operating procedures protect the crew and ground control against these issues occurring in the future. Based on the findings, recommendations to target those areas are provided.

  3. Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence

    PubMed Central

    Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos

    2016-01-01

    When quantum computing becomes a wide-spread commercial reality, Quantum Search Algorithms (QSA) and especially Grover’s QSA will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. Practically, decoherence will remain unavoidable as is the Gaussian noise of classic circuits imposed by the Brownian motion of electrons, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by depolarizing channels’ deleterious effects imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover’s QSA at an aggressive depolarizing probability of 10−3, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane’s quantum error correction code is employed. Finally, apart from Steane’s code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered. PMID:27924865
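
    A back-of-envelope companion to the figures above: the ideal Grover iteration count and success probability for a 4096-entry database, with a crude depolarizing penalty attached; the per-iteration gate count and the independent-error assumption are illustrative, not the paper's simulation model.

        import math

        def grover_iterations(n_entries):
            """Optimal iteration count for one marked item."""
            return math.floor(math.pi / 4 * math.sqrt(n_entries))

        def ideal_success(n_entries, iterations):
            """Noiseless success probability after the given iterations."""
            theta = math.asin(1 / math.sqrt(n_entries))
            return math.sin((2 * iterations + 1) * theta) ** 2

        N = 4096
        L = grover_iterations(N)       # -> 50 iterations
        print(L, ideal_success(N, L))  # ideal success is essentially 1.0

        # Crude noise model (an assumption): with g gates per iteration each
        # depolarizing with probability p, the run survives error-free with
        # probability (1 - p) ** (g * L).
        p, g = 1e-3, 30
        print(ideal_success(N, L) * (1 - p) ** (g * L))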

  4. Quantitative Evaluation of Stereo Visual Odometry for Autonomous Vessel Localisation in Inland Waterway Sensing Applications

    PubMed Central

    Kriechbaumer, Thomas; Blackburn, Kim; Breckon, Toby P.; Hamilton, Oliver; Rivas Casado, Monica

    2015-01-01

    Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environment protection and conservation. A key challenge is the accurate localisation of the vessel, where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six degrees of freedom platform operating under guided motion, but stochastic variation in yaw, pitch and roll. Evaluation is based on a 663 m-long trajectory (>15,000 image frames) and statistical error analysis against ground truth position from a target tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we are able to attribute this error to the depth of tracked features from the camera in the scene and variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring. PMID:26694411
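
    A minimal sketch of the headline statistic (per-frame Euclidean position error against the tachymeter ground truth), assuming the two trajectories are already time-aligned; the coordinates are hypothetical.

        import numpy as np

        def position_error_stats(estimated_xy, ground_truth_xy):
            """Mean and standard deviation of per-frame position error (m)."""
            est = np.asarray(estimated_xy, dtype=float)
            gt = np.asarray(ground_truth_xy, dtype=float)
            errors = np.linalg.norm(est - gt, axis=1)  # per-frame error
            return errors.mean(), errors.std(ddof=1)

        # Hypothetical 2D positions (m): visual odometry vs. tachymeter.
        vo = [(0.0, 0.0), (1.1, 0.0), (2.0, 0.1), (3.1, 0.0)]
        gt = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
        print(position_error_stats(vo, gt))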

  5. Design and evaluation of sparse quantization index modulation watermarking schemes

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
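
    As background on the embedding primitive, the sketch below implements plain scalar QIM (binary dither modulation) on an array of coefficients: each bit selects one of two interleaved quantization lattices, and extraction picks the nearer lattice. It is a minimal illustration only; the paper's sparse QIM operates on selected wavelet coefficients, and the grouping schemes and BCH coding it evaluates are omitted here. The step size DELTA is an arbitrary choice.

    ```python
    import numpy as np

    DELTA = 8.0   # quantization step: larger -> more robust, more distortion

    def qim_embed(coeffs, bits, delta=DELTA):
        """Embed one bit per coefficient: each bit selects one of two
        interleaved quantization lattices (binary dither modulation)."""
        dither = np.where(np.asarray(bits) == 0, -delta / 4, delta / 4)
        return delta * np.round((coeffs - dither) / delta) + dither

    def qim_extract(coeffs, delta=DELTA):
        """Recover bits by picking the lattice nearest to each coefficient."""
        d0, d1 = -delta / 4, delta / 4
        e0 = np.abs(coeffs - (delta * np.round((coeffs - d0) / delta) + d0))
        e1 = np.abs(coeffs - (delta * np.round((coeffs - d1) / delta) + d1))
        return (e1 < e0).astype(int)

    rng = np.random.default_rng(0)
    c = rng.normal(0.0, 20.0, 16)        # stand-in for wavelet coefficients
    bits = rng.integers(0, 2, 16)
    assert np.array_equal(qim_extract(qim_embed(c, bits)), bits)
    ```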

  6. Investigating risky, distracting, and protective peer passenger effects in a dual process framework.

    PubMed

    Ross, Veerle; Jongen, Ellen M M; Brijs, Kris; Brijs, Tom; Wets, Geert

    2016-08-01

    Prior studies indicated higher collision rates among young novice drivers with peer passengers. This driving simulator study provided a test for a dual process theory of risky driving by examining social rewards (peer passengers) and cognitive control (inhibitory control). The analyses included two age groups (17-18 yrs, n=30; 21-24 yrs, n=20). Risky, distracting, and protective effects were classified by underlying driver error mechanisms. In the first drive, participants drove alone. In the second, participants drove with a peer passenger. Red-light running (violation) was more prevalent in the presence of peer passengers, which provided initial support for a dual process theory of risky driving. In a subgroup with low inhibitory control, speeding (violation) was more prevalent in the presence of peer passengers. Reduced lane-keeping variability reflected distracting effects. Nevertheless, possible protective effects for amber-light running and hazard handling (cognition and decision-making) were found in the drive with peer passengers. Avenues for further research and possible implications for targets of future driver training programs are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Carbon dioxide emission tallies for 210 U.S. coal-fired power plants: a comparison of two accounting methods.

    PubMed

    Quick, Jeffrey C

    2014-01-01

    Annual CO2 emission tallies for 210 coal-fired power plants during 2009 were more accurately calculated from fuel consumption records reported by the U.S. Energy Information Administration (EIA) than from measurements by Continuous Emissions Monitoring Systems (CEMS) reported by the U.S. Environmental Protection Agency. Results from these accounting methods for individual plants vary by ±10.8%. Although the differences systematically vary with the method used to certify flue-gas flow instruments in CEMS, additional sources of CEMS measurement error remain to be identified. Limitations of the EIA fuel consumption data are also discussed. Consideration of weighing, sample collection, laboratory analysis, emission factor, and stock adjustment errors showed that the minimum error for CO2 emissions calculated from the fuel consumption data ranged from ±1.3% to ±7.2% with a plant average of ±1.6%. This error might be reduced by 50% if the carbon content of coal delivered to U.S. power plants were reported. Potentially, this study might inform efforts to regulate CO2 emissions (such as CO2 performance standards or taxes) and, more immediately, the U.S. Greenhouse Gas Reporting Rule, where large coal-fired power plants currently use CEMS to measure CO2 emissions. Moreover, if, as suggested here, the flue-gas flow measurement limits the accuracy of CO2 emission tallies from CEMS, then the accuracy of other emission tallies from CEMS (such as SO2, NOx, and Hg) would be similarly affected. Consequently, improved flue-gas flow measurements are needed to increase the reliability of emission measurements from CEMS.

  8. The Significance of the Record Length in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Senarath, S. U.

    2013-12-01

    Of all of the potential natural hazards, flood is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by using an appropriate probability density function that fits the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error. This is especially the case with high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time-series. These errors can then be used to better evaluate the return period flows in data limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.
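
    To make the mechanics concrete, the sketch below fits a log-Pearson III distribution to an annual-maximum flow series and returns the flow for a chosen return period, then shows how the estimate shifts as the record is shortened. It is a minimal maximum-likelihood sketch on synthetic data, not the study's exact procedure (operational guidance such as Bulletin 17B typically uses method-of-moments with a regional skew); the series and record lengths are illustrative.

    ```python
    import numpy as np
    from scipy.stats import pearson3

    def t_year_flow(annual_max, return_period):
        """Fit log-Pearson III to an annual-maximum flow series and return
        the flow estimate for the given return period (years)."""
        logq = np.log10(np.asarray(annual_max, dtype=float))
        skew, loc, scale = pearson3.fit(logq)
        return 10 ** pearson3.ppf(1 - 1 / return_period, skew, loc=loc, scale=scale)

    rng = np.random.default_rng(1)
    series = 10 ** rng.normal(2.0, 0.25, 80)     # hypothetical 80-year record
    for n in (20, 40, 80):                       # shorter record -> larger error
        print(n, round(t_year_flow(series[:n], 100), 1))
    ```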

  9. Very Early Administration of Progesterone Does Not Improve Neuropsychological Outcomes in Subjects with Moderate to Severe Traumatic Brain Injury.

    PubMed

    Goldstein, Felicia C; Caveney, Angela F; Hertzberg, Vicki S; Silbergleit, Robert; Yeatts, Sharon D; Palesch, Yuko Y; Levin, Harvey S; Wright, David W

    2017-01-01

    A Phase III, double-blind, placebo-controlled trial (ProTECT III) found that administration of progesterone did not reduce mortality or improve functional outcome as measured by the Glasgow Outcome Scale Extended (GOSE) in subjects with moderate to severe traumatic brain injury. We conducted a secondary analysis of neuropsychological outcomes to evaluate whether progesterone is associated with improved recovery of cognitive and motor functioning. ProTECT III was conducted at 49 level I trauma centers in the United States. Adults with moderate to severe TBI were randomized to receive intravenous progesterone or placebo within 4 h of injury for a total of 4 days. At 6 months, subjects underwent evaluation of memory, attention, executive functioning, language, and fine motor coordination/dexterity. Chi-square analysis revealed no significant difference in the proportion of subjects (263/280 progesterone, 283/295 placebo) with Galveston Orientation and Amnesia Test scores ≥75. Analyses of covariance did not reveal significant treatment effects for memory (Buschke immediate recall, p = 0.53; delayed recall, p = 0.94), attention (Trails A speed, p = 0.81 and errors, p = 0.22; Digit Span Forward length, p = 0.66), executive functioning (Trails B speed, p = 0.97 and errors, p = 0.93; Digit Span Backward length, p = 0.60), language (timed phonemic fluency, p = 0.05), and fine motor coordination/dexterity (Grooved Pegboard dominant hand time, p = 0.75 and peg drops, p = 0.59; nondominant hand time, p = 0.74 and peg drops, p = 0.61). Pearson Product Moment Correlations demonstrated significant (p < 0.001) associations between better neuropsychological performance and higher GOSE scores. Similar to the ProTECT III trial's results of the primary outcome, the secondary outcomes do not provide evidence of a neuroprotective effect of progesterone.

  10. Modified Balance Error Scoring System (M-BESS) test scores in athletes wearing protective equipment and cleats.

    PubMed

    Azad, Aftab Mohammad; Al Juma, Saad; Bhatti, Junaid Ahmad; Delaney, J Scott

    2016-01-01

    Balance testing is an important part of the initial concussion assessment. There is no research on the differences in Modified Balance Error Scoring System (M-BESS) scores when testing takes place under real-world conditions rather than control conditions. To assess the difference in M-BESS scores in athletes wearing their protective equipment and cleats on different surfaces as compared to control conditions. This cross-sectional study examined university North American football and soccer athletes. Three observers independently rated athletes performing the M-BESS test in three different conditions: (1) wearing shorts and T-shirt in bare feet on firm surface (control); (2) wearing athletic equipment with cleats on FieldTurf; and (3) wearing athletic equipment with cleats on firm surface. Mean M-BESS scores were compared between conditions. 60 participants were recruited: 39 from football (all males) and 21 from soccer (11 males and 10 females). Average age was 21.1 years (SD=1.8). Mean M-BESS scores were significantly lower (p<0.001) for cleats on FieldTurf (mean=26.3; SD=2.0) and for cleats on firm surface (mean=26.6; SD=2.1) as compared to the control condition (mean=28.4; SD=1.5). Females had lower scores than males for cleats on FieldTurf condition (24.9 (SD=1.9) vs 27.3 (SD=1.6), p=0.005). Players who had taping or bracing on their ankles/feet had lower scores when tested with cleats on firm surface condition (24.6 (SD=1.7) vs 26.9 (SD=2.0), p=0.002). Total M-BESS scores for athletes wearing protective equipment and cleats standing on FieldTurf or a firm surface are around two points lower than M-BESS scores performed on the same athletes under control conditions.

  11. Modified Balance Error Scoring System (M-BESS) test scores in athletes wearing protective equipment and cleats

    PubMed Central

    Azad, Aftab Mohammad; Al Juma, Saad; Bhatti, Junaid Ahmad; Delaney, J Scott

    2016-01-01

    Background Balance testing is an important part of the initial concussion assessment. There is no research on the differences in Modified Balance Error Scoring System (M-BESS) scores when testing takes place under real-world conditions rather than control conditions. Objective To assess the difference in M-BESS scores in athletes wearing their protective equipment and cleats on different surfaces as compared to control conditions. Methods This cross-sectional study examined university North American football and soccer athletes. Three observers independently rated athletes performing the M-BESS test in three different conditions: (1) wearing shorts and T-shirt in bare feet on firm surface (control); (2) wearing athletic equipment with cleats on FieldTurf; and (3) wearing athletic equipment with cleats on firm surface. Mean M-BESS scores were compared between conditions. Results 60 participants were recruited: 39 from football (all males) and 21 from soccer (11 males and 10 females). Average age was 21.1 years (SD=1.8). Mean M-BESS scores were significantly lower (p<0.001) for cleats on FieldTurf (mean=26.3; SD=2.0) and for cleats on firm surface (mean=26.6; SD=2.1) as compared to the control condition (mean=28.4; SD=1.5). Females had lower scores than males for cleats on FieldTurf condition (24.9 (SD=1.9) vs 27.3 (SD=1.6), p=0.005). Players who had taping or bracing on their ankles/feet had lower scores when tested with cleats on firm surface condition (24.6 (SD=1.7) vs 26.9 (SD=2.0), p=0.002). Conclusions Total M-BESS scores for athletes wearing protective equipment and cleats standing on FieldTurf or a firm surface are around two points lower than M-BESS scores performed on the same athletes under control conditions. PMID:27900181

  12. Short-term and long-term effects of GDP on traffic deaths in 18 OECD countries, 1960-2011.

    PubMed

    Dadgar, Iman; Norström, Thor

    2017-02-01

    Research suggests that increases in gross domestic product (GDP) lead to increases in traffic deaths plausibly due to the increased road traffic induced by an expanding economy. However, there also seems to exist a long-term effect of economic growth that is manifested in improved traffic safety and reduced rates of traffic deaths. Previous studies focus on either the short-term, procyclical effect, or the long-term, protective effect. The aim of the present study is to estimate the short-term and long-term effects jointly in order to assess the net impact of GDP on traffic mortality. We extracted traffic death rates for the period 1960-2011 from the WHO Mortality Database for 18 OECD countries. Data on GDP/capita were obtained from the Maddison Project. We performed error correction modelling to estimate the short-term and long-term effects of GDP on the traffic death rates. The estimates from the error correction modelling for the entire study period suggested that a one-unit increase (US$1000) in GDP/capita yields an instantaneous short-term increase in the traffic death rate by 0.58 (p<0.001), and a long-term decrease equal to -1.59 (p<0.001). However, period-specific analyses revealed a structural break implying that the procyclical effect outweighs the protective effect in the period prior to 1976, whereas the reverse is true for the period 1976-2011. An increase in GDP leads to an immediate increase in traffic deaths. However, after the mid-1970s this short-term effect is more than outweighed by a markedly stronger protective long-term effect, whereas the reverse is true for the period before the mid-1970s. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
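
    The modelling idea can be illustrated compactly. Below is a minimal two-step Engle-Granger error correction sketch, assuming simple single-country series: the levels regression captures the long-term (protective) relation, and the differenced regression yields the short-term (procyclical) effect plus an adjustment speed toward the long-run path. The study's panel estimation over 18 countries with a structural break is considerably richer than this, and the toy data are invented.

    ```python
    import numpy as np

    def error_correction_model(deaths, gdp):
        """Two-step Engle-Granger ECM: a long-run relation in levels, then a
        short-run relation in first differences with an error-correction term."""
        y = np.asarray(deaths, dtype=float)
        x = np.asarray(gdp, dtype=float)
        # Step 1: long-run relation y_t = a + b_long * x_t + u_t
        b_long, a = np.polyfit(x, y, 1)
        u = y - (a + b_long * x)
        # Step 2: dy_t = c + b_short * dx_t + alpha * u_{t-1} + e_t
        X = np.column_stack([np.ones(len(y) - 1), np.diff(x), u[:-1]])
        coef, *_ = np.linalg.lstsq(X, np.diff(y), rcond=None)
        return {"long_run": b_long, "short_run": coef[1], "adjustment": coef[2]}

    # Toy series with a positive short-run and negative long-run GDP effect.
    rng = np.random.default_rng(4)
    gdp = 10 + 0.3 * np.arange(52) + rng.normal(0, 0.2, 52)
    deaths = 25 - 1.5 * gdp + 0.6 * np.concatenate([[0], np.diff(gdp)]) \
             + rng.normal(0, 0.1, 52)
    print(error_correction_model(deaths, gdp))
    ```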

  13. Very Early Administration of Progesterone Does Not Improve Neuropsychological Outcomes in Subjects with Moderate to Severe Traumatic Brain Injury

    PubMed Central

    Caveney, Angela F.; Hertzberg, Vicki S; Silbergleit, Robert; Yeatts, Sharon D.; Palesch, Yuko Y.; Levin, Harvey S.; Wright, David W.

    2017-01-01

    Abstract A Phase III, double-blind, placebo-controlled trial (ProTECT III) found that administration of progesterone did not reduce mortality or improve functional outcome as measured by the Glasgow Outcome Scale Extended (GOSE) in subjects with moderate to severe traumatic brain injury. We conducted a secondary analysis of neuropsychological outcomes to evaluate whether progesterone is associated with improved recovery of cognitive and motor functioning. ProTECT III was conducted at 49 level I trauma centers in the United States. Adults with moderate to severe TBI were randomized to receive intravenous progesterone or placebo within 4 h of injury for a total of 4 days. At 6 months, subjects underwent evaluation of memory, attention, executive functioning, language, and fine motor coordination/dexterity. Chi-square analysis revealed no significant difference in the proportion of subjects (263/280 progesterone, 283/295 placebo) with Galveston Orientation and Amnesia Test scores ≥75. Analyses of covariance did not reveal significant treatment effects for memory (Buschke immediate recall, p = 0.53; delayed recall, p = 0.94), attention (Trails A speed, p = 0.81 and errors, p = 0.22; Digit Span Forward length, p = 0.66), executive functioning (Trails B speed, p = 0.97 and errors, p = 0.93; Digit Span Backward length, p = 0.60), language (timed phonemic fluency, p = 0.05), and fine motor coordination/dexterity (Grooved Pegboard dominant hand time, p = 0.75 and peg drops, p = 0.59; nondominant hand time, p = 0.74 and peg drops, p = 0.61). Pearson Product Moment Correlations demonstrated significant (p < 0.001) associations between better neuropsychological performance and higher GOSE scores. Similar to the ProTECT III trial's results of the primary outcome, the secondary outcomes do not provide evidence of a neuroprotective effect of progesterone. PMID:26973025

  14. What are we protecting? Fisher behavior and the unintended consequences of spatial closures as a fishery management tool.

    PubMed

    Abbott, Joshua K; Haynie, Alan C

    2012-04-01

    Spatial closures like marine protected areas (MPAs) are prominent tools for ecosystem-based management in fisheries. However, the adaptive behavior of fishermen, the apex predator in the ecosystem, to MPAs may upset the balance of fishing impacts across species. While ecosystem-based management (EBM) emphasizes the protection of all species in the environment, the weakest stock often dominates management attention. We use data before and after the implementation of large spatial closures in a North Pacific trawl fishery to show how closures designed for red king crab protection spurred dramatic increases in Pacific halibut bycatch due to both direct displacement effects and indirect effects from adaptations in fishermen's targeting behavior. We identify aspects of the ecological and economic context of the fishery that contributed to these surprising behaviors, noting that many multispecies fisheries are likely to share these features. Our results highlight the need either to anticipate the behavioral adaptations of fishermen across multiple species in reserve design, a form of implementation error, or to design management systems that are robust to these adaptations. Failure to do so may yield patterns of fishing effort and mortality that undermine the broader objectives of multispecies management and potentially alter ecosystems in profound ways.

  15. Noise producing toys and the efficacy of product standard criteria to protect health and education outcomes.

    PubMed

    McLaren, Stuart J; Page, Wyatt H; Parker, Lou; Rushton, Martin

    2013-12-19

    An evaluation of 28 commercially available toys imported into New Zealand revealed that 21% of these toys do not meet the acoustic criteria in the ISO standard, ISO 8124-1:2009 Safety of Toys, adopted by Australia and New Zealand as AS/NZS ISO 8124.1:2010. While overall the 2010 standard provided a greater level of protection than the earlier 2002 standard, there was one high risk toy category where the 2002 standard provided greater protection. A secondary set of toys from the personal collections of children known to display atypical methods of play with toys, such as those with autism spectrum disorders (ASD), was part of the evaluation. Only one of these toys cleanly passed the 2010 standard, with the remainder failing or showing a marginal-pass. As there is no tolerance level stated in the standards to account for interpretation of data and experimental error, a value of +2 dB was used. The findings of the study indicate that the current standard is inadequate in providing protection against excessive noise exposure. Amendments to the criteria have been recommended that apply to the recently adopted 2013 standard. These include the integration of the new approaches published in the recently amended European standard (EN 71) on safety of toys.

  16. Noise Producing Toys and the Efficacy of Product Standard Criteria to Protect Health and Education Outcomes

    PubMed Central

    McLaren, Stuart J.; Page, Wyatt H.; Parker, Lou; Rushton, Martin

    2013-01-01

    An evaluation of 28 commercially available toys imported into New Zealand revealed that 21% of these toys do not meet the acoustic criteria in the ISO standard, ISO 8124-1:2009 Safety of Toys, adopted by Australia and New Zealand as AS/NZS ISO 8124.1:2010. While overall the 2010 standard provided a greater level of protection than the earlier 2002 standard, there was one high risk toy category where the 2002 standard provided greater protection. A secondary set of toys from the personal collections of children known to display atypical methods of play with toys, such as those with autism spectrum disorders (ASD), was part of the evaluation. Only one of these toys cleanly passed the 2010 standard, with the remainder failing or showing a marginal-pass. As there is no tolerance level stated in the standards to account for interpretation of data and experimental error, a value of +2 dB was used. The findings of the study indicate that the current standard is inadequate in providing protection against excessive noise exposure. Amendments to the criteria have been recommended that apply to the recently adopted 2013 standard. These include the integration of the new approaches published in the recently amended European standard (EN 71) on safety of toys. PMID:24452254

  17. Re-designing the PhEDEx Security Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, C.-H.; Wildish, T.; Zhang, X.

    2014-01-01

    PhEDEx, the data-placement tool used by the CMS experiment at the LHC, was conceived in a more trusting time. The security model provided a safe environment for site agents and operators, but offered little more protection than that. Data was not sufficiently protected against loss caused by operator error or software bugs or by deliberate manipulation of the database. Operators were given high levels of access to the database, beyond what was actually needed to accomplish their tasks. This exposed them to the risk of suspicion should an incident occur. Multiple implementations of the security model led to difficulties maintaining code, which can lead to degradation of security over time. In order to meet the simultaneous goals of protecting CMS data, protecting the operators from undue exposure to risk, increasing monitoring capabilities and improving maintainability of the security model, the PhEDEx security model was redesigned and re-implemented. Security was moved from the application layer into the database itself, fine-grained access roles were established, and tools and procedures created to control the evolution of the security model over time. In this paper we describe this work, we describe the deployment of the new security model, and we show how these enhancements improve security on several fronts simultaneously.

  18. Continuing dental education in radiation protection: monitoring the outcomes.

    PubMed

    Absi, Eg; Drage, Na; Thomas, Hs; Newcombe, Rg; Nash, Es

    2009-03-01

    To evaluate an evolving radiation protection dental postgraduate course run in Wales between 2003 and 2007. We compared three standardized course series. Course content was enhanced in 2006 to target areas of weakness. In 2007, a single best answer multiple choice questionnaire instrument superseded a true/false format. Practitioners' performance was studied pre- and immediately post-training. 900 participants completed identical pre- and post-course validated multiple choice questionnaires. 809 (90%) paired morning-afternoon records, including those of 52 dental care professionals (DCPs), were analysed. Mean (standard error) pre- and post-course percentage scores for the three courses were 33.8 (0.9), 35.4 (1.4), 34.6 (1.0) and 63.6 (0.9), 59.0 (1.4), 69.5 (0.9). Pre-training, only 2.4%, 3.1% and 4.9% of participants achieved the pass mark compared to 57.7%, 48.4% and 65.9% post-training, indicating a rather greater pass rate and gain in the most recent series than earlier ones. In recent series, older more experienced candidates scored slightly higher; however, their gain from pre- to post-training was slightly less. Baseline levels of radiation protection knowledge remained very low but attending an approved course improved this considerably. Targeting areas of weaknesses produced higher scores. Current radiation protection courses may not be optimal for DCPs.

  19. Re-designing the PhEDEx Security Model

    NASA Astrophysics Data System (ADS)

    Huang, C.-H.; Wildish, T.; Zhang, X.

    2014-06-01

    PhEDEx, the data-placement tool used by the CMS experiment at the LHC, was conceived in a more trusting time. The security model provided a safe environment for site agents and operators, but offered little more protection than that. Data was not sufficiently protected against loss caused by operator error or software bugs or by deliberate manipulation of the database. Operators were given high levels of access to the database, beyond what was actually needed to accomplish their tasks. This exposed them to the risk of suspicion should an incident occur. Multiple implementations of the security model led to difficulties maintaining code, which can lead to degradation of security over time. In order to meet the simultaneous goals of protecting CMS data, protecting the operators from undue exposure to risk, increasing monitoring capabilities and improving maintainability of the security model, the PhEDEx security model was redesigned and re-implemented. Security was moved from the application layer into the database itself, fine-grained access roles were established, and tools and procedures created to control the evolution of the security model over time. In this paper we describe this work, we describe the deployment of the new security model, and we show how these enhancements improve security on several fronts simultaneously.

  20. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating in scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filter and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
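
    One of the dissertation's datapath ideas can be sketched in a few lines. The code below illustrates a reduced-precision sum of absolute differences in which each absolute difference is saturated to a few bits, one plausible reading of the truncation scheme described above: because most absolute differences are small, the saturation rarely changes which candidate block wins motion estimation, while large differences, which belong to blocks that would not be selected anyway, are clipped. The bit width and block size are illustrative, and this is an interpretation rather than the author's exact hardware scheme.

    ```python
    import numpy as np

    def truncated_sad(block_a, block_b, keep_bits=4):
        """SAD with each absolute difference saturated to `keep_bits` bits,
        modelling a reduced-precision motion-estimation datapath."""
        ad = np.abs(block_a.astype(np.int16) - block_b.astype(np.int16))
        return int(np.minimum(ad, (1 << keep_bits) - 1).sum())

    rng = np.random.default_rng(2)
    ref = rng.integers(0, 256, (8, 8))
    cand = np.clip(ref + rng.integers(-3, 4, (8, 8)), 0, 255)  # close match
    full = int(np.abs(ref - cand).sum())
    print(truncated_sad(ref, cand), full)  # nearly identical for good matches
    ```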

  1. Automation of workplace lifting hazard assessment for musculoskeletal injury prevention.

    PubMed

    Spector, June T; Lieblich, Max; Bao, Stephen; McQuade, Kevin; Hughes, Margaret

    2014-01-01

    Existing methods for practically evaluating musculoskeletal exposures such as posture and repetition in workplace settings have limitations. We aimed to automate the estimation of parameters in the revised United States National Institute for Occupational Safety and Health (NIOSH) lifting equation, a standard manual observational tool used to evaluate back injury risk related to lifting in workplace settings, using depth camera (Microsoft Kinect) and skeleton algorithm technology. A large dataset (approximately 22,000 frames, derived from six subjects) of simultaneous lifting and other motions recorded in a laboratory setting using the Kinect (Microsoft Corporation, Redmond, Washington, United States) and a standard optical motion capture system (Qualysis, Qualysis Motion Capture Systems, Qualysis AB, Sweden) was assembled. Error-correction regression models were developed to improve the accuracy of NIOSH lifting equation parameters estimated from the Kinect skeleton. Kinect-Qualysis errors were modelled using gradient boosted regression trees with a Huber loss function. Models were trained on data from all but one subject and tested on the excluded subject. Finally, models were tested on three lifting trials performed by subjects not involved in the generation of the model-building dataset. Error-correction appears to produce estimates for NIOSH lifting equation parameters that are more accurate than those derived from the Microsoft Kinect algorithm alone. Our error-correction models substantially decreased the variance of parameter errors. In general, the Kinect underestimated parameters, and modelling reduced this bias, particularly for more biased estimates. Use of the raw Kinect skeleton model tended to result in falsely high safe recommended weight limits of loads, whereas error-corrected models gave more conservative, protective estimates. Our results suggest that it may be possible to produce reasonable estimates of posture and temporal elements of tasks such as task frequency in an automated fashion, although these findings should be confirmed in a larger study. Further work is needed to incorporate force assessments and address workplace feasibility challenges. We anticipate that this approach could ultimately be used to perform large-scale musculoskeletal exposure assessment not only for research but also to provide real-time feedback to workers and employers during work method improvement activities and employee training.
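
    The error-correction step maps directly onto standard tooling. The sketch below, using scikit-learn, fits gradient boosted regression trees with a Huber loss and evaluates them leave-one-subject-out, mirroring the training scheme described above; the array names and hyperparameters are placeholders, not values from the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import LeaveOneGroupOut

    def fit_error_correction(kinect_params, reference_params, subject_ids):
        """Learn a correction from Kinect-derived lifting-equation parameters
        to the motion-capture reference, tested leave-one-subject-out."""
        residuals = []
        splitter = LeaveOneGroupOut()
        for train, test in splitter.split(kinect_params, reference_params, subject_ids):
            model = GradientBoostingRegressor(loss="huber", n_estimators=300,
                                              learning_rate=0.05, max_depth=3)
            model.fit(kinect_params[train], reference_params[train])
            residuals.append(reference_params[test] - model.predict(kinect_params[test]))
        return np.concatenate(residuals)  # held-out errors after correction
    ```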

  2. Flexible and inflexible response components: a Stroop study with typewritten output.

    PubMed

    Damian, Markus F; Freeman, Norman H

    2008-05-01

    Two experiments were directed at investigating the relationship between response selection and execution in typewriting, and specifically the extent to which concurrent processing takes place. In a Stroop paradigm adapted from [Logan, G. D., & Zbrodoff, N. J. (1998). Stroop-type interference: Congruity effects in colour naming with typewritten responses. Journal of Experimental Psychology: Human Perception and Performance, 24, 978-992], participants typed the names of colour patches with incongruent, congruent, or neutral distractors presented at various stimulus-onset asynchronies. Experiment 1 showed Stroop interference and facilitation for initial keystroke latencies and errors, contrasting with response durations (a measure of response execution) being unaffected by Stroop manipulation. Experiment 2 showed that all three measures were responsive to time pressure; again, Stroop effects were confined to latencies and errors only. The observation that response duration is both flexible under time pressure and protected from response competition, may imply either that response execution is structurally segregated from earlier processing stages, or that encapsulation develops during the acquisition of typing skills.

  3. Blind topological measurement-based quantum computation

    PubMed Central

    Morimae, Tomoyuki; Fujii, Keisuke

    2012-01-01

    Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf–Harrington–Goyal scheme. The error threshold of our scheme is 4.3×10−3, which is comparable to that (7.5×10−3) of non-blind topological quantum computation. As the error per gate of the order 10−3 was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach. PMID:22948818

  4. Con Edison power failure of July 13 and 14, 1977. Final staff report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1978-06-01

    On July 13, 1977 the entire electric load of the Con Edison system was lost, plunging New York City and Westchester County into darkness. The collapse resulted from a combination of natural events, equipment malfunctions, questionable system-design features, and operating errors. An attempt is made in this report to answer the following: what were the specific causes of the failure; if equipment malfunctions and operator errors contributed, could they have been prevented; to what extent was Con Edison prepared to handle such an emergency; and did Con Edison plan prudently for reserve generation, for reserve transmission capability, for automatic equipment to protect its system, and for proper operator response to a critical situation. Following the introductory and summary section, additional sections include: the Consolidated Edison system; prevention of bulk power-supply interruptions; the sequence of failure and restoration; analysis of the July 1977 power failure; restoration sequence and equipment damage assessment; and other investigations of the blackout. (MCW)

  5. Current good manufacturing practice in manufacturing, processing, packing, or holding of drugs; revision of certain labeling controls. Final rule.

    PubMed

    2012-03-20

    The Food and Drug Administration (FDA) is amending the packaging and labeling control provisions of the current good manufacturing practice (CGMP) regulations for human and veterinary drug products by limiting the application of special control procedures for the use of cut labeling to immediate container labels, individual unit cartons, or multiunit cartons containing immediate containers that are not packaged in individual unit cartons. FDA is also permitting the use of any automated technique, including differentiation by labeling size and shape, that physically prevents incorrect labeling from being processed by labeling and packaging equipment when cut labeling is used. This action is intended to protect consumers from labeling errors more likely to cause adverse health consequences, while eliminating the regulatory burden of applying the rule to labeling unlikely to reach or adversely affect consumers. This action is also intended to permit manufacturers to use a broader range of error prevention and labeling control techniques than permitted by current CGMPs.

  6. Rotations of a logical qubit using the quantum Zeno effect extended to a manifold

    NASA Astrophysics Data System (ADS)

    Touzard, S.; Grimm, A.; Leghtas, Z.; Mundhada, S. O.; Reinhold, P.; Heeres, R.; Axline, C.; Reagor, M.; Chou, K.; Blumoff, J.; Sliwa, K. M.; Shankar, S.; Frunzio, L.; Schoelkopf, R. J.; Mirrahimi, M.; Devoret, M. H.

    Encoding Quantum Information in the large Hilbert space of a harmonic oscillator has proven to have advantages over encoding in a register of physical qubits, but has also provided new challenges. While recent experiments have demonstrated quantum error correction using such an encoding based on superpositions of coherent states, these codes are still susceptible to non-corrected errors and lack controllability: compared to physical qubits it is hard to make arbitrary states and to perform operations on them. Our approach is to engineer the dynamics and the dissipation of a microwave cavity to implement a continuous dissipative measurement yielding two degenerate outcomes. This extends the quantum Zeno effect to a manifold, which in our case is spanned by two coherent states of opposite phases. In this second talk we present the result and analysis of an experiment that performs rotations on a logical qubit encoded in this protected manifold. Work supported by: ARO, ONR, AFOSR and YINQE.

  7. Rotations of a logical qubit using the quantum Zeno effect extended to a manifold - Part 1

    NASA Astrophysics Data System (ADS)

    Grimm, A.; Touzard, S.; Leghtas, Z.; Mundhada, S. O.; Reinhold, P.; Heeres, R.; Axline, C.; Reagor, M.; Chou, K.; Blumoff, J.; Sliwa, K. M.; Shankar, S.; Frunzio, L.; Schoelkopf, R. J.; Mirrahimi, M.; Devoret, M. H.

    Encoding Quantum Information in the large Hilbert space of a harmonic oscillator has proven to have advantages over encoding in a register of physical qubits, but has also provided new challenges. While recent experiments have demonstrated quantum error correction using such an encoding based on superpositions of coherent states, these codes are still susceptible to non-corrected errors and lack controllability: compared to physical qubits it is hard to make arbitrary states and to perform operations on them. Our approach is to engineer the dynamics and the dissipation of a microwave cavity to implement a continuous dissipative measurement yielding two degenerate outcomes. This extends the quantum Zeno effect to a manifold, which in our case is spanned by two coherent states of opposite phases. In this first talk we present the concept and architecture of an experiment that performs rotations on a logical qubit encoded in this protected manifold. Work supported by: ARO, ONR, AFOSR and YINQE.

  8. Patient safety in external beam radiotherapy, results of the ACCIRAD project: Recommendations for radiotherapy institutions and national authorities on assessing risks and analysing adverse error-events and near misses.

    PubMed

    Malicki, Julian; Bly, Ritva; Bulot, Mireille; Godet, Jean-Luc; Jahnen, Andreas; Krengli, Marco; Maingon, Philippe; Prieto Martin, Carlos; Skrobala, Agnieszka; Valero, Marc; Jarvinen, Hannu

    2018-05-02

    The ACCIRAD project, commissioned by the European Commission (EC) to develop guidelines for risk analysis of accidental and unintended exposures in external beam radiotherapy (EBRT), was completed in the year 2014. In 2015, the "General guidelines on risk management in external beam radiotherapy" were published as EC report Radiation Protection (RP)-181. The present document is the third and final report of the findings from the ACCIRAD project. The main aim of this paper is to describe the key features of the risk management process and to provide general guidelines for radiotherapy departments and national authorities on risk assessment and analysis of adverse error-events and near misses. The recommendations provided here and in EC report RP-181 are aimed at promoting the harmonisation of risk management systems across Europe, improving patient safety, and enabling more reliable inter-country comparisons. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. A Secure and Robust Object-Based Video Authentication System

    NASA Astrophysics Data System (ADS)

    He, Dajun; Sun, Qibin; Tian, Qi

    2004-12-01

    An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI).

  10. 3DHZETRN: Inhomogeneous Geometry Issues

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.

    2017-01-01

    Historical methods for assessing radiation exposure inside complicated geometries for space applications were limited by computational constraints and lack of knowledge associated with nuclear processes occurring over a broad range of particles and energies. Various methods were developed and utilized to simplify geometric representations and enable coupling with simplified but efficient particle transport codes. Recent transport code development efforts, leading to 3DHZETRN, now enable such approximate methods to be carefully assessed to determine if past exposure analyses and validation efforts based on those approximate methods need to be revisited. In this work, historical methods of representing inhomogeneous spacecraft geometry for radiation protection analysis are first reviewed. Two inhomogeneous geometry cases, previously studied with 3DHZETRN and Monte Carlo codes, are considered with various levels of geometric approximation. Fluence, dose, and dose equivalent values are computed in all cases and compared. It is found that although these historical geometry approximations can induce large errors in neutron fluences up to 100 MeV, errors on dose and dose equivalent are modest (<10%) for the cases studied here.

  11. Checking Questionable Entry of Personally Identifiable Information Encrypted by One-Way Hash Transformation

    PubMed Central

    Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David

    2017-01-01

    Background As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. Methods According to the principle of GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly-planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registering system of study subjects. Results There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the previous one and remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The possibility that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can give a hint of questionable PII entry and perform as an effective tool. Conclusions The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. PMID:28213343
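
    A toy version of the underlying mechanism, assuming hypothetical PII fields, a SHA-256 hash, and simple normalisation (none of which are specified in the abstract): hash codes are generated for every combination of fields, only the codes leave the client, and the field shared by all mismatching combinations localises a questionable entry.

    ```python
    import hashlib
    from itertools import combinations

    PII_FIELDS = ("first_name", "last_name", "dob", "ssn_last4")  # hypothetical

    def hash_codes(record):
        """One-way hash codes for every combination of PII fields; only
        these codes (never the PII itself) go to the GUID server."""
        codes = {}
        for r in range(2, len(PII_FIELDS) + 1):
            for combo in combinations(PII_FIELDS, r):
                payload = "|".join(record[f].strip().lower() for f in combo)
                codes[combo] = hashlib.sha256(payload.encode()).hexdigest()
        return codes

    a = {"first_name": "Ada", "last_name": "Lovelace",
         "dob": "1815-12-10", "ssn_last4": "1234"}
    b = dict(a, last_name="Lovelance")            # simulated data-entry typo
    ha, hb = hash_codes(a), hash_codes(b)
    mismatching = [set(combo) for combo in ha if ha[combo] != hb[combo]]
    print(set.intersection(*mismatching))         # -> {'last_name'}
    ```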

  12. Checking Questionable Entry of Personally Identifiable Information Encrypted by One-Way Hash Transformation.

    PubMed

    Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David; Yang, Rong

    2017-02-17

    As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. According to the principle of GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly-planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registering system of study subjects. There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the previous one and remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The possibility that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can give a hint of questionable PII entry and perform as an effective tool. The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. ©Xianlai Chen, Yang C Fann, Matthew McAuliffe, David Vismer, Rong Yang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 17.02.2017.

  13. Activation and Protection of Dendritic Cells in the Prostate Cancer Environment

    DTIC Science & Technology

    2010-10-01

    median survival time (in days) ± standard error of the mean (SEM). Skin graft survived for 11.0±0.7 days in control group and 15.8±1.1 days in the...potentiates allogeneic skin graft rejection and induces syngeneic graft rejection. Transplantation. 1998;65:1436-1446. 6. Jemal A, Siegel R, Xu J, Ward E...injections of ETA receptor inhibitor BQ-123 (for 10 days). Skin graft is soft and viable. 22 P.I.: Georgi Guruli; Award # W81XWH-05-1-0181

  14. Mathematical analysis of deception.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, Deanna Tamae Koike; Durgin, Nancy Ann

    This report describes the results of a three year research project about the use of deception in information protection. The work involved a collaboration between Sandia employees and students in the Center for Cyber Defenders (CCD) and at the University of California at Davis. This report includes a review of the history of deception, a discussion of some cognitive issues, an overview of previous work in deception, the results of experiments on the effects of deception on an attacker, and a mathematical model of error types associated with deception in computer systems.

  15. Solar concentrator advanced development project

    NASA Technical Reports Server (NTRS)

    Corrigan, Robert D.; Ehresman, Derik T.

    1987-01-01

    A solar dynamic concentrator design developed for use with a solar-thermodynamic power generation module intended for the Space Station is considered. The truss hexagonal panel reflector uses a modular design approach and is flexible in attainable flux profiles and assembly techniques. Preliminary structural, thermal, and optical analysis results are discussed. Accuracy of the surface reflectors should be within 5 mrad rms slope error, resulting in the need for close fabrication tolerances. Significant fabrication issues to be addressed include the facet reflective and protective coating processes and the surface specularity requirements.

  16. Television animation store: Recording pictures on a parallel transfer magnetic disc

    NASA Astrophysics Data System (ADS)

    Durey, A. J.

    1984-12-01

    The recording and replaying of digital video signals using a computer-type magnetic disc-drive as part of an electronic rostrum camera animation system is described. The system was developed to enable picture sequences to be generated directly as television signals, instead of using cine film. The characteristics of the disc-drive are described together with data processing, error protection and signal synchronization systems, which enable digital television YUV component signals, sampled at 12 MHz, 4 MHz and 4 MHz respectively, to be recorded and replayed in real time.

  17. Design of joint source/channel coders

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The need to transmit large amounts of data over a band limited channel has led to the development of various data compression schemes. Many of these schemes function by attempting to remove redundancy from the data stream. An unwanted side effect of this approach is to make the information transfer process more vulnerable to channel noise. Efforts at protecting against errors involve the reinsertion of redundancy and an increase in bandwidth requirements. The papers presented within this document attempt to deal with these problems from a number of different approaches.

  18. On the psychology of confessions: does innocence put innocents at risk?

    PubMed

    Kassin, Saul M

    2005-04-01

    The Central Park jogger case and other recent exonerations highlight the problem of wrongful convictions, 15% to 25% of which have contained confessions in evidence. Recent research suggests that actual innocence does not protect people across a sequence of pivotal decisions: (a) In preinterrogation interviews, investigators commit false-positive errors, presuming innocent suspects guilty; (b) naively believing in the transparency of their innocence, innocent suspects waive their rights; (c) despite or because of their denials, innocent suspects elicit highly confrontational interrogations; (d) certain commonly used techniques lead suspects to confess to crimes they did not commit; and (e) police and others cannot distinguish between uncorroborated true and false confessions. It appears that innocence puts innocents at risk, that consideration should be given to reforming current practices, and that a policy of videotaping interrogations is a necessary means of protection. 2005 APA, all rights reserved

  19. Study of Uncertainties of Predicting Space Shuttle Thermal Environment. [impact of heating rate prediction errors on weight of thermal protection system

    NASA Technical Reports Server (NTRS)

    Fehrman, A. L.; Masek, R. V.

    1972-01-01

    Quantitative estimates of the uncertainty in predicting aerodynamic heating rates for a fully reusable space shuttle system are developed and the impact of these uncertainties on Thermal Protection System (TPS) weight are discussed. The study approach consisted of statistical evaluations of the scatter of heating data on shuttle configurations about state-of-the-art heating prediction methods to define the uncertainty in these heating predictions. The uncertainties were then applied as heating rate increments to the nominal predicted heating rate to define the uncertainty in TPS weight. Separate evaluations were made for the booster and orbiter, for trajectories which included boost through reentry and touchdown. For purposes of analysis, the vehicle configuration is divided into areas in which a given prediction method is expected to apply, and separate uncertainty factors and corresponding uncertainty in TPS weight derived for each area.

  20. Decision making in child protective services: a risky business?

    PubMed

    Camasso, Michael J; Jagannathan, Radha

    2013-09-01

    Child Protective Services (CPS) in the United States has received a torrent of criticism from politicians, the media, child advocate groups, and the general public for a perceived propensity to make decisions that are detrimental to children and families. This perception has resulted in numerous lawsuits and court takeovers of CPS in 35 states, and calls for profound restructuring in other states. A widely prescribed remedy for decision errors and faulty judgments is an improvement of risk assessment strategies that enhance hazard evaluation through an improved understanding of threat potentials and exposure likelihoods. We examine the reliability and validity problems that continue to plague current CPS risk assessment and discuss actions that can be taken in the field, including the use of receiver operating characteristic (ROC) curve technology to improve the predictive validity of risk assessment strategies. © 2012 Society for Risk Analysis.
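
    As an illustration of the ROC approach the authors discuss, the sketch below scores a hypothetical risk instrument against simulated recurrence outcomes: the area under the curve summarises discrimination, and the choice of operating threshold makes the trade-off between missed harm (false negatives) and unwarranted intervention (false positives) explicit. All data here are synthetic.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(3)
    outcomes = rng.integers(0, 2, 500)            # 1 = substantiated recurrence
    risk_scores = rng.normal(0, 1, 500) + 0.9 * outcomes  # synthetic instrument

    auc = roc_auc_score(outcomes, risk_scores)    # overall discrimination
    fpr, tpr, thresholds = roc_curve(outcomes, risk_scores)
    best = thresholds[np.argmax(tpr - fpr)]       # Youden's J operating point
    print(f"AUC = {auc:.2f}; threshold = {best:.2f}")
    ```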

  1. Optimization of the Switch Mechanism in a Circuit Breaker Using MBD Based Simulation

    PubMed Central

    Jang, Jin-Seok; Yoon, Chang-Gyu; Ryu, Chi-Young; Kim, Hyun-Woo; Bae, Byung-Tae; Yoo, Wan-Suk

    2015-01-01

    A circuit breaker is widely used to protect an electric power system from fault currents or system errors; in particular, the opening mechanism in a circuit breaker is important for interrupting current overflow in the electric system. In this paper, a multibody dynamic model of a circuit breaker, including the switch mechanism and the electromagnetic actuator system, was developed. Since the opening mechanism operates sequentially, optimization of the switch mechanism was carried out to improve the current breaking time. In the optimization process, design parameters were selected from the length and shape of each latch, which change the pivot points of the bearings to shorten the breaking time. To validate the optimization results, computational results were compared to physical tests with a high speed camera. The opening time of the optimized mechanism was decreased by 2.3 ms, which was confirmed by experiment. The switch mechanism design process, including the contact-latch system, can be improved by using this approach. PMID:25918740

  2. In modern linacs monitor units should be defined in water at 10 cm depth rather than at dmax.

    PubMed

    Van den Heuvel, Frank; Wu, Qiuwen; Cai, Jing

    2018-05-28

    Thanks to the widely adopted guidelines such as AAPM TG-51 and IAEA TRS-398, linac calibration has become more consistent and accurate around the globe than previously. Modern linac photon beams are often calibrated in water at 10 cm depth, and configured such that 1 monitor unit (MU) corresponds to 1 cGy at the depth of maximum dose (d_max). However, such configuration is not without limitations. Some think it is unnecessarily complex and prone to errors, and believe that defining MU at 10 cm is more appropriate. Others think that a change of MU definition can cause confusion and possibly serious consequences without any real benefit. This is the premise debated in this month's Point/Counterpoint. This article is protected by copyright. All rights reserved.
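
    The arithmetic linking the two calibration conventions follows from the percent depth dose (PDD); a minimal sketch with an illustrative PDD value, not any particular machine's beam data:

    ```python
    # A minimal sketch of the arithmetic linking the two MU conventions.
    # Illustrative value: a 6 MV beam with ~66.7% depth dose at 10 cm
    # (10x10 cm field, SSD setup); real beam data will differ.
    pdd_10 = 0.667

    # Convention A: 1 MU = 1 cGy at d_max  ->  dose at 10 cm per MU:
    dose_at_10cm_per_MU = 1.0 * pdd_10        # ~0.667 cGy/MU

    # Convention B: 1 MU = 1 cGy at 10 cm depth  ->  dose at d_max per MU:
    dose_at_dmax_per_MU = 1.0 / pdd_10        # ~1.5 cGy/MU

    print(f"{dose_at_10cm_per_MU:.3f} cGy/MU at 10 cm under convention A")
    print(f"{dose_at_dmax_per_MU:.3f} cGy/MU at d_max under convention B")
    ```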

  3. Hypoxia as a therapy for mitochondrial disease.

    PubMed

    Jain, Isha H; Zazzeron, Luca; Goli, Rahul; Alexa, Kristen; Schatzman-Bone, Stephanie; Dhillon, Harveen; Goldberger, Olga; Peng, Jun; Shalem, Ophir; Sanjana, Neville E; Zhang, Feng; Goessling, Wolfram; Zapol, Warren M; Mootha, Vamsi K

    2016-04-01

    Defects in the mitochondrial respiratory chain (RC) underlie a spectrum of human conditions, ranging from devastating inborn errors of metabolism to aging. We performed a genome-wide Cas9-mediated screen to identify factors that are protective during RC inhibition. Our results highlight the hypoxia response, an endogenous program evolved to adapt to limited oxygen availability. Genetic or small-molecule activation of the hypoxia response is protective against mitochondrial toxicity in cultured cells and zebrafish models. Chronic hypoxia leads to a marked improvement in survival, body weight, body temperature, behavior, neuropathology, and disease biomarkers in a genetic mouse model of Leigh syndrome, the most common pediatric manifestation of mitochondrial disease. Further preclinical studies are required to assess whether hypoxic exposure can be developed into a safe and effective treatment for human diseases associated with mitochondrial dysfunction. Copyright © 2016, American Association for the Advancement of Science.

  4. H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints

    NASA Astrophysics Data System (ADS)

    Ghandi, M. M.; Barmada, B.; Jones, E. V.; Ghanbari, M.

    2006-12-01

    This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.
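
    The constraint the abstract highlights can be made concrete with a small rate-budget calculation: hierarchical 16-QAM fixes the high-priority/low-priority split at the constellation's bit ratio with no overhead, while prioritised FEC allows any split at the cost of code-rate overhead. All numbers below are illustrative assumptions, not the paper's simulation parameters.

    ```python
    # Contrast of the two protection mechanisms' rate constraints.
    total_rate = 8.0e6  # channel bit rate, bits/s (assumed)

    # Hierarchical 16-QAM: 4 bits/symbol split 2 (high priority) : 2 (low
    # priority) -- the ratio is fixed by the constellation, with no overhead.
    hp_hqam = total_rate * 2 / 4
    lp_hqam = total_rate * 2 / 4

    # Prioritised FEC: any HP/LP split, but code rates < 1 cost throughput.
    hp_code_rate, lp_code_rate = 1 / 2, 3 / 4   # stronger code on the HP layer
    hp_share = 0.3                              # flexible allocation (assumed)
    hp_fec = total_rate * hp_share * hp_code_rate
    lp_fec = total_rate * (1 - hp_share) * lp_code_rate

    print(f"HQAM HP/LP payload: {hp_hqam:.0f} / {lp_hqam:.0f} bits/s")
    print(f"FEC  HP/LP payload: {hp_fec:.0f} / {lp_fec:.0f} bits/s")
    ```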

  5. Faster and more accurate transport procedures for HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.

    2010-12-01

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
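
    The marching idea behind such deterministic transport codes can be illustrated with a one-dimensional straight-ahead sketch: the flux is stepped forward slab by slab, and refining the step size exposes the discretization error the convergence study quantifies. The cross section and source term below are toy values, not HZETRN's physics.

    ```python
    import numpy as np

    # Toy straight-ahead transport: d(phi)/dx = -sigma*phi + q(x), stepped
    # forward in depth x (g/cm^2). Coefficients are illustrative only.
    sigma = 0.05                                  # interaction cross section
    q = lambda x: 1e-3 * np.exp(-x / 50.0)        # toy secondary-production source

    def march(depths):
        phi = 1.0                  # boundary flux at x = 0 (normalized)
        out = [phi]
        for x0, x1 in zip(depths[:-1], depths[1:]):
            h = x1 - x0
            # exponential-integrating step: exact for locally constant sigma, q
            phi = phi * np.exp(-sigma * h) + q(x0) * (1 - np.exp(-sigma * h)) / sigma
            out.append(phi)
        return np.array(out)

    coarse = march(np.linspace(0.0, 100.0, 11))    # 10 g/cm^2 steps
    fine = march(np.linspace(0.0, 100.0, 101))     # 1 g/cm^2 steps
    print(f"discretization change at 100 g/cm^2: {abs(coarse[-1] - fine[-1]):.2e}")
    ```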

  6. Comprehensive replication of the relationship between myopia-related genes and refractive errors in a large Japanese cohort.

    PubMed

    Yoshikawa, Munemitsu; Yamashiro, Kenji; Miyake, Masahiro; Oishi, Maho; Akagi-Kurashige, Yumiko; Kumagai, Kyoko; Nakata, Isao; Nakanishi, Hideo; Oishi, Akio; Gotoh, Norimoto; Yamada, Ryo; Matsuda, Fumihiko; Yoshimura, Nagahisa

    2014-10-21

    We investigated the association between refractive error in a Japanese population and myopia-related genes identified in two recent large-scale genome-wide association studies. Single-nucleotide polymorphisms (SNPs) in 51 genes that were reported by the Consortium for Refractive Error and Myopia and/or the 23andMe database were genotyped in 3712 healthy Japanese volunteers from the Nagahama Study using HumanHap610K Quad, HumanOmni2.5M, and/or HumanExome Arrays. To evaluate the association between refractive error and recently identified myopia-related genes, we used three approaches to perform quantitative trait locus analyses of mean refractive error in both eyes of the participants: per-SNP, gene-based top-SNP, and gene-based all-SNP analyses. Association plots of successfully replicated genes also were investigated. In our per-SNP analysis, eight myopia gene associations were replicated successfully: GJD2, RASGRF1, BICC1, KCNQ5, CD55, CYP26A1, LRRC4C, and B4GALNT2. Seven additional gene associations were replicated in our gene-based analyses: GRIA4, BMP2, QKI, BMP4, SFRP1, SH3GL2, and EHBP1L1. The signal strength of the reported SNPs and their tagging SNPs increased after considering different linkage disequilibrium patterns across ethnicities. Although two previous studies suggested strong associations between PRSS56, LAMA2, TOX, and RDH5 and myopia, we could not replicate these results. Our results confirmed the significance of the myopia-related genes reported previously and suggested that gene-based replication analyses are more effective than per-SNP analyses. Our comparison with two previous studies suggested that BMP3 SNPs cause myopia primarily in Caucasian populations, while they may exhibit protective effects in Asian populations. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
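
    A per-SNP quantitative trait locus test of the kind described reduces to a regression of the phenotype on genotype dosage, one SNP at a time; a minimal sketch on simulated data (not the Nagahama Study cohort):

    ```python
    import numpy as np

    # Simulated stand-in data: genotype dosages (0/1/2 minor alleles) and a
    # refractive-error phenotype with one truly associated SNP.
    rng = np.random.default_rng(0)
    n_subjects, n_snps = 500, 4
    dosage = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)
    refraction = -1.0 - 0.3 * dosage[:, 0] + rng.normal(0, 2, n_subjects)

    for j in range(n_snps):
        # Ordinary least squares of phenotype on intercept + dosage.
        x = np.column_stack([np.ones(n_subjects), dosage[:, j]])
        beta, res, *_ = np.linalg.lstsq(x, refraction, rcond=None)
        sigma2 = res[0] / (n_subjects - 2)
        se = np.sqrt(sigma2 * np.linalg.inv(x.T @ x)[1, 1])
        print(f"SNP {j}: beta = {beta[1]:+.3f} D/allele, t = {beta[1] / se:+.2f}")
    ```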

  7. Faster and more accurate transport procedures for HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, T.C., E-mail: Tony.C.Slaba@nasa.go; Blattnig, S.R., E-mail: Steve.R.Blattnig@nasa.go; Badavi, F.F., E-mail: Francis.F.Badavi@nasa.go

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  8. Estimation of regression laws for ground motion parameters using as case of study the Amatrice earthquake

    NASA Astrophysics Data System (ADS)

    Tiberi, Lara; Costa, Giovanni

    2017-04-01

    The possibility to directly associate damages with ground motion parameters is always a great challenge, in particular for civil protection. Indeed, a ground motion parameter that is estimated in near real time and that can express the damage occurring after an earthquake is fundamental for arranging first assistance after an event. The aim of this work is to contribute to the estimation of the ground motion parameter that best describes the observed intensity, immediately after an event. This can be done by calculating, for each ground motion parameter estimated in near real time, a regression law which correlates that parameter to the observed macroseismic intensity. This estimation is done by collecting high-quality accelerometric data in the near field and filtering them at different frequency steps. The regression laws are calculated using two different techniques: the non-linear least-squares (NLLS) Marquardt-Levenberg algorithm and the orthogonal distance regression (ODR) methodology. The limits of the first methodology are the need for initial values of the parameters a and b (set to 1.0 in this study), and the constraint that the independent variable must be known with greater accuracy than the dependent variable. The second algorithm, in contrast, estimates the errors perpendicular to the fitted line rather than just vertically. The vertical errors are the errors in the 'y' direction only, i.e., in the dependent variable, whereas the perpendicular errors account for errors in both variables, dependent and independent. This also makes it possible to invert the relation directly, so the a and b values can be used to express the ground motion parameters as functions of I. For each law, the standard deviation and the R2 value are estimated in order to test the quality and reliability of the derived relation. The Amatrice earthquake of 24th August 2016 is used as a case study to test the goodness of the calculated regression laws.
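
    Both fitting techniques are available in SciPy; a minimal sketch on synthetic intensity/ground-motion data (the linear form and noise levels are illustrative, with a = b = 1.0 echoing the study's initialization):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy import odr

    # Synthetic toy relation I = a + b*log10(PGA); not the Amatrice records.
    rng = np.random.default_rng(1)
    x_true = np.linspace(-1.0, 1.5, 30)
    log_pga = x_true + rng.normal(0, 0.1, 30)              # noisy independent variable
    intensity = 4.0 + 2.5 * x_true + rng.normal(0, 0.3, 30)

    # 1) Non-linear least squares (vertical errors only), initial values 1.0.
    f = lambda x, a, b: a + b * x
    (a_ls, b_ls), _ = curve_fit(f, log_pga, intensity, p0=[1.0, 1.0])

    # 2) Orthogonal distance regression (errors in both variables).
    model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
    fit = odr.ODR(odr.RealData(log_pga, intensity, sx=0.1, sy=0.3),
                  model, beta0=[1.0, 1.0]).run()
    a_odr, b_odr = fit.beta

    print(f"NLLS: a={a_ls:.2f} b={b_ls:.2f}   ODR: a={a_odr:.2f} b={b_odr:.2f}")
    ```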

  9. Seamless geoids across coastal zones - a comparison of satellite-derived gravity to airborne gravity across the seven continents

    NASA Astrophysics Data System (ADS)

    Forsberg, R.; Olesen, A. V.; Barnes, D.; Ingalls, S. E.; Minter, C. F.; Presicci, M. R.

    2017-12-01

    An accurate coastal geoid model is important for determination of near-shore ocean dynamic topography and currents, as well as for land GPS surveys and global geopotential models. Since many coastal regions across the globe are regions of intense development and coastal protection projects, precise geoid models at cm-level accuracy are essential. The only way to secure cm-geoid accuracies across coastal regions is to acquire more marine gravity data; here airborne gravity is the obvious method of choice due to the uniform accuracy, and the ability to provide a seamless geoid accuracy across the coastline. Current practice for gravity and geoid models, such as EGM2008 and many national projects, is to complement land gravity data with satellite radar altimetry at sea, a procedure which can give large errors in regions close to the coast. To quantify the coastal errors in satellite gravity, we compare results of a large set of recent airborne gravity surveys, acquired across a range of coastal zones globally from polar to equatorial regions, and quantify the errors as a function of distance from the coastline for a number of different global altimetry gravity solutions. We find that the accuracy of satellite altimetry solutions depends very much on the availability of gravity data along the coast-near land regions in the underlying reference fields (e.g., EGM2008), with satellite gravity accuracy in the near-shore zone ranging anywhere from 5 to 20 mGal r.m.s., with occasional large outliers; we also show how these errors may typically propagate into coastal geoid errors of 5-10 cm r.m.s. or more. This highlights the need for airborne (land) gravity surveys to be extended at least 20-30 km offshore, especially for regions of insufficient marine gravity coverage; we give examples of a few such recent surveys and the associated marine geoid impacts.
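
    The error-versus-distance quantification can be sketched as a simple binning of altimetry-minus-airborne differences by distance to the coastline; the simulated error model below (larger near shore) is an assumption for illustration only:

    ```python
    import numpy as np

    # Simulated stand-ins: distances from the coast and gravity differences
    # between an altimetry solution and airborne measurements.
    rng = np.random.default_rng(2)
    dist_km = rng.uniform(0, 100, 2000)
    diff_mgal = rng.normal(0, 2 + 15 * np.exp(-dist_km / 10), 2000)  # toy model

    # Report an r.m.s. difference per 10 km distance bin.
    edges = np.arange(0, 110, 10)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (dist_km >= lo) & (dist_km < hi)
        rms = np.sqrt(np.mean(diff_mgal[sel] ** 2))
        print(f"{lo:3.0f}-{hi:3.0f} km offshore: rms = {rms:5.1f} mGal")
    ```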

  10. Dissertation Defense Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
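
    The three-grid procedure the methodology calls for is essentially Richardson extrapolation plus a grid convergence index (GCI); a minimal sketch with illustrative airflow speeds, not results from the dissertation:

    ```python
    import math

    # Solutions on three systematically refined grids (m/s, illustrative).
    f_fine, f_med, f_coarse = 9.81, 9.90, 10.20
    r = 2.0  # grid refinement ratio

    # Observed order of convergence from the three solutions.
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)

    # Richardson extrapolation to a zero-spacing estimate.
    f_exact = f_fine + (f_fine - f_med) / (r ** p - 1)

    # Fine-grid grid convergence index with safety factor Fs = 1.25.
    gci_fine = 1.25 * abs((f_med - f_fine) / f_fine) / (r ** p - 1)

    print(f"observed order p = {p:.2f}")
    print(f"extrapolated value = {f_exact:.3f} m/s")
    print(f"fine-grid GCI = {100 * gci_fine:.2f}% (error bar on the prediction)")
    ```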

  11. Dissertation Defense: Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.

  12. Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.

    2013-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This proposal describes an approach to validate the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft. The research described here is absolutely cutting edge. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. The proposed research project includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward facing step. To date, the author is the only person to look at the uncertainty in the entire computational domain. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.

  13. A real-time heat strain risk classifier using heart rate and skin temperature.

    PubMed

    Buller, Mark J; Latzka, William A; Yokota, Miyo; Tharion, William J; Moran, Daniel S

    2008-12-01

    Heat injury is a real concern to workers engaged in physically demanding tasks in high heat strain environments. Several real-time physiological monitoring systems exist that can provide indices of heat strain, e.g. physiological strain index (PSI), and provide alerts to medical personnel. However, these systems depend on core temperature measurement using expensive, ingestible thermometer pills. Seeking a better solution, we suggest the use of a model which can identify the probability that individuals are 'at risk' from heat injury using non-invasive measures. The intent is for the system to identify individuals who need monitoring more closely or who should apply heat strain mitigation strategies. We generated a model that can identify 'at risk' (PSI ≥ 7.5) workers from measures of heart rate and chest skin temperature. The model was built using data from six previously published exercise studies in which some subjects wore chemical protective equipment. The model has an overall classification error rate of 10% with one false negative error (2.7%), and outperforms an earlier model and a least squares regression model with classification errors of 21% and 14%, respectively. Additionally, the model allows the classification criteria to be adjusted based on the task and acceptable level of risk. We conclude that the model could be a valuable part of a multi-faceted heat strain management system.
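
    A two-feature risk classifier of this general kind can be sketched with logistic regression; the simulated training data and toy labelling rule below are stand-ins for the six source studies, and the probability threshold plays the role of the adjustable classification criterion the authors describe:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Simulated stand-in data: heart rate and chest skin temperature with a
    # toy 'at risk' labelling rule (not the published studies' data).
    rng = np.random.default_rng(3)
    n = 400
    hr = rng.normal(120, 25, n)     # heart rate, bpm
    tsk = rng.normal(36.0, 1.0, n)  # chest skin temperature, deg C
    risk = ((hr > 140) & (tsk > 36.5)).astype(int)

    X = np.column_stack([hr, tsk])
    clf = LogisticRegression().fit(X, risk)

    # The decision threshold on this probability can be shifted to trade
    # false negatives against false positives for a given task.
    p = clf.predict_proba([[150.0, 37.2]])[0, 1]
    print(f"P(at risk | HR=150, Tsk=37.2) = {p:.2f}")
    ```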

  14. Technical Basis for Evaluating Software-Related Common-Cause Failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muhlheim, Michael David; Wood, Richard

    2016-04-01

    The instrumentation and control (I&C) system architecture at a nuclear power plant (NPP) incorporates protections against common-cause failures (CCFs) through the use of diversity and defense-in-depth. Even for well-established analog-based I&C system designs, the potential for CCFs of multiple systems (or redundancies within a system) constitutes a credible threat to defeating the defense-in-depth provisions within the I&C system architectures. The integration of digital technologies into the I&C systems provides many advantages compared to the aging analog systems with respect to reliability, maintenance, operability, and cost effectiveness. However, maintaining the diversity and defense-in-depth for both the hardware and software within the digital system is challenging. In fact, the introduction of digital technologies may actually increase the potential for CCF vulnerabilities because of the introduction of undetected systematic faults. These systematic faults are defined as a “design fault located in a software component” and at a high level, are predominately the result of (1) errors in the requirement specification, (2) inadequate provisions to account for design limits (e.g., environmental stress), or (3) technical faults incorporated in the internal system (or architectural) design or implementation. Other technology-neutral CCF concerns include hardware design errors, equipment qualification deficiencies, installation or maintenance errors, instrument loop scaling and setpoint mistakes.

  15. Digital hum filtering

    USGS Publications Warehouse

    Knapp, R.W.; Anderson, N.L.

    1994-01-01

    Data may be overprinted by a steady-state cyclical noise (hum). Steady-state indicates that the noise is invariant with time; its attributes, frequency, amplitude, and phase, do not change with time. Hum recorded on seismic data usually is powerline noise and associated higher harmonics; leakage from full-waveform rectified cathodic protection devices that contain the odd higher harmonics of powerline frequencies; or vibrational noise from mechanical devices. The fundamental frequency of powerline hum may be removed during data acquisition with the use of notch filters. Unfortunately, notch filters do not discriminate signal and noise, attenuating both. They also distort adjacent frequencies by phase shifting. Finally, they attenuate only the fundamental mode of the powerline noise; higher harmonics and frequencies other than that of powerlines are not removed. Digital notch filters, applied during processing, have many of the same problems as analog filters applied in the field. The method described here removes hum of a particular frequency. Hum attributes are measured by discrete Fourier analysis, and the hum is canceled from the data by subtraction. Errors are slight and the result of the presence of (random) noise in the window or asynchrony of the hum and data sampling. Error is minimized by increasing window size or by resampling to a finer interval. Errors affect the degree of hum attenuation, not the signal. The residual is steady-state hum of the same frequency. © 1994.
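
    The subtraction method is compact enough to sketch directly: estimate the hum's amplitude and phase from the complex DFT coefficient at the known frequency, reconstruct the sinusoid, and subtract it. The sample rate, hum parameters, and window length below are illustrative:

    ```python
    import numpy as np

    fs = 1000.0                    # sample rate, Hz (assumed)
    f_hum = 60.0                   # powerline fundamental, Hz
    t = np.arange(0, 2.0, 1 / fs)  # 2 s window: an integer number of hum cycles

    signal = np.sin(2 * np.pi * 7 * t)               # stand-in seismic data
    hum = 0.5 * np.cos(2 * np.pi * f_hum * t + 0.8)  # steady-state noise
    x = signal + hum + 0.01 * np.random.default_rng(4).normal(size=t.size)

    # Complex DFT coefficient at exactly f_hum; with whole cycles in the
    # window, spectral leakage is negligible.
    c = np.mean(x * np.exp(-2j * np.pi * f_hum * t))
    amp, phase = 2 * np.abs(c), np.angle(c)

    # Reconstruct and subtract the hum; the residual approaches the random
    # noise floor added above.
    cleaned = x - amp * np.cos(2 * np.pi * f_hum * t + phase)
    print(f"estimated hum: amplitude {amp:.3f}, phase {phase:.3f} rad")
    print(f"residual rms vs clean signal: {np.std(cleaned - signal):.4f}")
    ```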

  16. Ergonomics in the operating room: protecting the surgeon.

    PubMed

    Rosenblatt, Peter L; McKinney, Jessica; Adams, Sonia R

    2013-01-01

    To review elements of an ergonomic operating room environment and describe common ergonomic errors in surgeon posture during laparoscopic and robotic surgery. Descriptive video based on clinical experience and a review of the literature (Canadian Task Force classification III). Community teaching hospital affiliated with a major teaching hospital. Gynecologic surgeons. Demonstration of surgical ergonomic principles and common errors in surgical ergonomics by a physical therapist and surgeon. The physical nature of surgery necessitates awareness of ergonomic principles. The literature has identified ergonomic awareness to be grossly lacking among practicing surgeons, and video has not been documented as a teaching tool for this population. Taking this into account, we created a video that demonstrates proper positioning of monitors and equipment, and incorrect and correct ergonomic positions during surgery. Also presented are 3 common ergonomic errors in surgeon posture: forward head position, improper shoulder elevation, and pelvic girdle asymmetry. Postural reset and motion strategies are demonstrated to help the surgeon learn techniques to counterbalance the sustained and awkward positions common during surgery that lead to muscle fatigue, pain, and degenerative changes. Correct ergonomics is a learned and practiced behavior. We believe that video is a useful way to facilitate improvement in ergonomic behaviors. We suggest that consideration of operating room setup, proper posture, and practice of postural resets are necessary components for a longer, healthier, and pain-free surgical career. Copyright © 2013 AAGL. Published by Elsevier Inc. All rights reserved.

  17. ROSE::FTTransform - A Source-to-Source Translation Framework for Exascale Fault-Tolerance Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lidman, J; Quinlan, D; Liao, C

    2012-03-26

    Exascale computing systems will require sufficient resilience to tolerate numerous types of hardware faults while still assuring correct program execution. Such extreme-scale machines are expected to be dominated by processors driven at lower voltages (near the minimum 0.5 volts for current transistors). At these voltage levels, the rate of transient errors increases dramatically due to the sensitivity to transient and geographically localized voltage drops on parts of the processor chip. To achieve power efficiency, these processors are likely to be streamlined and minimal, and thus they cannot be expected to handle transient errors entirely in hardware. Here we present an open, compiler-based framework to automate the armoring of High Performance Computing (HPC) software to protect it from these types of transient processor errors. We develop an open infrastructure to support research work in this area, and we define tools that, in the future, may provide more complete automated and/or semi-automated solutions to support software resiliency on future exascale architectures. Results demonstrate that our approach is feasible, pragmatic in how it can be separated from the software development process, and reasonably efficient (0% to 30% overhead for the Jacobi iteration on common hardware; and 20%, 40%, 26%, and 2% overhead for a randomly selected subset of benchmarks from the Livermore Loops [1]).
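
    The transformation's effect can be illustrated (in Python rather than the ROSE framework's C/C++ output) as redundant execution with a majority vote, which masks a transient error in any single copy:

    ```python
    # Toy illustration of the redundancy-plus-voting idea; not ROSE output.
    def vote3(a, b, c):
        """Majority vote over three redundant results."""
        if a == b or a == c:
            return a
        if b == c:
            return b
        raise RuntimeError("no two copies agree; uncorrectable error")

    def jacobi_step(u, i):
        # Toy 1-D stencil update of the kind the benchmarks exercise.
        return 0.5 * (u[i - 1] + u[i + 1])

    u = [0.0, 1.0, 4.0, 9.0, 16.0]
    i = 2
    # Execute the computation three times and vote on the result.
    result = vote3(jacobi_step(u, i), jacobi_step(u, i), jacobi_step(u, i))
    print(result)
    ```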

  18. Assessment of Ionospheric Gradient Impacts on Ground-Based Augmentation System (GBAS) Data in Guangdong Province, China

    PubMed Central

    Wang, Zhipeng; Wang, Shujing; Zhu, Yanbo; Xin, Pumin

    2017-01-01

    Ionospheric delay is one of the largest and most variable sources of error for Ground-Based Augmentation System (GBAS) users because ionospheric activity is unpredictable. Under normal conditions, GBAS eliminates ionospheric delays, but during extreme ionospheric storms, GBAS users and GBAS ground facilities may experience different ionospheric delays, leading to considerable differential errors and threatening the safety of users. Therefore, ionospheric monitoring and assessment are important parts of GBAS integrity monitoring. To study the effects of the ionosphere on the GBAS of Guangdong Province, China, GPS data collected from 65 reference stations were processed using the improved “Simple Truth” algorithm. In addition, the ionospheric characteristics of Guangdong Province were calculated and an ionospheric threat model was established. Finally, we evaluated the influence of the standard deviation and maximum ionospheric gradient on GBAS. The results show that, under normal ionospheric conditions, the vertical protection level of GBAS was increased by 0.8 m for the largest over bound σvig (sigma of vertical ionospheric gradient), and in the case of the maximum ionospheric gradient conditions, the differential correction error may reach 5 m. From an airworthiness perspective, when the satellite is at a low elevation, this interference does not cause airworthiness risks, but when the satellite is at a high elevation, this interference can cause airworthiness risks. PMID:29019953
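
    How an inflated ionospheric sigma propagates into the vertical protection level can be sketched with the standard fault-free GBAS form VPL = K_ffmd · sqrt(Σ s_i² σ_i²); the geometry weights, error budget, and separation distance below are illustrative assumptions, not values from the Guangdong data:

    ```python
    import numpy as np

    k_ffmd = 5.762                                   # fault-free missed-detection multiplier
    s_vert = np.array([0.6, -0.4, 1.1, -0.8, 0.5])   # vertical geometry weights (assumed)

    def vpl(sigma_vig_mm_per_km):
        # Per-satellite error budget: ground, airborne, and ionospheric terms
        # (all values illustrative).
        sigma_gnd, sigma_air = 0.4, 0.3                  # metres
        x_air = 10.0                                     # slant separation, km
        sigma_iono = sigma_vig_mm_per_km * 1e-3 * x_air  # metres
        sigma2 = sigma_gnd**2 + sigma_air**2 + sigma_iono**2
        return k_ffmd * np.sqrt(np.sum(s_vert**2) * sigma2)

    print(f"VPL, nominal sigma_vig:  {vpl(4.0):.2f} m")
    print(f"VPL, inflated sigma_vig: {vpl(30.0):.2f} m")
    ```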

  19. Assessment of Ionospheric Gradient Impacts on Ground-Based Augmentation System (GBAS) Data in Guangdong Province, China.

    PubMed

    Wang, Zhipeng; Wang, Shujing; Zhu, Yanbo; Xin, Pumin

    2017-10-11

    Ionospheric delay is one of the largest and most variable sources of error for Ground-Based Augmentation System (GBAS) users because ionospheric activity is unpredictable. Under normal conditions, GBAS eliminates ionospheric delays, but during extreme ionospheric storms, GBAS users and GBAS ground facilities may experience different ionospheric delays, leading to considerable differential errors and threatening the safety of users. Therefore, ionospheric monitoring and assessment are important parts of GBAS integrity monitoring. To study the effects of the ionosphere on the GBAS of Guangdong Province, China, GPS data collected from 65 reference stations were processed using the improved "Simple Truth" algorithm. In addition, the ionospheric characteristics of Guangdong Province were calculated and an ionospheric threat model was established. Finally, we evaluated the influence of the standard deviation and maximum ionospheric gradient on GBAS. The results show that, under normal ionospheric conditions, the vertical protection level of GBAS was increased by 0.8 m for the largest over bound σvig (sigma of vertical ionospheric gradient), and in the case of the maximum ionospheric gradient conditions, the differential correction error may reach 5 m. From an airworthiness perspective, when the satellite is at a low elevation, this interference does not cause airworthiness risks, but when the satellite is at a high elevation, this interference can cause airworthiness risks.

  20. Reciprocally-Benefited Secure Transmission for Spectrum Sensing-Based Cognitive Radio Sensor Networks

    PubMed Central

    Wang, Dawei; Ren, Pinyi; Du, Qinghe; Sun, Li; Wang, Yichen

    2016-01-01

    The rapid proliferation of independently-designed and -deployed wireless sensor networks extremely crowds the wireless spectrum and promotes the emergence of cognitive radio sensor networks (CRSN). In CRSN, the sensor node (SN) can make full use of the unutilized licensed spectrum, and the spectrum efficiency is greatly improved. However, inevitable spectrum sensing errors will adversely interfere with the primary transmission, which may result in primary transmission outage. To compensate the adverse effect of spectrum sensing errors, we propose a reciprocally-benefited secure transmission strategy, in which SN’s interference to the eavesdropper is employed to protect the primary confidential messages while the CRSN is also rewarded with a loose spectrum sensing error probability constraint. Specifically, according to the spectrum sensing results and primary users’ activities, there are four system states in this strategy. For each state, we analyze the primary secrecy rate and the SN’s transmission rate by taking into account the spectrum sensing errors. Then, the SN’s transmit power is optimally allocated for each state so that the average transmission rate of CRSN is maximized under the constraint of the primary maximum permitted secrecy outage probability. In addition, the performance tradeoff between the transmission rate of CRSN and the primary secrecy outage probability is investigated. Moreover, we analyze the primary secrecy rate for the asymptotic scenarios and derive the closed-form expression of the SN’s transmission outage probability. Simulation results show that: (1) the performance of the SN’s average throughput in the proposed strategy outperforms the conventional overlay strategy; (2) both the primary network and CRSN benefit from the proposed strategy. PMID:27897988

  1. Acute anxiety and social inference: An experimental manipulation with 7.5% carbon dioxide inhalation

    PubMed Central

    Button, Katherine S; Karwatowska, Lucy; Kounali, Daphne; Munafò, Marcus R; Attwood, Angela S

    2016-01-01

    Background: Positive self-bias is thought to be protective for mental health. We previously found that the degree of positive bias when learning self-referential social evaluation decreases with increasing social anxiety. It is unclear whether this reduction is driven by differences in state or trait anxiety, as both are elevated in social anxiety; therefore, we examined the effects of state anxiety, induced by the 7.5% carbon dioxide (CO2) inhalation model of generalised anxiety disorder (GAD), on social evaluation learning. Methods: For our study, 48 (24 female) healthy volunteers took two inhalations (medical air and 7.5% CO2, counterbalanced) whilst learning social rules (self-like, self-dislike, other-like and other-dislike) in an instrumental social evaluation learning task. We analysed the outcomes (number of positive responses and errors to criterion) using random-effects Poisson regression. Results: Participants made fewer and more positive responses when breathing 7.5% CO2 in the other-like and other-dislike rules, respectively (gas × condition × rule interaction p = 0.03). Individuals made fewer errors learning self-like than self-dislike, and this positive self-bias was unaffected by CO2. Breathing 7.5% CO2 increased errors, but only in the other-referential rules (gas × condition × rule interaction p = 0.003). Conclusions: Positive self-bias (i.e. fewer errors learning self-like than self-dislike) seemed robust to changes in state anxiety. In contrast, learning other-referential evaluation was impaired as state anxiety increased. This suggested that the previously observed variations in self-bias arise due to trait, rather than state, characteristics. PMID:27380750
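
    The analysis is a Poisson regression of error counts with a gas-by-rule interaction; a minimal sketch with statsmodels on simulated counts follows (a plain Poisson fit, omitting the random subject effect the authors modelled):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated error counts mimicking the design: 48 subjects x 2 gases x
    # 4 rules, with a built-in self-bias and a CO2 effect on 'other' rules.
    rng = np.random.default_rng(5)
    rows = []
    for subj in range(48):
        for gas in ("air", "co2"):
            for rule in ("self_like", "self_dislike", "other_like", "other_dislike"):
                lam = 3.0
                if rule == "self_dislike":
                    lam += 1.5        # positive self-bias: more errors for self-dislike
                if gas == "co2" and rule.startswith("other"):
                    lam += 2.0        # CO2 impairs other-referential learning
                rows.append({"subject": subj, "gas": gas, "rule": rule,
                             "errors": rng.poisson(lam)})
    df = pd.DataFrame(rows)

    fit = smf.poisson("errors ~ gas * rule", data=df).fit(disp=False)
    print(fit.summary().tables[1])
    ```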

  2. Acute anxiety and social inference: An experimental manipulation with 7.5% carbon dioxide inhalation.

    PubMed

    Button, Katherine S; Karwatowska, Lucy; Kounali, Daphne; Munafò, Marcus R; Attwood, Angela S

    2016-10-01

    Positive self-bias is thought to be protective for mental health. We previously found that the degree of positive bias when learning self-referential social evaluation decreases with increasing social anxiety. It is unclear whether this reduction is driven by differences in state or trait anxiety, as both are elevated in social anxiety; therefore, we examined the effects of state anxiety, induced by the 7.5% carbon dioxide (CO2) inhalation model of generalised anxiety disorder (GAD), on social evaluation learning. For our study, 48 (24 female) healthy volunteers took two inhalations (medical air and 7.5% CO2, counterbalanced) whilst learning social rules (self-like, self-dislike, other-like and other-dislike) in an instrumental social evaluation learning task. We analysed the outcomes (number of positive responses and errors to criterion) using random-effects Poisson regression. Participants made fewer and more positive responses when breathing 7.5% CO2 in the other-like and other-dislike rules, respectively (gas × condition × rule interaction p = 0.03). Individuals made fewer errors learning self-like than self-dislike, and this positive self-bias was unaffected by CO2. Breathing 7.5% CO2 increased errors, but only in the other-referential rules (gas × condition × rule interaction p = 0.003). Positive self-bias (i.e. fewer errors learning self-like than self-dislike) seemed robust to changes in state anxiety. In contrast, learning other-referential evaluation was impaired as state anxiety increased. This suggested that the previously observed variations in self-bias arise due to trait, rather than state, characteristics. © The Author(s) 2016.

  3. Performance and structure of single-mode bosonic codes

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang

    2018-03-01

    The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.

  4. Individual Radiological Protection Monitoring of Utrok Atoll Residents Based on Whole Body Counting of Cesium-137 (137Cs) and Plutonium Bioassay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, T; Kehl, S; Brown, T

    2007-06-08

    This report contains individual radiological protection surveillance data developed during 2006 for adult members of a select group of families living on Utrok Atoll. These Group I volunteers all underwent a whole-body count to determine levels of internally deposited cesium-137 (137Cs) and supplied a bioassay sample for analysis of plutonium isotopes. Measurement data were obtained and the results compared with an equivalent set of measurement data for 137Cs and plutonium isotopes from a second group of adult volunteers (Group II) who were long-term residents of Utrok Atoll. For the purposes of this comparison, Group II volunteers were considered representative of the general population on Utrok Atoll. The general aim of the study was to determine residual systemic burdens of fallout radionuclides in each volunteer group, develop data in response to addressing some specific concerns about the preferential uptake and potential health consequences of residual fallout radionuclides in Group I volunteers, and generally provide some perspective on the significance of radiation doses delivered to volunteers (and the general Utrok Atoll resident population) in terms of radiological protection standards and health risks. Based on dose estimates from measurements of internally deposited 137Cs and plutonium isotopes, the data and information developed in this report clearly show that neither volunteer group has acquired levels of internally deposited fallout radionuclides specific to nuclear weapons testing in the Marshall Islands that are likely to have any consequence on human health. Moreover, the dose estimates are well below radiological protection standards as prescribed by U.S. regulators and international agencies, and are very small when compared to doses from natural sources of radiation in the Marshall Islands and the threshold where radiation health effects could be either medically diagnosed in an individual or epidemiologically discerned in a group of people. In general, the results from the whole-body counting measurements of 137Cs are consistent with our knowledge that a key pathway for exposure to residual fallout contamination on Utrok Atoll is low-level chronic uptake of 137Cs from the consumption of locally grown produce (Robison et al., 1999). The error-weighted, average body burden of 137Cs measured in Group I and Group II volunteers was 0.31 kBq and 0.62 kBq, respectively. The associated average, annual committed effective dose equivalent (CEDE) delivered to Group I and Group II volunteers from 137Cs during the year of measurement was 2.1 and 4.0 mrem. For comparative purposes, the annual dose limit for members of the public as recommended by the National Council on Radiation Protection and Measurements (NCRP) and the International Commission on Radiological Protection (ICRP) is 100 mrem. Consequently, specific concerns about elevated levels of 137Cs uptake and higher risks from radiation exposure to Group I volunteers would be considered unfounded. Moreover, the urinary excretion of plutonium-239 (239Pu) from Group I and Group II volunteers is statistically indistinguishable. In this case, the error-weighted, average urinary excretion of 239Pu from Group I volunteers of 0.10 µBq per 24-h void with a range between -0.01 and 0.23 µBq per 24-h void compares with an error-weighted average from Group II volunteers of 0.11 µBq per 24-h void with a range between -0.20 and 0.47 µBq per 24-h void. The range in urinary excretion of 239Pu from Utrok Atoll residents is very similar to that observed for other population groups in the Marshall Islands (Bogen et al., 2006; Hamilton et al., 2006a; 2006b; 2006c, 2007a; 2007b; 2007c) and is generally considered representative of worldwide background.
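
    The error-weighted averages quoted here are inverse-variance weighted means; a minimal sketch with illustrative measurement values, not the report's data:

    ```python
    import numpy as np

    # Illustrative 137Cs body-burden measurements with 1-sigma counting errors.
    values = np.array([0.28, 0.35, 0.30, 0.41])   # kBq
    sigmas = np.array([0.05, 0.08, 0.06, 0.10])   # kBq

    # Inverse-variance weights: precise measurements count for more.
    w = 1.0 / sigmas**2
    mean = np.sum(w * values) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    print(f"error-weighted mean = {mean:.2f} +/- {se:.2f} kBq")
    ```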

  5. Effect of Genetic Variant in BICC1 on Functional and Structural Brain Changes in Depression

    PubMed Central

    Bermingham, Rachel; Carballedo, Angela; Lisiecka, Danuta; Fagan, Andrew; Morris, Derek; Fahey, Ciara; Donohoe, Gary; Meaney, James; Gill, Michael; Frodl, Thomas

    2012-01-01

    Genes and early-life adversity (ELA) interactively increase the risk of developing major depressive disorder (MDD). A recent genome-wide association study suggests that the minor T-allele of single-nucleotide polymorphisms in the bicaudal C homolog 1 gene (BICC1) has a protective role against MDD. The aims of the study were to investigate whether the minor T-allele of BICC1 is protective against hippocampal structural brain changes, whether it is associated with increased functional brain activity in the emotion regulation system, and how ELA would modify this association. Forty-four patients with MDD and 44 healthy controls were investigated using structural magnetic resonance imaging (MRI) and functional MRI with an emotion inhibition task. Analysis of a single-nucleotide polymorphism in the BICC1-1 (rs999845) gene was performed. Right hippocampal bodies of patients and controls without a history of ELA and who carry the protective T-allele of BICC1 were significantly larger compared with those participants homozygous for the major C-allele of BICC1. However, MDD patients with ELA, who carry the T-allele, had smaller hippocampal head volumes compared with MDD patients without ELA. fMRI showed that patients and controls carrying the protective T-allele of BICC1 activate the emotion regulation system significantly more compared with those participants homozygous for the major C-allele (p<0.05, family-wise error corrected). These results suggest that the minor T-allele of BICC1 has a protective role against MDD and its known structural and functional brain changes. However, this protective effect seems to be lost in the case of co-occurrence of ELA. PMID:22910460

  6. Finite Energy and Bounded Attacks on Control System Sensor Signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djouadi, Seddik M; Melin, Alexander M; Ferragut, Erik M

    Control system networks are increasingly being connected to enterprise-level networks. These connections leave critical industrial control systems vulnerable to cyber-attacks. Most of the effort in protecting these cyber-physical systems (CPS) has gone into securing the networks using information security techniques and into addressing protection and reliability concerns at the control system level against random hardware and software failures. However, besides these failures, the inability of information security techniques to protect against all intrusions means that the control system must be resilient to various signal attacks, for which new analysis and detection methods need to be developed. In this paper, sensor signal attacks are analyzed for observer-based controlled systems. The threat surface for sensor signal attacks is subdivided into denial of service, finite energy, and bounded attacks. In particular, the error signals between the states of attack-free systems and systems subject to these attacks are quantified. Optimal sensor and actuator signal attacks for the finite and infinite horizon linear quadratic (LQ) control, in terms of maximizing the corresponding cost functions, are computed. The closed-loop system under optimal signal attacks is provided. Illustrative numerical examples are provided, together with an application to a power network with distributed LQ controllers.
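
    The observer-based setting the abstract analyzes can be written generically as follows; the notation and the Luenberger-observer form are our assumptions for illustration, not necessarily the paper's exact formulation:

    ```latex
    \[
    x_{k+1} = A x_k + B u_k, \qquad y_k = C x_k + a_k \quad (\text{sensor attack } a_k),
    \]
    \[
    \hat{x}_{k+1} = A \hat{x}_k + B u_k + L\,(y_k - C \hat{x}_k), \qquad
    e_k = x_k - \hat{x}_k,
    \]
    \[
    e_{k+1} = (A - LC)\, e_k - L\, a_k,
    \]
    % with finite-energy attacks satisfying $\sum_k \|a_k\|^2 < \infty$ and
    % bounded attacks satisfying $\|a_k\| \le \bar{a}$ for all $k$.
    ```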

  7. Portrayal of tanning, clothing fashion and shade use in Australian women's magazines, 1987-2005.

    PubMed

    Dixon, Helen; Dobbinson, Suzanne; Wakefield, Melanie; Jamsen, Kris; McLeod, Kim

    2008-10-01

    To examine modelling of outcomes relevant to sun protection in Australian women's magazines, content analysis was performed on 538 spring and summer issues of popular women's magazines from 1987 to 2005. A total of 4949 full-colour images of Caucasian females were coded for depth of tan, extent of clothing cover, use of shade and setting. Logistic regression using robust standard errors to adjust for clustering on magazine was used to assess the relationship between these outcomes and year, setting and model's physical characteristics. Most models portrayed outdoors did not wear hats (89%) and were not in shade (87%). Between 1987 and 2005, the proportion of models depicted wearing hats decreased and the proportion of models portrayed with moderate to dark tans declined and then later increased. Younger women were more likely to be portrayed with a darker tan and more of their body exposed. Models with more susceptible phenotypes (paler hair and eye colour) were less likely to be depicted with a darker tan. Darker tans and poor sun-protective behaviour were most common among models depicted at beaches/pools. Implicit messages about sun protection in popular Australian women's magazines contradict public health messages concerning skin cancer prevention.
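
    Logistic regression with standard errors clustered on magazine is directly available in statsmodels; a minimal sketch on simulated stand-in data:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-ins for the coded images: magazine id (cluster), year,
    # beach/pool setting, and a binary dark-tan outcome.
    rng = np.random.default_rng(6)
    n = 1000
    df = pd.DataFrame({
        "magazine": rng.integers(0, 12, n),
        "year": rng.integers(1987, 2006, n),
        "beach": rng.integers(0, 2, n),
    })
    logit_p = -40 + 0.02 * df["year"] + 0.9 * df["beach"]
    df["dark_tan"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    # Cluster-robust standard errors adjust for non-independence of images
    # coded from the same magazine.
    fit = smf.logit("dark_tan ~ year + beach", data=df).fit(
        disp=False, cov_type="cluster", cov_kwds={"groups": df["magazine"]})
    print(fit.summary().tables[1])
    ```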

  8. Cognitive Stimulation of Elderly Residents in Social Protection Centers in Cartagena, 2014.

    PubMed

    Melguizo Herrera, Estela; Bertel De La Hoz, Anyel; Paternina Osorio, Diego; Felfle Fuentes, Yurani; Porto Osorio, Leidy

    To determine the effectiveness of a program of cognitive stimulation of the elderly residents in Social Protection Centers in Cartagena, 2014. Quasi-experimental study with pre and post tests in control and experimental groups. A sample of 37 elderly residents in Social Protection Centers participated: 23 in the experimental group and 14 in the control group. A survey and a mental evaluation test (Pfeiffer) were applied. The experimental group participated in 10 sessions of cognitive stimulation. The paired t-test showed statistically significant differences in the Pfeiffer test, pre versus post intervention, in the experimental group (P=.0005). The unpaired t-test showed statistically significant differences in Pfeiffer test results between the experimental and control groups (P=.0450). The analysis of the main components showed that the most interrelated variables were age, diseases, number of errors, and test results, which were grouped around the disease variable, with a negative association. The intervention demonstrated a statistically significant improvement in cognitive functionality of the elderly. Nursing can lead this type of intervention. It should be studied further to strengthen and clarify these results. Copyright © 2016 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.

  9. Reducing Sun Exposure for Prevention of Skin Cancers: Factorial Invariance and Reliability of the Self-Efficacy Scale for Sun Protection

    PubMed Central

    Babbin, Steven F.; Yin, Hui-Qing; Rossi, Joseph S.; Redding, Colleen A.; Paiva, Andrea L.; Velicer, Wayne F.

    2015-01-01

    The Self-Efficacy Scale for Sun Protection consists of two correlated factors with three items each for Sunscreen Use and Avoidance. This study evaluated two crucial psychometric assumptions, factorial invariance and scale reliability, with a sample of adults (N = 1356) participating in a computer-tailored, population-based intervention study. A measure has factorial invariance when the model is the same across subgroups. Three levels of invariance were tested, from least to most restrictive: (1) Configural Invariance (nonzero factor loadings unconstrained); (2) Pattern Identity Invariance (equal factor loadings); and (3) Strong Factorial Invariance (equal factor loadings and measurement errors). Strong Factorial Invariance was a good fit for the model across seven grouping variables: age, education, ethnicity, gender, race, skin tone, and Stage of Change for Sun Protection. Internal consistency coefficient Alpha and factor rho scale reliability, respectively, were .84 and .86 for Sunscreen Use, .68 and .70 for Avoidance, and .78 and .78 for the global (total) scale. The psychometric evidence demonstrates strong empirical support that the scale is consistent, has internal validity, and can be used to assess population-based adult samples. PMID:26457203
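
    The internal-consistency coefficient alpha reported here has a short closed form; a minimal sketch computing Cronbach's alpha for a simulated three-item subscale:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, k_items) response array."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Simulated 5-point responses to a three-item subscale driven by one
    # latent trait (stand-ins, not the study's N = 1356 sample).
    rng = np.random.default_rng(7)
    latent = rng.normal(size=(300, 1))
    items = np.clip(np.round(3 + latent + rng.normal(0, 0.8, (300, 3))), 1, 5)
    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```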

  10. Real-time core body temperature estimation from heart rate for first responders wearing different levels of personal protective equipment.

    PubMed

    Buller, Mark J; Tharion, William J; Duhamel, Cynthia M; Yokota, Miyo

    2015-01-01

    First responders often wear personal protective equipment (PPE) for protection from on-the-job hazards. While PPE ensembles offer individuals protection, they limit one's ability to thermoregulate, and can place the wearer in danger of heat exhaustion and higher cardiac stress. Automatically monitoring thermal-work strain is one means to manage these risks, but measuring core body temperature (Tc) has proved problematic. An algorithm that estimates Tc from sequential measures of heart rate (HR) was compared to the observed Tc from 27 US soldiers participating in three different chemical/biological training events (45-90 min duration) while wearing PPE. Hotter participants (higher Tc) averaged heart rates (HRs) of 140 bpm and reached Tc around 39 °C. Overall the algorithm had a small bias (0.02 °C) and root mean square error (0.21 °C). Limits of agreement (LoA ± 0.48 °C) were similar to comparisons of Tc measured by oesophageal and rectal probes. The algorithm shows promise for use in real-time monitoring of encapsulated first responders. An algorithm to estimate core temperature (Tc) from non-invasive measures of HR was validated. Three independent studies (n = 27) compared the estimated Tc to the observed Tc in humans participating in chemical/biological hazard training. The algorithm's bias and variance to observed data were similar to that found from comparisons of oesophageal and rectal measurements.
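
    One way to realize such an estimator is a scalar Kalman filter whose state is core temperature and whose observation is heart rate; the linear HR-to-Tc mapping and noise variances below are illustrative assumptions, not the published algorithm's coefficients:

    ```python
    import numpy as np

    def estimate_tc(hr_series, tc0=37.0):
        """Sequential Tc estimate from minute-by-minute HR (sketch only)."""
        tc, p = tc0, 0.0       # state estimate and its variance
        q, r = 0.01, 300.0     # process and observation noise (assumed)
        a, b = 40.0, -1400.0   # assumed HR ~ a*Tc + b around 37-39 deg C
        out = []
        for hr in hr_series:
            p += q                           # time update: Tc as a slow random walk
            k = p * a / (a * p * a + r)      # Kalman gain
            tc += k * (hr - (a * tc + b))    # measurement update from observed HR
            p *= (1 - k * a)
            out.append(tc)
        return np.array(out)

    hr = [95, 110, 125, 138, 142, 145]   # one-minute heart rates, bpm
    print(np.round(estimate_tc(hr), 2))
    ```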

  11. Reactor protection system with automatic self-testing and diagnostic

    DOEpatents

    Gaubatz, Donald C.

    1996-01-01

    A reactor protection system having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross communication of sensor readings allows comparison of four theoretically "identical" values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-testing and diagnostic monitoring, from sensor input through output relay logic, virtually eliminate the need for manual surveillance testing, and give each division the ability to cross-check all divisions and to sense failures of the hardware logic.
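    A minimal sketch of the two-level voting described above, assuming a simple high-trip comparison against a scram setpoint; the setpoint, readings, and failure flags are hypothetical:

    ```python
    def divisional_trip(readings, failed, setpoint):
        """2/3-with-spare vote: ignore sensors flagged by diagnostics and trip
        if at least 2 of the remaining readings exceed the scram setpoint."""
        valid = [r for r, bad in zip(readings, failed) if not bad]
        return sum(r > setpoint for r in valid) >= 2

    def reactor_scram(divisional_votes):
        """Hardware-panel 2/4 vote across the four divisional scram signals."""
        return sum(divisional_votes) >= 2

    # Hypothetical pressure channel; sensor 4 has been flagged as failed.
    readings = [7.1, 7.3, 6.8, 0.0]
    failed = [False, False, False, True]
    # Cross-communication gives every division the same four readings.
    votes = [divisional_trip(readings, failed, setpoint=7.0) for _ in range(4)]
    print(reactor_scram(votes))  # True: two valid sensors exceed the setpoint
    ```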

  12. Reactor protection system with automatic self-testing and diagnostic

    DOEpatents

    Gaubatz, D.C.

    1996-12-17

    A reactor protection system is disclosed having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross communication of sensor readings allows comparison of four theoretically "identical" values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-testing and diagnostic monitoring, from sensor input through output relay logic, virtually eliminate the need for manual surveillance testing, and give each division the ability to cross-check all divisions and to sense failures of the hardware logic. 16 figs.

  13. Biennial-Aligned Lunisolar-Forcing of ENSO: Implications for Simplified Climate Models

    NASA Astrophysics Data System (ADS)

    Pukite, P. R.

    2017-12-01

    By solving Laplace's tidal equations along the equatorial Pacific thermocline, assuming a delayed-differential effective gravity forcing due to a combined lunar+solar (lunisolar) stimulus, we are able to precisely match ENSO periodic variations over wide intervals. The underlying pattern is difficult to decode by conventional means such as spectral analysis, which is why it has remained hidden for so long, despite the excellent agreement in the time-domain. What occurs is that a non-linear seasonal modulation with monthly and fortnightly lunar impulses along with a biennially-aligned "see-saw" is enough to cause a physical aliasing and thus multiple folding in the frequency spectrum. So, instead of a conventional spectral tidal decomposition, we opted for a time-domain cross-validating approach to calibrate the amplitude and phasing of the lunisolar cycles. As the lunar forcing consists of three fundamental periods (draconic, anomalistic, synodic), we used the measured Earth's length-of-day (LOD) decomposed and resolved at a monthly time-scale [1] to align the amplitude and phase precisely. Even slight variations from the known values of the long-period tides will degrade the fit, so a high-resolution calibration is possible. Moreover, a narrow training segment from 1880-1920 using NINO34/SOI data is adequate to extrapolate the cycles of the past 100 years (see attached figure). To further understand the biennial impact of a yearly differential-delay, we were also able to decompose, using difference equations, the historical sea-level-height readings at Sydney Harbor to clearly expose the ENSO behavior. Finally, the ENSO lunisolar model was validated by back-extrapolating to Unified ENSO coral proxy (UEP) records dating to 1650. The quasi-biennial oscillation (QBO) behavior of equatorial stratospheric winds derives following a similar pattern to ENSO via the tidal equations, but with an emphasis on draconic forcing. This improvement in ENSO and QBO understanding has implications for vastly simplifying global climate models due to the straightforward application of a well-known and well-calibrated forcing. [1] Na, Sung-Ho, et al. "Characteristics of Perturbations in Recent Length of Day and Polar Motion." Journal of Astronomy and Space Sciences 30 (2013): 33-41.
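    The "physical aliasing" claim can be illustrated with standard sampling arithmetic: a fast lunar cycle sampled by a once-per-year (seasonal) impulse folds to a slow apparent period. The folding computation below is ordinary aliasing math; the interpretation that these folded periods drive ENSO is the abstract's hypothesis, not established theory.

    ```python
    # Alias period of a lunar cycle sampled by an annual impulse.
    YEAR = 365.2422  # days

    def annual_alias_period(lunar_period_days):
        cycles_per_year = YEAR / lunar_period_days
        frac = abs(cycles_per_year - round(cycles_per_year))  # folded frequency
        return float("inf") if frac == 0 else 1.0 / frac      # years

    for name, p in [("draconic", 27.2122), ("anomalistic", 27.5545),
                    ("synodic", 29.5306)]:
        print(f"{name:11s} {p:7.4f} d -> alias ~ {annual_alias_period(p):.2f} yr")
    # draconic ~2.37 yr, anomalistic ~3.92 yr, synodic ~2.71 yr
    ```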

  14. The Hubble Space Telescope optical systems failure report

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The findings of the Hubble Space Telescope Optical Systems Board of Investigation are reported. The Board was formed to determine the cause of the flaw in the telescope, how it occurred, and why it was not detected before launch. The Board's investigation included interviews with personnel involved in the fabrication and testing of the telescope, review of documentation, and analysis and testing of the equipment used to fabricate the telescope's mirrors. The investigation proved that the primary mirror was made in the wrong shape (a 0.4-wave rms wavefront error at 632.8 nm). The primary mirror was manufactured by the Perkin-Elmer Corporation (Hughes Danbury Optical Systems, Inc.). The critical optics used as a template in shaping the mirror, the reflective null corrector (RNC), consisted of two small mirrors and a lens. This unit had been preserved by the manufacturer exactly as it was during the manufacture of the mirror. When the Board measured the RNC, the lens was incorrectly spaced from the mirrors. Calculations of the effect of such displacement on the primary mirror show that the measured amount, 1.3 mm, accounts in detail for the amount and character of the observed image blurring. No verification of the reflective null corrector's dimensions was carried out by Perkin-Elmer after the original assembly. There were, however, clear indications of the problem from auxiliary optical tests made at the time. A special optical unit called an inverse null corrector, designed to mimic the reflection from a perfect primary mirror, was built and used to align the apparatus; when so used, it clearly showed the error in the reflective null corrector. A second null corrector was used to measure the vertex radius of the finished primary mirror. It, too, clearly showed the error in the primary mirror. Both indicators of error were discounted at the time as being themselves flawed. The Perkin-Elmer plan for fabricating the primary mirror placed complete reliance on the reflective null corrector as the only test to be used in both manufacturing and verifying the mirror's surface with the required precision. This methodology should have alerted NASA management to the fragility of the process and the possibility of gross error. Such errors had been seen in other telescope programs, yet no independent tests were planned, although some simple tests to protect against major error were considered and rejected. During the critical time period, there was great concern about cost and schedule, which further inhibited consideration of independent tests.

  15. Computing with a single qubit faster than the computation quantum speed limit

    NASA Astrophysics Data System (ADS)

    Sinitsyn, Nikolai A.

    2018-02-01

    The possibility to save and process information in fundamentally indistinguishable states is the quantum mechanical resource that is not encountered in classical computing. I demonstrate that, if energy constraints are imposed, this resource can be used to accelerate information-processing without relying on entanglement or any other type of quantum correlations. In fact, there are computational problems that can be solved much faster, in comparison to currently used classical schemes, by saving intermediate information in nonorthogonal states of just a single qubit. There are also error correction strategies that protect such computations.

  16. CFD Script for Rapid TPS Damage Assessment

    NASA Technical Reports Server (NTRS)

    McCloud, Peter

    2013-01-01

    This grid generation script creates unstructured CFD grids for rapid thermal protection system (TPS) damage aeroheating assessments. The existing manual solution is cumbersome, open to errors, and slow. The invention takes a large-scale geometry grid and its large-scale CFD solution, and creates an unstructured patch grid that models the TPS damage. The flow field boundary condition for the patch grid is then interpolated from the large-scale CFD solution. It speeds up the generation of CFD grids and solutions in the modeling of TPS damage and its aeroheating assessment. This process was successfully utilized during STS-134.

  17. The aging-disease false dichotomy: understanding senescence as pathology

    PubMed Central

    Gems, David

    2015-01-01

    From a biological perspective aging (senescence) appears to be a form of complex disease syndrome, though this is not the traditional view. This essay aims to foster a realistic understanding of aging by scrutinizing ideas old and new. The conceptual division between aging-related diseases and an underlying, non-pathological aging process underpins various erroneous traditional ideas about aging. Among biogerontologists, another likely error involves the aspiration to treat the entire aging process, which recent advances suggest is somewhat utopian. It also risks neglecting a more modest but realizable goal: to develop preventative treatments that partially protect against aging. PMID:26136770

  18. No Action Assurance Regarding EPA-Issued Step 2 Prevention of Significant Deterioration Permits and Related Title V Requirements Following Utility Air Regulatory Group v. Environmental Protection Agency

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  19. Response to Petition Seeking the EPA to Protect Wisconsin Families from Air Pollution by Issuing a Notice of Deficiency

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  20. A High-Sensitivity Hydraulic Load Cell for Small Kitchen Appliances

    PubMed Central

    Pačnik, Roman; Novak, Franc

    2010-01-01

    In this paper we present a hydraulic load cell made from hydroformed metallic bellows. The load cell was designed for a small kitchen appliance with the weighing function integrated into the composite control and protection of the appliance. It is a simple, low-cost solution with small dimensions and represents an alternative to the existing hydraulic load cells in industrial use. Low non-linearity and small hysteresis were achieved. The influence of temperature leads to an error of 7.5%, which can be compensated for by software to meet the requirements of the target application. PMID:22163665

  1. Generating higher-order quantum dissipation from lower-order parametric processes

    NASA Astrophysics Data System (ADS)

    Mundhada, S. O.; Grimm, A.; Touzard, S.; Vool, U.; Shankar, S.; Devoret, M. H.; Mirrahimi, M.

    2017-06-01

    The stabilisation of quantum manifolds is at the heart of error-protected quantum information storage and manipulation. Nonlinear driven-dissipative processes achieve such stabilisation in a hardware efficient manner. Josephson circuits with parametric pump drives implement these nonlinear interactions. In this article, we propose a scheme to engineer a four-photon drive and dissipation on a harmonic oscillator by cascading experimentally demonstrated two-photon processes. This would stabilise a four-dimensional degenerate manifold in a superconducting resonator. We analyse the performance of the scheme using numerical simulations of a realisable system with experimentally achievable parameters.
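    Schematically, in Lindblad notation (a hedged summary of the mechanism, not an equation quoted from the article), the experimentally demonstrated two-photon process is cascaded into an effective four-photon dissipator whose steady-state kernel is the four-dimensional manifold spanned by the coherent states |α⟩, |iα⟩, |−α⟩, |−iα⟩:

    ```latex
    % Building block: two-photon driven dissipation
    \dot{\rho} = \kappa_2\,\mathcal{D}\!\left[a^{2}-\alpha^{2}\right]\rho,
    \qquad
    \mathcal{D}[L]\rho \equiv L\rho L^{\dagger}-\tfrac{1}{2}\{L^{\dagger}L,\rho\}.
    % Cascading two such processes yields an effective four-photon dissipator:
    \dot{\rho} = \kappa_4\,\mathcal{D}\!\left[a^{4}-\alpha^{4}\right]\rho .
    ```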

  2. Inconsistency in the Calculation of Volatile Organic Compound (VOC) Emission Rates Using the Results of U.S. Environmental Protection Agency (EPA) Methods 25 and 25A.

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  3. Response to Letter Regarding the Louisiana Department of Environmental Protection's Position on the PSD Significant Emission Level for ODS at Existing Major Sources

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  4. Petition to Object to the Proposed Title V Permit for Sunbury Generation, LP's Power Plant Issued by the Pennsylvania Department of Environmental Protection

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  5. A high-sensitivity hydraulic load cell for small kitchen appliances.

    PubMed

    Pačnik, Roman; Novak, Franc

    2010-01-01

    In this paper we present a hydraulic load cell made from hydroformed metallic bellows. The load cell was designed for a small kitchen appliance with the weighing function integrated into the composite control and protection of the appliance. It is a simple, low-cost solution with small dimensions and represents an alternative to the existing hydraulic load cells in industrial use. Low non-linearity and small hysteresis were achieved. The influence of temperature leads to an error of 7.5%, which can be compensated for by software to meet the requirements of the target application.

  6. Piezoelectric Instruments of High Natural Frequency Vibration Characteristics and Protection Against Interference by Mass Forces

    NASA Technical Reports Server (NTRS)

    Gohlka, Werner

    1943-01-01

    The exploration of the processes accompanying engine combustion demands quick-responding pressure-recording instruments, among which the piezoelectric type has found widespread use because of its especially propitious properties as a vibration-recording instrument for high frequencies. Lacking appropriate test methods, the potential errors of piezoelectric recorders in dynamic measurements could until now only be estimated. In the present report a test method is described by means of which the resonance curves of the piezoelectric pickup can be determined; hence an instrumental appraisal of the vibration characteristics of piezoelectric recorders is obtainable.

  7. Imprudent Gastro-protective Approach in Majority of Specialists’ Clinics of a Tertiary Hospital

    PubMed Central

    Patel, Hardik Rameshbhai

    2016-01-01

    Introduction One out of four prescriptions in out-patient departments contains a gastro-protective drug (APUD) - PPI/H2 blockers/antacids/ulcer protectives. These drugs should be prescribed only when there is a justified indication. The aim was to assess prescriptions of gastro-protective agents for appropriateness and rationality in a tertiary care hospital setup. Materials and Methods It was a cross-sectional observational study conducted from Aug 2013 to Dec 2013 at OPDs of a Tertiary Care Teaching Hospital, Pune. A total of 260 prescriptions containing gastro-protective agents were analysed for appropriateness and rationality. Rationality of drug use was assessed by referring to standard textbooks and guidelines. Cost difference data were analysed by the Wilcoxon signed rank test using GraphPad Prism 6. Results The most common class of gastro-protective agents was proton pump inhibitors (PPIs) - 73.77% (pantoprazole and dexrabeprazole). Only 37.3% of prescriptions had an adequate indication for these drugs {GI prophylaxis (29.6%) and acid peptic disease treatment (7.7%)}. Two irrational fixed-dose combinations found in the study were PPI with a prokinetic agent (n=65) and PPI with an NSAID (n=2). Formulation, spelling and strength errors were found with 75 prescribed drugs. Medication instructions were lacking for most of the drugs. Drug interactions with co-prescribed drugs could be anticipated in 79 cases. Injudicious use of anti-peptic ulcer agents significantly increased the cost of prescriptions (p<0.0001). Conclusion Anti-ulcer drugs are overenthusiastically prescribed by all specialties, which can predispose to adverse effects, drug interactions, increased cost and even erroneous prescriptions. PMID:27134889

  8. Wound Ballistics Modeling for Blast Loading Blunt Force Impact and Projectile Penetration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Paul A.

    Light body armor development for the warfighter is based on trial-and-error testing of prototype designs against ballistic projectiles. Torso armor testing against blast is nonexistent but necessary to protect the heart and lungs. In tests against ballistic projectiles, protective apparel is placed over ballistic clay and the projectiles are fired into the armor/clay target. The clay represents the human torso and its behind-armor, permanent deflection is the principal metric used to assess armor protection. Although this approach provides relative merit assessment of protection, it does not examine the behind-armor blunt trauma to crucial torso organs. We propose a modeling and simulation (M&S) capability for wound injury scenarios to the head, neck, and torso of the warfighter. We will use this toolset to investigate the consequences of, and mitigation against, blast exposure, blunt force impact, and ballistic projectile penetration leading to damage of critical organs comprising the central nervous, cardiovascular, and respiratory systems. We will leverage Sandia codes and our M&S expertise on traumatic brain injury to develop virtual anatomical models of the head, neck, and torso and the simulation methodology to capture the physics of wound mechanics. Specifically, we will investigate virtual wound injuries to the head, neck, and torso without and with protective armor to demonstrate the advantages of performing injury simulations for the development of body armor. The proposed toolset constitutes a significant advance over current methods by providing a virtual simulation capability to investigate wound injury and optimize armor design without the need for extensive field testing.

  9. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
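    For concreteness, here is a minimal numpy sketch of weighted meta-regression contrasting the naive (assumed-weights) covariance with the Knapp and Hartung (2003) adjustment that the note finds performs best. It assumes the between-study variance tau^2 has already been estimated (e.g., by REML), and the five-trial dataset is hypothetical.

    ```python
    import numpy as np

    def re_meta_regression(y, v, X, tau2):
        """Weighted LS meta-regression; returns estimates plus the naive
        and Knapp-Hartung covariance matrices of the coefficients."""
        W = np.diag(1.0 / (v + tau2))            # inverse-variance weights
        cov_naive = np.linalg.inv(X.T @ W @ X)   # assumed-weights covariance
        beta = cov_naive @ X.T @ W @ y
        resid = y - X @ beta
        k, p = X.shape
        s2 = (resid @ W @ resid) / (k - p)       # KH scaling factor
        return beta, cov_naive, s2 * cov_naive   # KH: rescale the covariance

    # Hypothetical five-trial example with one binary covariate.
    y = np.array([0.10, 0.30, 0.25, 0.05, 0.40])   # effect estimates
    v = np.array([0.02, 0.03, 0.025, 0.04, 0.03])  # within-study variances
    X = np.column_stack([np.ones(5), [0, 1, 1, 0, 1]])
    beta, cov_n, cov_kh = re_meta_regression(y, v, X, tau2=0.01)
    print(beta, np.sqrt(np.diag(cov_n)), np.sqrt(np.diag(cov_kh)))
    ```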

  10. An Efficient Quantum Somewhat Homomorphic Symmetric Searchable Encryption

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqiang; Wang, Ting; Sun, Zhiwei; Wang, Ping; Yu, Jianping; Xie, Weixin

    2017-04-01

    In 2009, Gentry first introduced an ideal-lattice fully homomorphic encryption (FHE) scheme. Later, based on the approximate greatest common divisor problem, the learning with errors problem, or the learning with errors over rings problem, FHE developed rapidly, albeit with low efficiency and only computational security. Combining quantum mechanics, Liang proposed a symmetric quantum somewhat homomorphic encryption (QSHE) scheme based on the quantum one-time pad, which offers unconditional security. It was then converted to a quantum fully homomorphic encryption scheme whose evaluation algorithm is based on the secret key. Compared with Liang's QSHE scheme, we propose a more efficient QSHE scheme for classical input states with perfect security, which is used to encrypt the classical message, and the secret key is not required in the evaluation algorithm. Furthermore, an efficient symmetric searchable encryption (SSE) scheme is constructed based on our QSHE scheme. SSE is important in cloud storage, as it allows users to offload search queries to the untrusted cloud. The cloud is then responsible for returning encrypted files that match the (also encrypted) search queries, which protects users' privacy.
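    The quantum one-time pad underlying the QSHE construction encrypts each qubit by applying X^a Z^b with two fresh secret key bits (a, b); without the key, the ciphertext is maximally mixed. A minimal single-qubit numpy simulation of the pad itself (not of the full homomorphic or searchable scheme):

    ```python
    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    Z = np.diag([1, -1])

    def qotp_encrypt(state, a, b):
        """Quantum one-time pad: apply X^a Z^b with key bits (a, b)."""
        return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ state

    def qotp_decrypt(state, a, b):
        """Apply the inverse (X^a Z^b)^dagger = Z^b X^a (both self-inverse)."""
        return np.linalg.matrix_power(Z, b) @ np.linalg.matrix_power(X, a) @ state

    psi = np.array([0.6, 0.8])  # arbitrary normalized qubit state
    a, b = (int(bit) for bit in np.random.default_rng(1).integers(0, 2, size=2))
    cipher = qotp_encrypt(psi, a, b)
    assert np.allclose(qotp_decrypt(cipher, a, b), psi)
    ```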

  11. An Unsupervised Deep Hyperspectral Anomaly Detector

    PubMed Central

    Ma, Ning; Peng, Yu; Wang, Shaojun

    2018-01-01

    Hyperspectral image (HSI) based detection has attracted considerable attention recently in agriculture, environmental protection and military applications as different wavelengths of light can be advantageously used to discriminate different types of objects. Unfortunately, estimating the background distribution and the detection of interesting local objects is not straightforward, and anomaly detectors may give false alarms. In this paper, a Deep Belief Network (DBN) based anomaly detector is proposed. The high-level features and reconstruction errors are learned through the network in a manner which is not affected by previous background distribution assumptions. To reduce contamination by local anomalies, adaptive weights are constructed from reconstruction errors and statistical information. Using the code image generated during DBN inference and modified by the adaptively updated weights, a local Euclidean distance between pixels under test and their neighboring pixels is used to determine the anomaly targets. Experimental results on synthetic and recorded HSI datasets show the proposed method outperforms the classic global Reed-Xiaoli detector (RXD), local RX detector (LRXD) and the state-of-the-art Collaborative Representation detector (CRD). PMID:29495410
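    A rough sketch of the reconstruction-error-weighted, local-distance scoring idea, using PCA as a hypothetical stand-in for the paper's DBN codes (the DBN itself, its training, and the exact adaptive-weighting rule are not reproduced here):

    ```python
    import numpy as np

    def anomaly_scores(cube, n_components=5, eps=1e-6):
        """Sketch: reconstruction-error-weighted local-distance anomaly map.
        cube: (rows, cols, bands) hyperspectral image."""
        r, c, b = cube.shape
        pixels = cube.reshape(-1, b).astype(float)
        mu = pixels.mean(axis=0)
        # PCA "code": project onto leading right-singular vectors.
        _, _, Vt = np.linalg.svd(pixels - mu, full_matrices=False)
        code = (pixels - mu) @ Vt[:n_components].T
        recon = code @ Vt[:n_components] + mu
        err = np.linalg.norm(pixels - recon, axis=1)   # reconstruction error
        weights = 1.0 / (err + eps)  # down-weight likely anomalous pixels
        codes = (code * weights[:, None]).reshape(r, c, -1)
        # Local Euclidean distance to the 4-neighbour mean of the code image.
        pad = np.pad(codes, ((1, 1), (1, 1), (0, 0)), mode="edge")
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1]
                 + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
        return np.linalg.norm(codes - neigh, axis=2)

    scores = anomaly_scores(np.random.rand(32, 32, 20))
    print(scores.shape)  # (32, 32); large values flag candidate anomalies
    ```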

  12. Electronic health systems: challenges faced by hospital-based providers.

    PubMed

    Agno, Christina Farala; Guo, Kristina L

    2013-01-01

    The purpose of this article is to discuss specific challenges faced by hospitals adopting the use of electronic medical records and implementing electronic health record (EHR) systems. Challenges include user and information technology support; ease of technical use and software interface capabilities; compliance; and financial, legal, workforce training, and development issues. Electronic health records are essential to preventing medical errors, increasing consumer trust and use of the health system, and improving quality and overall efficiency. Government efforts are focused on ways to accelerate the adoption and use of EHRs as a means of facilitating data sharing, protecting health information privacy and security, quickly identifying emerging public health threats, and reducing medical errors and health care costs and increasing quality of care. This article will discuss physician and nonphysician staff training before, during, and after implementation; the effective use of EHR systems' technical features; the selection of a capable and secure EHR system; and the development of collaborative system implementation. Strategies that are necessary to help health care providers achieve successful implementation of EHR systems will be addressed.

  13. Relationships between maternal emotional expressiveness and children's sensitivity to teacher criticism.

    PubMed

    Mizokawa, Ai

    2013-01-01

    Caregivers' emotional responses to children influence children's social and emotional development. This study investigated the association between maternal emotional expressiveness in the context of mother-child interactions and young children's sensitivity to teacher criticism. Sensitivity to teacher criticism was assessed among 53 Japanese preschoolers using hypothetical scenarios in which a puppet child representing the participant made a small error, and a puppet teacher pointed out the error. Self-report questionnaires were used to measure maternal expressiveness. The results demonstrated that negative maternal expressiveness toward one's own children was positively related to children's ratings of their own ability and negatively related to children's motivation to continue with the task after teacher criticism. Positive maternal expressiveness was not related to children's sensitivity to criticism. These findings suggest that children who have experienced more negative emotion from mothers may be more likely to hold negative beliefs about how others will respond to their behavior more generally. This may, in turn, lead to a defensively positive view of one's own abilities and a disinclination to persevere as protection from additional opportunities for teacher evaluation.

  14. Protective effect of C-peptide on experimentally induced diabetic nephropathy and the possible link between C-peptide and nitric oxide.

    PubMed

    Elbassuoni, Eman A; Aziz, Neven M; El-Tahawy, Nashwa F

    2018-06-01

    Diabetic nephropathy is one of the major microvascular diabetic complications. Besides hyperglycemia, other factors contribute to the development of diabetic complications, such as the proinsulin connecting peptide, C-peptide. We describe the role of C-peptide replacement therapy in experimentally induced diabetic nephropathy and its potential mechanisms of action, studying the role of nitric oxide (NO) as a mediator of C-peptide effects by modulating its production in vivo with N G -nitro-l-arginine methyl ester (L-NAME). The renal injury markers measured were serum urea, creatinine, tumor necrosis factor alpha, and angiotensin II, together with malondialdehyde, total antioxidant, Bcl-2, and NO in renal tissue. In conclusion, diabetic induction resulted in islet degeneration and decreased insulin secretion, with its metabolic consequences and subsequent renal complications. C-peptide deficiency in diabetes may have contributed to these metabolic and renal derangements, since C-peptide treatment of the diabetic rats completely corrected them. The beneficial effects of C-peptide were partially antagonized by L-NAME coadministration, indicating that NO partially mediates C-peptide effects.

  15. A Degree Distribution Optimization Algorithm for Image Transmission

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    Luby Transform (LT) code is the first practical implementation of digital fountain code. The coding behavior of LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions are suggested by Luby. They work well in typical situations but not optimally in the case of finite encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT code. First, a selection scheme of sparse degrees for LT codes is introduced. Then the probability distribution is optimized over the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause the loss of synchronization between the encoder and the decoder. Therefore the proposed algorithm is designed for the image transmission situation. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising. Compared with LT code with the robust soliton distribution, the proposed algorithm clearly improves the final quality of recovered images at the same overhead.
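    For reference, the robust soliton distribution that the proposed algorithm is benchmarked against is fully specified by Luby's formulas; a direct implementation, with illustrative values for the constants c and delta:

    ```python
    import math

    def robust_soliton(k, c=0.1, delta=0.5):
        """Luby's robust soliton degree distribution mu(1)..mu(k)."""
        S = c * math.log(k / delta) * math.sqrt(k)
        rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
        tau = [0.0] * k
        pivot = int(round(k / S))
        for d in range(1, pivot):
            tau[d - 1] = S / (d * k)                     # boost low degrees
        if 1 <= pivot <= k:
            tau[pivot - 1] = S * math.log(S / delta) / k  # spike near k/S
        beta = sum(r + t for r, t in zip(rho, tau))       # normalizer
        return [(r + t) / beta for r, t in zip(rho, tau)]

    mu = robust_soliton(k=1000)
    print(f"sum={sum(mu):.6f}  mu(1)={mu[0]:.5f}  mu(2)={mu[1]:.4f}")
    ```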

  16. Embedding intensity image into a binary hologram with strong noise resistant capability

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-11-01

    A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from a binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image with our proposed method is superior to that of the state-of-the-art work reported.
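    The encoding pipeline is: JPEG-compress the intensity image to a bit stream, add error-correction redundancy, then embed the protected bits into the hologram. As a simplified stand-in for the paper's BCH code, the sketch below uses a Hamming(7,4) code, which corrects a single bit flip per 7-bit block (an actual BCH code corrects multiple errors per block):

    ```python
    import numpy as np

    # Hamming(7,4): systematic generator G = [I | P] and check matrix H = [P^T | I].
    G = np.array([[1,0,0,0,1,1,0],[0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],[0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],[1,0,1,1,0,1,0],[0,1,1,1,0,0,1]])

    def encode(nibble):                       # 4 data bits -> 7-bit codeword
        return (np.array(nibble) @ G) % 2

    def decode(word):                         # correct up to one flipped bit
        word = np.array(word).copy()
        syndrome = (H @ word) % 2
        if syndrome.any():                    # syndrome matches a column of H
            err = next(i for i in range(7)
                       if np.array_equal(H[:, i], syndrome))
            word[err] ^= 1
        return word[:4]                       # data bits are systematic

    cw = encode([1, 0, 1, 1])
    cw[5] ^= 1                                # simulate noise on the hologram
    assert decode(cw).tolist() == [1, 0, 1, 1]
    ```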

  17. Protective Behaviour of Citizens to Transport Accidents Involving Hazardous Materials: A Discrete Choice Experiment Applied to Populated Areas nearby Waterways

    PubMed Central

    de Bekker-Grob, Esther W.; Bergstra, Arnold D.; Bliemer, Michiel C. J.; Trijssenaar-Buhre, Inge J. M.; Burdorf, Alex

    2015-01-01

    Background To improve the information for and preparation of citizens at risk to hazardous material transport accidents, a first important step is to determine how different characteristics of hazardous material transport accidents will influence citizens’ protective behaviour. However, quantitative studies investigating citizens’ protective behaviour in case of hazardous material transport accidents are scarce. Methods A discrete choice experiment was conducted among subjects (19–64 years) living in the direct vicinity of a large waterway. Scenarios were described by three transport accident characteristics: odour perception, smoke/vapour perception, and the proportion of people in the environment that were leaving at their own discretion. Subjects were asked to consider each scenario as realistic and to choose the alternative that was most appealing to them: staying, seeking shelter, or escaping. A panel error component model was used to quantify how different transport accident characteristics influenced subjects’ protective behaviour. Results The response was 44% (881/1,994). The predicted probability that a subject would stay ranged from 1% in case of a severe looking accident till 62% in case of a mild looking accident. All three transport accident characteristics proved to influence protective behaviour. Particularly a perception of strong ammonia or mercaptan odours and visible smoke/vapour close to citizens had the strongest positive influence on escaping. In general, ‘escaping’ was more preferred than ‘seeking shelter’, although stated preference heterogeneity among subjects for these protective behaviour options was substantial. Males were less willing to seek shelter than females, whereas elderly people were more willing to escape than younger people. Conclusion Various characteristics of transport accident involving hazardous materials influence subjects’ protective behaviour. The preference heterogeneity shows that information needs to be targeted differently depending on gender and age to prepare citizens properly. PMID:26571374

  18. Camera system considerations for geomorphic applications of SfM photogrammetry

    USGS Publications Warehouse

    Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John

    2017-01-01

    The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community despite the details of these instruments being largely overlooked in current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating accuracies of four SfM datasets conducted over multiple years of a gravel bed river floodplain using independent ground check points, with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6–37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m3) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5–25% and decreased processing time by 10–30%. Regression analysis of 67 reviewed datasets revealed that the best explanatory variable to predict accuracy of SfM data is photographic scale. Despite the prevalent use of object distance ratios to describe scale, nominal ground sample distance is shown to be a superior metric, explaining 68% of the variability in mean absolute vertical error.
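    Since nominal ground sample distance (GSD) emerges as the best predictor of vertical error, it is worth recalling that it follows directly from the pinhole-camera relation GSD = pixel pitch x object distance / focal length. A quick computation with hypothetical consumer-camera values:

    ```python
    def ground_sample_distance(pixel_pitch_mm, distance_m, focal_length_mm):
        """Nominal GSD (m/pixel) from the standard pinhole-camera relation."""
        return pixel_pitch_mm * distance_m / focal_length_mm

    # Hypothetical camera: 4.8 um pixels, 24 mm lens, 100 m object distance.
    gsd = ground_sample_distance(0.0048, 100.0, 24.0)
    print(f"GSD = {gsd * 100:.1f} cm/pixel")  # 2.0 cm/pixel
    ```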

  19. Effects of calcium on the incidence of recurrent colorectal adenomas

    PubMed Central

    Veettil, Sajesh K.; Ching, Siew Mooi; Lim, Kean Ghee; Saokaew, Surasak; Phisalprapa, Pochamana; Chaiyakunapruk, Nathorn

    2017-01-01

    Abstract Background: Protective effects of calcium supplementation against colorectal adenomas have been documented in systematic reviews; however, the results have not been conclusive. Our objective was to update and systematically evaluate the evidence for calcium supplementation taking into consideration the risks of systematic and random error and to GRADE the evidence. Methods: The study comprised a systematic review with meta-analysis and trial sequential analysis (TSA) of randomized controlled trials (RCTs). We searched for RCTs published up until September 2016. Retrieved trials were evaluated using risk of bias. Primary outcome measures were the incidences of any recurrent adenomas and of advanced adenomas. Meta-analytic estimates were calculated with the random-effects model and random errors were evaluated with trial sequential analyses (TSAs). Results: Five randomized trials (2234 patients with a history of adenomas) were included. Two of the 5 trials showed either unclear or high risks of bias in most criteria. Meta-analysis of good quality RCTs suggest a moderate protective effect of calcium supplementation on recurrence of adenomas (relative risk [RR], 0.88 [95% CI 0.79–0.99]); however, its effects on advanced adenomas did not show statistical significance (RR, 1.02 [95% CI 0.67–1.55]). Subgroup analyses demonstrated a greater protective effect on recurrence of adenomas with elemental calcium dose ≥1600 mg/day (RR, 0.74 [95% CI 0.56–0.97]) compared to ≤1200 mg/day (RR, 0.84 [95% CI 0.73–0.97]). No major serious adverse events were associated with the use of calcium, but there was an increase in the incidence of hypercalcemia (P = .0095). TSA indicated a lack of firm evidence for a beneficial effect. Concerns with directness and imprecision rated down the quality of the evidence to “low.” Conclusion: The available good quality RCTs suggests a possible beneficial effect of calcium supplementation on the recurrence of adenomas; however, TSA indicated that the accumulated evidence is still inconclusive. Using GRADE-methodology, we conclude that the quality of evidence is low. Large well-designed randomized trials with low risk of bias are needed. PMID:28796047

  20. An effective biometric discretization approach to extract highly discriminative, informative, and privacy-protective binary representation

    NASA Astrophysics Data System (ADS)

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2011-12-01

    Biometric discretization derives a binary string for each user based on an ordered set of biometric features. This representative string ought to be discriminative, informative, and privacy protective when it is employed as a cryptographic key in various security applications upon error correction. However, it is commonly believed that satisfying the first and the second criteria simultaneously is not feasible, and a tradeoff between them is always definite. In this article, we propose an effective fixed bit allocation-based discretization approach which involves discriminative feature extraction, discriminative feature selection, unsupervised quantization (quantization that does not utilize class information), and linearly separable subcode (LSSC)-based encoding to fulfill all the ideal properties of a binary representation extracted for cryptographic applications. In addition, we examine a number of discriminative feature-selection measures for discretization and identify the proper way of setting an important feature-selection parameter. Encouraging experimental results vindicate the feasibility of our approach.

  1. p53 protects against genome instability following centriole duplication failure

    PubMed Central

    Lambrus, Bramwell G.; Uetake, Yumi; Clutario, Kevin M.; Daggubati, Vikas; Snyder, Michael; Sluder, Greenfield

    2015-01-01

    Centriole function has been difficult to study because of a lack of specific tools that allow persistent and reversible centriole depletion. Here we combined gene targeting with an auxin-inducible degradation system to achieve rapid, titratable, and reversible control of Polo-like kinase 4 (Plk4), a master regulator of centriole biogenesis. Depletion of Plk4 led to a failure of centriole duplication that produced an irreversible cell cycle arrest within a few divisions. This arrest was not a result of a prolonged mitosis, chromosome segregation errors, or cytokinesis failure. Depleting p53 allowed cells that fail centriole duplication to proliferate indefinitely. Washout of auxin and restoration of endogenous Plk4 levels in cells that lack centrioles led to the penetrant formation of de novo centrioles that gained the ability to organize microtubules and duplicate. In summary, we uncover a p53-dependent surveillance mechanism that protects against genome instability by preventing cell growth after centriole duplication failure. PMID:26150389

  2. Public health consequences on vulnerable populations from acute chemical releases.

    PubMed

    Ruckart, Perri Zeitz; Orr, Maureen F

    2008-07-09

    Data from a large, multi-state surveillance system on acute chemical releases were analyzed to describe the type of events that are potentially affecting vulnerable populations (children, elderly and hospitalized patients) in order to better prevent and plan for these types of incidents in the future. During 2003-2005, there were 231 events where vulnerable populations were within ¼ mile of the event and the area of impact was greater than 200 feet from the facility/point of release. Most events occurred on a weekday during times when day care centers or schools were likely to be in session. Equipment failure and human error caused a majority of the releases. Agencies involved in preparing for and responding to chemical emergencies should work with hospitals, nursing homes, day care centers, and schools to develop policies and procedures for initiating appropriate protective measures and managing the medical needs of patients. Chemical emergency response drills should involve the entire community to protect those that may be more susceptible to harm.

  3. Public Health Consequences on Vulnerable Populations from Acute Chemical Releases

    PubMed Central

    Ruckart, Perri Zeitz; Orr, Maureen F.

    2008-01-01

    Data from a large, multi-state surveillance system on acute chemical releases were analyzed to describe the type of events that are potentially affecting vulnerable populations (children, elderly and hospitalized patients) in order to better prevent and plan for these types of incidents in the future. During 2003–2005, there were 231 events where vulnerable populations were within ¼ mile of the event and the area of impact was greater than 200 feet from the facility/point of release. Most events occurred on a weekday during times when day care centers or schools were likely to be in session. Equipment failure and human error caused a majority of the releases. Agencies involved in preparing for and responding to chemical emergencies should work with hospitals, nursing homes, day care centers, and schools to develop policies and procedures for initiating appropriate protective measures and managing the medical needs of patients. Chemical emergency response drills should involve the entire community to protect those that may be more susceptible to harm. PMID:21572842

  4. Self-referenced axial chromatic dispersion measurement in multiphoton microscopy through 2-color THG imaging.

    PubMed

    Du, Yu; Zhuang, Ziwei; He, Jiexing; Liu, Hongji; Qiu, Ping; Wang, Ke

    2018-05-16

    With tunable excitation light, multiphoton microscopy (MPM) is widely used for imaging biological structures at subcellular resolution. Axial chromatic dispersion, present in virtually every transmissive optical system including the multiphoton microscope, leads to focal (and the resultant image) plane separation. Here we demonstrate experimentally a technique to measure the axial chromatic dispersion in a multiphoton microscope, using simultaneous 2-color third-harmonic generation (THG) imaging excited by a 2-color soliton source with tunable wavelength separation. Our technique is self-referenced, eliminating potential measurement error when 1-color tunable excitation light is used which necessitates reciprocating motion of the mechanical translation stage. Using this technique, we demonstrate measured axial chromatic dispersion with 2 different objective lenses in a multiphoton microscope. Further measurement in a biological sample also indicates that this axial chromatic dispersion, in combination with 2-color imaging, may open up opportunity for simultaneous imaging of two different axial planes.

  5. Flexible Macroblock Ordering for Context-Aware Ultrasound Video Transmission over Mobile WiMAX

    PubMed Central

    Martini, Maria G.; Hewage, Chaminda T. E. R.

    2010-01-01

    The most recent network technologies are enabling a variety of new applications, thanks to the provision of increased bandwidth and better management of Quality of Service. Nevertheless, telemedical services involving multimedia data are still lagging behind, due to the concern of the end users, that is, clinicians and also patients, about the low quality provided. Indeed, emerging network technologies should be appropriately exploited by designing the transmission strategy focusing on quality provision for end users. Stemming from this principle, we propose here a context-aware transmission strategy for medical video transmission over WiMAX systems. Context, in terms of regions of interest (ROI) in a specific session, is taken into account, and compression/transmission strategies are tailored to such context information. We present a methodology based on H.264 medical video compression and Flexible Macroblock Ordering (FMO) for ROI identification. Two different unequal error protection methodologies, providing higher protection to the most diagnostically relevant data, are presented. PMID:20827292
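    In H.264, FMO assigns each macroblock to a slice group, so diagnostically important macroblocks can travel in their own, more strongly protected slices. A minimal sketch of an explicit macroblock-to-slice-group map with a hypothetical rectangular ROI and a per-group unequal-error-protection policy:

    ```python
    import numpy as np

    def fmo_explicit_map(mb_rows, mb_cols, roi):
        """Explicit FMO macroblock-to-slice-group map: slice group 0 holds the
        diagnostic ROI (stronger protection); group 1 holds the background."""
        mb_map = np.ones((mb_rows, mb_cols), dtype=int)
        r0, c0, r1, c1 = roi
        mb_map[r0:r1, c0:c1] = 0
        return mb_map

    # Hypothetical 9x11-macroblock frame with one clinician-selected ROI.
    mb_map = fmo_explicit_map(9, 11, roi=(2, 3, 7, 9))
    protection = {0: "high (more FEC)", 1: "low"}  # UEP policy per slice group
    for group, count in zip(*np.unique(mb_map, return_counts=True)):
        print(f"group {group}: {count} MBs -> {protection[group]}")
    ```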

  6. Caregiving for Parkinson's disease patients: an exploration of a stress-appraisal model for quality of life and burden.

    PubMed

    Goldsworthy, Belinda; Knowles, Simon

    2008-11-01

    Extending the 2002 stress-appraisal model of Chappell and Reid, we examined the relationships between caregiver stressors (e.g., cognitive impairment and functional dependency of the recipient), appraisal (informal hours of caregiving), and protective factors (e.g., social support, self-esteem, and quality of the caregiver-recipient relationship) associated with the burden and quality of life of Parkinson's disease caregivers. There were 136 caregivers (M = 64.59 years) who completed an online survey. Using structural equation modeling, we found that the extended stress-appraisal model of Chappell and Reid provided a good fit to the data (χ2 = 67.87, df = 55, p > .05; χ2/df = 1.23, Comparative Fit Index = 0.98, Root Mean Square Error of Approximation = 0.04). This study provides an important contribution to a growing field of research that applies theoretical models to investigate the stressors, appraisals, and protective factors that impact caregiver well-being.

  7. Repurposing the clinical record: can an existing natural language processing system de-identify clinical notes?

    PubMed

    Morrison, Frances P; Li, Li; Lai, Albert M; Hripcsak, George

    2009-01-01

    Electronic clinical documentation can be useful for activities such as public health surveillance, quality improvement, and research, but existing methods of de-identification may not provide sufficient protection of patient data. The general-purpose natural language processor MedLEE retains medical concepts while excluding the remaining text, so, in addition to processing text into structured data, it may be able to provide a secondary benefit of de-identification. Without modifying the system, the authors tested the ability of MedLEE to remove protected health information (PHI) by comparing 100 outpatient clinical notes with the corresponding XML-tagged output. Of 809 instances of PHI, 26 (3.2%) were detected in output as a result of processing and identification errors. However, PHI in the output was highly transformed, much appearing as normalized terms for medical concepts, potentially making re-identification more difficult. The MedLEE processor may be a good enhancement to other de-identification systems, both removing PHI and providing coded data from clinical text.

  8. The role of medical staff in providing patients rights.

    PubMed

    Masic, Izet; Izetbegovic, Sebija

    2014-01-01

    Among the priority basic human rights are, without a doubt, the right to life and to health and social protection. The process of implementing human rights in the everyday life of ordinary citizens in post-war Bosnia and Herzegovina faces huge objective and subjective difficulties. Citizens need affordable, adequate healthcare facilities that are open to all on equal terms. The term hospital activity implies a set of measures, activities and procedures undertaken for the purpose of treatment, diagnosis and medical rehabilitation of patients in the respective health institutions. Principles of hospital care should include: comprehensiveness (hospital care is available to all citizens equally); continuity (continuous medical care is provided to all users); availability (approximately equal protection of rights is provided for all citizens). Education of health professionals: the usual threats to patient safety include medical errors, infections acquired in the hospital, unnecessary exposure to high doses of radiation and the use of the wrong drug. For a doctor, continuing professional education is an everyday, lifelong obligation.

  9. Innovative Advances in Connectivity and Community Pharmacist Patient Care Services: Implications for Patient Safety.

    PubMed

    Bacci, Jennifer L; Berenbrok, Lucas A

    2018-06-07

    The scope of community pharmacy practice has expanded beyond the provision of drug product to include the provision of patient care services. Likewise, the community pharmacist's approach to patient safety must also expand beyond prevention of errors during medication dispensing to include optimization of medications and prevention of adverse events throughout the entire medication use process. Connectivity to patient data and other healthcare providers has been a longstanding challenge in community pharmacy with implications for the delivery and safety of patient care. Here, we describe three innovative advances in connectivity in community pharmacy practice that enhance patient safety in the provision of community pharmacist patient care services across the entire medication use process. Specifically, we discuss the growing use of immunization information systems, quality improvement platforms, and health information exchanges in community pharmacy practice and their implications for patient safety.

  10. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions

    PubMed Central

    Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (from agreement with the reference method up to 12.4% error), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration. Given the ease-of-use and cost benefits of test strips, we recommend further development of test strips robust to pH variation and appropriate for Ebola-relevant chlorine solution concentrations. PMID:27243817
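    The accuracy and precision figures quoted above are straightforward to reproduce from replicate measurements: percent error of the replicate mean against the reference method, and percent relative standard deviation across replicates. A small sketch with hypothetical quintuplicate readings:

    ```python
    import numpy as np

    def accuracy_precision(measured, reference):
        """Percent error vs. the reference method (accuracy) and percent
        relative SD across replicates (precision) for one solution."""
        m = np.asarray(measured, dtype=float)
        pct_error = 100 * abs(m.mean() - reference) / reference
        pct_rsd = 100 * m.std(ddof=1) / m.mean()
        return pct_error, pct_rsd

    # Hypothetical quintuplicate readings of a 0.5% chlorine solution.
    err, rsd = accuracy_precision([0.52, 0.49, 0.51, 0.50, 0.53], reference=0.50)
    print(f"accuracy error = {err:.1f}%, precision (RSD) = {rsd:.1f}%")
    ```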

  11. Thermomechanical assessment of the effects of a jaw-beam angle during beam impact on Large Hadron Collider collimators

    NASA Astrophysics Data System (ADS)

    Cauchi, Marija; Assmann, R. W.; Bertarelli, A.; Carra, F.; Lari, L.; Rossi, A.; Mollicone, P.; Sammut, N.

    2015-02-01

    The correct functioning of a collimation system is crucial to safely and successfully operate high-energy particle accelerators, such as the Large Hadron Collider (LHC). However, the requirements to handle high-intensity beams can be demanding, and accident scenarios must be well studied in order to assess if the collimator design is robust against possible error scenarios. One of the catastrophic, though not very probable, accident scenarios identified within the LHC is an asynchronous beam dump. In this case, one (or more) of the 15 precharged kicker circuits fires out of time with the abort gap, spraying beam pulses onto LHC machine elements before the machine protection system can fire the remaining kicker circuits and bring the beam to the dump. If a proton bunch directly hits a collimator during such an event, severe beam-induced damage such as magnet quenches and other equipment damage might result, with consequent downtime for the machine. This study investigates a number of newly defined jaw error cases, which include angular misalignment errors of the collimator jaw. A numerical finite element method approach is presented in order to precisely evaluate the thermomechanical response of tertiary collimators to beam impact. We identify the most critical and interesting cases, and show that a tilt of the jaw can actually mitigate the effect of an asynchronous dump on the collimators. Relevant collimator damage limits are taken into account, with the aim to identify optimal operational conditions for the LHC.

  12. Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes

    NASA Astrophysics Data System (ADS)

    Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd

    2016-04-01

    In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). A part of this project included 277 full-scale drop tests at three different quarries in Austria, with key parameters of the rock fall trajectories recorded. The tests involved a total of 277 boulders ranging from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock belonging to igneous, metamorphic and volcanic types. In this paper the results of the tests are used for the calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. With two parameters selected for calibration, advanced techniques including Markov chain Monte Carlo, maximum likelihood and root mean square error (RMSE) minimization are utilized to minimize the error. Validation of the model based on the cross-validation technique reveals that, in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95th-percentile and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.
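
    As a schematic of the two-parameter calibration described above, the sketch below minimizes the RMSE between observed and simulated runout distances. The simulator, parameter names and data are invented stand-ins, not the authors' stochastic model.

      # Schematic two-parameter calibration by RMSE minimization, in the
      # spirit of the procedure described above. `simulate_runout` is a
      # stand-in for the rock fall model; all values are invented.
      import numpy as np
      from scipy.optimize import minimize

      observed = np.array([42.0, 55.0, 61.0, 48.0, 70.0])   # measured runouts (m)
      slopes = np.array([30.0, 35.0, 38.0, 32.0, 41.0])     # slope angles (deg)

      def simulate_runout(params, slopes):
          """Stand-in model: runout as a simple function of two parameters."""
          rn, rt = params                    # normal/tangential restitution
          return rt * slopes * 1.5 + rn * 20.0

      def rmse(params):
          return np.sqrt(np.mean((observed - simulate_runout(params, slopes))**2))

      result = minimize(rmse, x0=[0.3, 0.8], bounds=[(0.0, 1.0), (0.0, 1.0)])
      print("calibrated parameters:", result.x, "RMSE:", result.fun)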

  13. System dynamic modelling of industrial growth and landscape ecology in China.

    PubMed

    Xu, Jian; Kang, Jian; Shao, Long; Zhao, Tianyu

    2015-09-15

    With the rapid development of large industrial corridors in China, the landscape ecology of the country is currently being affected. Therefore, in this study, a system dynamic model with a multi-dimensional nonlinear dynamic prediction function that considers industrial growth and landscape ecology is developed and verified to allow for more sustainable development. Firstly, relationships between industrial development and landscape ecology in China are examined, and five subsystems are then established: industry, population, urban economy, environment and landscape ecology. The main influencing factors are then examined for each subsystem to establish flow charts connecting those factors. Consequently, by connecting the subsystems, an overall industrial growth and landscape ecology model is established. Using actual data and landscape indices calculated using GIS for the Ha-Da-Qi industrial corridor, a typical industrial corridor in China, over the period 2005-2009, the model is validated in terms of historical behaviour, logical structure and future prediction: for 84.8% of the factors, the error rate of the model is less than 5%, the mean error rate across all factors is 2.96%, and the error of the simulation test for the landscape ecology subsystem is less than 2%. Moreover, the model has been applied to consider the changes in landscape indices under four industrial development modes, and the optimal industrial growth plan for landscape ecological protection has been examined using simulation predictions for 2015-2020. Copyright © 2015 Elsevier Ltd. All rights reserved.
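
    The historical-behaviour validation described above reduces to factor-by-factor comparison of simulated against observed values; a minimal sketch of that bookkeeping, with invented numbers, follows.

      # Minimal sketch of the historical-behaviour validation described
      # above: per-factor error rates between simulated and observed values,
      # the share of factors under a 5% error threshold, and the mean error
      # rate. All numbers are invented for illustration.
      observed  = {"industry_output": 120.0, "population": 5.6, "gdp": 310.0}
      simulated = {"industry_output": 123.5, "population": 5.5, "gdp": 318.0}

      errors = {k: abs(simulated[k] - observed[k]) / observed[k] * 100
                for k in observed}
      share_under_5 = sum(e < 5.0 for e in errors.values()) / len(errors) * 100
      mean_error = sum(errors.values()) / len(errors)

      for k, e in errors.items():
          print(f"{k}: {e:.2f}% error")
      print(f"{share_under_5:.1f}% of factors under 5% error; mean {mean_error:.2f}%")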

  14. Fine-scale landscape genetics of the American badger (Taxidea taxus): disentangling landscape effects and sampling artifacts in a poorly understood species

    PubMed Central

    Kierepka, E M; Latch, E K

    2016-01-01

    Landscape genetics is a powerful tool for conservation because it identifies landscape features that are important for maintaining genetic connectivity between populations within heterogeneous landscapes. However, using landscape genetics in poorly understood species presents a number of challenges, namely, limited life history information for the focal population and spatially biased sampling. Both obstacles can reduce statistical power, particularly in individual-based studies. In this study, we genotyped 233 American badgers in Wisconsin at 12 microsatellite loci to identify alternative statistical approaches that can be applied to poorly understood species in an individual-based framework. Badgers are protected in Wisconsin owing to an overall lack of life history information, so our study utilized partial redundancy analysis (RDA) and spatially lagged regressions to quantify how three landscape factors (Wisconsin River, Ecoregions and land cover) impacted gene flow. We also performed simulations to quantify errors created by spatially biased sampling. Statistical analyses first found that geographic distance was an important influence on gene flow, mainly driven by fine-scale positive spatial autocorrelations. After controlling for geographic distance, both RDA and the regressions found that the Wisconsin River and Agriculture were correlated with genetic differentiation. However, only Agriculture had an acceptable type I error rate (3–5%) to be considered biologically relevant. Collectively, this study highlights the benefits of combining robust statistics with error assessment via simulations, and provides a method for hypothesis testing in individual-based landscape genetics. PMID:26243136
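
    One generic way to obtain a simulation-based type I error estimate of the kind reported above is to regress permuted (null) genetic values on a landscape predictor many times and count false positives. The sketch below illustrates the idea only; it is not the RDA/spatially lagged regression pipeline used in the study, and all data are invented.

      # Generic sketch of a simulation-based type I error check: regress
      # permuted (null) genetic values on a landscape predictor many times
      # and count how often p < 0.05. Illustration only; not the study's
      # RDA/spatially lagged regression pipeline. Data are invented.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      n = 233                                  # number of sampled badgers
      landscape = rng.random(n)                # e.g., agriculture cover at sites
      genetic = rng.random(n)                  # stand-in genetic metric

      false_positives = 0
      n_sims = 1000
      for _ in range(n_sims):
          null_genetic = rng.permutation(genetic)   # break any true association
          res = stats.linregress(landscape, null_genetic)
          if res.pvalue < 0.05:
              false_positives += 1

      print(f"type I error rate: {false_positives / n_sims:.3f} (nominal 0.05)")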

  15. Environmental fate model for ultra-low-volume insecticide applications used for adult mosquito management

    USGS Publications Warehouse

    Schleier, Jerome J.; Peterson, Robert K.D.; Irvine, Kathryn M.; Marshall, Lucy M.; Weaver, David K.; Preftakes, Collin J.

    2012-01-01

    One of the more effective ways of managing high densities of adult mosquitoes that vector human and animal pathogens is ultra-low-volume (ULV) aerosol application of insecticides. The U.S. Environmental Protection Agency performs its human and ecological risk assessments using exposure assumptions and models that have not been validated for ULV insecticide applications. Currently, there is no validated model that can accurately predict the deposition of insecticides applied using ULV technology for adult mosquito management. In addition, little is known about the deposition and drift of small droplets like those used under the conditions encountered during ULV applications. The objective of this study was to perform field studies to measure environmental concentrations of insecticides and to develop a validated model to predict the deposition of ULV insecticides. The final regression model was selected by minimizing the Bayesian Information Criterion, and its prediction performance was evaluated using k-fold cross validation. The coefficients for formulation density and for the density-CMD (count median diameter) interaction were the largest in the model. The results showed that as the density of the formulation decreases, deposition increases. The interaction of density and CMD showed that higher-density formulations and larger droplets resulted in greater deposition. These results are supported by the aerosol physics literature. A k-fold cross validation demonstrated that the mean square error of the selected regression model is not biased, and the mean square error and mean square prediction error indicated good predictive ability.
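
    A minimal sketch of the model-selection workflow described above, combining BIC comparison of candidate linear models with k-fold cross validation, follows. The data are synthetic, and the predictor names (density, CMD, their interaction) merely mirror the study's variables.

      # Sketch of BIC-based selection among candidate linear models followed
      # by k-fold cross validation, as described above. Data are synthetic;
      # the predictors only mirror the study's variable names.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(0)
      n = 120
      density = rng.uniform(0.8, 1.2, n)
      cmd = rng.uniform(5.0, 30.0, n)               # droplet size proxy
      deposition = -2.0 * density + 0.1 * density * cmd + rng.normal(0, 0.3, n)

      candidates = {
          "density only":  np.column_stack([density]),
          "density + cmd": np.column_stack([density, cmd]),
          "density * cmd": np.column_stack([density, cmd, density * cmd]),
      }

      def bic(X, y):
          model = LinearRegression().fit(X, y)
          rss = np.sum((y - model.predict(X)) ** 2)
          k = X.shape[1] + 1                        # coefficients + intercept
          return n * np.log(rss / n) + k * np.log(n)

      best = min(candidates, key=lambda name: bic(candidates[name], deposition))
      print("selected by BIC:", best)

      # k-fold cross validation of the selected model's prediction error
      X = candidates[best]
      mse = []
      for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
          m = LinearRegression().fit(X[train], deposition[train])
          mse.append(np.mean((deposition[test] - m.predict(X[test])) ** 2))
      print("5-fold mean squared prediction error:", np.mean(mse))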

  16. Image based Monte Carlo Modeling for Computational Phantom

    NASA Astrophysics Data System (ADS)

    Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican

    2014-06-01

    The evaluation of the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct more realistic computational phantoms. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn) as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical image sets and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose were calculated for Rad-HUMAN. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Plan System (TPS), as well as radiation exposure of the human body in radiation protection.
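
    The conversion step described above, from segmented images to a computational phantom, can be pictured as mapping voxel labels to materials and volumes; the toy sketch below does this for a small synthetic label array. It is an illustration of the idea, not MCAM itself.

      # Toy illustration of image-based modeling: map a segmented voxel
      # array (integer organ labels) to material assignments and tally voxel
      # counts and volumes per material, the kind of bookkeeping an
      # automatic converter performs at much larger scale. Values invented.
      import numpy as np

      labels = np.zeros((4, 4, 4), dtype=int)    # tiny synthetic segmentation
      labels[1:3, 1:3, 1:3] = 1                  # "soft tissue" block
      labels[2, 2, 2] = 2                        # one "bone" voxel

      materials = {0: "air", 1: "soft tissue", 2: "bone"}
      voxel_volume_cm3 = 0.1 ** 3                # assumed voxel size

      for label, name in materials.items():
          count = int(np.sum(labels == label))
          print(f"material {label} ({name}): {count} voxels, "
                f"{count * voxel_volume_cm3:.4f} cm^3")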

  17. Contamination characteristics and source apportionment of trace metals in soils around Miyun Reservoir.

    PubMed

    Chen, Haiyang; Teng, Yanguo; Chen, Ruihui; Li, Jiao; Wang, Jinsheng

    2016-08-01

    Due to their toxicity and bioaccumulation, trace metals in soils can cause a wide range of toxic effects on animals, plants, microbes, and even humans. Recognizing the contamination characteristics of soil metals, and especially apportioning their potential sources, are necessary preconditions for pollution prevention and control. Over the past decades, several receptor models have been developed for source apportionment. Among them, positive matrix factorization (PMF) has gained popularity and was recommended by the US Environmental Protection Agency as a general modeling tool. In this study, an extended chemometrics model, multivariate curve resolution-alternating least squares based on maximum likelihood principal component analysis (MCR-ALS/MLPCA), was proposed for the source apportionment of soil metals and applied to identify the potential sources of trace metals in soils around Miyun Reservoir. Like PMF, the MCR-ALS/MLPCA model can incorporate measurement error information and non-negativity constraints in its calculation procedures. Model validation with a synthetic dataset suggested that MCR-ALS/MLPCA could recover acceptable source profiles even at relatively high error levels. When applied to identify the sources of trace metals in soils around Miyun Reservoir, the MCR-ALS/MLPCA model obtained profiles highly similar to those of PMF. The assessment of contamination status showed that the soils around the reservoir were polluted by trace metals to a slight-to-moderate degree but posed acceptable potential risks to the public. Mining activities, fertilizers and agrochemicals, and atmospheric deposition were identified as the potential anthropogenic sources, with contributions of 24.8%, 14.6%, and 13.3%, respectively. In order to protect the drinking water source of Beijing, special attention should be paid to metal inputs to soils from mining and agricultural activities.
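
    The core of the MCR-ALS step described above is an alternating least-squares loop with non-negativity constraints on both factors; a bare-bones sketch on synthetic data follows, omitting the MLPCA maximum-likelihood error weighting for brevity.

      # Bare-bones MCR-ALS on synthetic data: factorize X ~ C @ S.T with
      # non-negativity on both factors via alternating non-negative least
      # squares. The MLPCA error weighting used in the study is omitted.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      n_samples, n_metals, n_sources = 50, 8, 3
      C_true = rng.random((n_samples, n_sources))        # source contributions
      S_true = rng.random((n_metals, n_sources))         # source profiles
      X = C_true @ S_true.T + rng.normal(0, 0.01, (n_samples, n_metals))

      C = rng.random((n_samples, n_sources))             # random start
      for _ in range(50):                                # alternate the two fits
          S = np.array([nnls(C, X[:, j])[0] for j in range(n_metals)])
          C = np.array([nnls(S, X[i, :])[0] for i in range(n_samples)])

      residual = np.linalg.norm(X - C @ S.T) / np.linalg.norm(X)
      print(f"relative residual after ALS: {residual:.4f}")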

  18. Characteristics of Interruptions During Medication Administration: An Integrative Review of Direct Observational Studies.

    PubMed

    Schroers, Ginger

    2018-06-26

    The purpose of this review was to synthesize and summarize data gathered by direct observation on the characteristics of interruptions in the context of nursing medication administration in hospital settings. Interruptions are prevalent during the medication administration process performed by nurses in hospital settings and have been found to be associated with an increase in the frequency and severity of nursing medication administration errors. In addition, interruptions decrease task efficiency, leading to longer medication administration completion times. Integrative review. The electronic databases Cumulative Index of Nursing and Allied Health Literature (CINAHL), PubMed, PsycARTICLES, and Google Scholar were searched using the terms "interruptions" AND "medication administration" AND "direct observation". Nine articles met the inclusion criteria. Interruptions are likely to occur at least once during nursing medication administration processes in hospital settings. This finding applies both to medication administered to one patient, termed a medication pass, and to medication administered to multiple patients, termed a medication round. Interruptions are most commonly caused by another nurse or staff member, or are self-initiated, and last approximately one minute. A raised awareness among staff of the most common sources of interruptions may encourage changes that lead to a decrease in the occurrence of interruptions. In addition, nurse leaders can apply an understanding of the common characteristics of interruptions to guide research, policies, and educational methods aimed at interruption management strategies. The findings from this review can be used to guide the identification and development of targeted interventions and strategies that would have the most substantial impact in reducing and managing interruptions during medication administration. Interruption management strategies have the potential to lead to a decrease in medication errors and an increase in task efficiency. This article is protected by copyright. All rights reserved.

  19. Protective effects of amphetamine on gastric ulcerations induced by indomethacin in rats

    PubMed Central

    Sandor, Vlaicu; Cuparencu, Barbu; Dumitrascu, Dan L; Birt, Mircea A; Krausz, Tibor L

    2006-01-01

    AIM: To study the effects of amphetamine, an indirect-acting adrenomimetic compound, on indomethacin-induced gastric ulcerations in rats. METHODS: Male Wistar-Bratislava rats were randomly divided into four groups: Group 1 (control) received an ulcerogenic dose of indomethacin (50 μmol/kg), while Groups 2, 3 and 4 were treated with amphetamine (10, 25 and 50 μmol/kg, respectively). The drug was administered simultaneously with indomethacin and once again 4 h later. The animals were sacrificed 8 h after indomethacin treatment. The stomachs were opened and the incidence, number and severity of lesions were evaluated. The results were expressed as percentages and as mean ± standard error (mean ± SE). RESULTS: The incidence of ulceration in the control group was 100%. Amphetamine, at doses of 10, 25 and 50 μmol/kg, lowered the incidence to 88.89%, 77.78% and 37.5%, respectively. The protection ratio was positive: 24.14%, 55.17% and 80.6%, respectively. The total number of ulcerations per rat was 12.44 ± 3.69 in the control group. It decreased to 7.33 ± 1.89, 5.33 ± 2.38 and 2.25 ± 1.97 under the effects of the above-mentioned doses of amphetamine. CONCLUSION: Amphetamine affords significant dose-dependent protection against indomethacin-induced gastric ulcerations in rats. It is suggested that the adrenergic system is involved in gastric mucosal protection. PMID:17131481

  20. A Method for Response Time Measurement of Electrosensitive Protective Devices.

    PubMed

    Dźwiarek, Marek

    1996-01-01

    A great step toward the improvement of safety at work was made when electrosensitive protective devices (ESPDs) were applied to the protection of press and robot-assisted manufacturing system operators. The way the device is mounted is crucial. The parameters of ESPD mounting that ensure a safe distance from the controlled dangerous zone are response time, sensitivity, and the dimensions of the detection zone. The proposed experimental procedure for response time measurement is realized in two steps, with a test piece penetrating the detection zone twice. In the first step, low-speed penetration (at a speed v_m) enables the detection zone border to be localized. In the second step of the measurement, the probe is injected at a high speed v_d. The actuator rod position is measured, and when it equals the value L registered in the first step, time counting begins, together with monitoring of the state of the equipment under test (EUT) output relays. After the state changes, the time t_p is registered. The experimental procedure is realized on a special experimental stand. Because the stand has been constructed for certification purposes, its design satisfies the requirements imposed by Polski Komitet Normalizacyjny (PKN, 1995). The experimental results show the measurement error to be smaller than ±0.6 ms.
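
    The two-step timing logic described above can be sketched as a simple event loop: localize the zone border at low speed, then time from the moment the rod reaches the registered position L until the EUT relay changes state. The simulation below is illustrative only, with invented speeds and device latency; the real procedure runs on the dedicated test stand.

      # Illustrative simulation of the two-step response time measurement
      # described above. Step 1 localizes the detection zone border at low
      # speed v_m; step 2 injects the probe at high speed v_d and times from
      # the rod reaching the registered position L until the EUT output
      # relay changes state. All values are invented.
      DT = 0.0001                 # simulation time step, s
      BORDER = 0.250              # true detection zone border, m
      EUT_LATENCY = 0.0183        # device's true response time, s (to be measured)

      # Step 1: slow penetration at v_m to localize the border position L.
      v_m, pos = 0.01, 0.0
      while pos < BORDER:
          pos += v_m * DT
      L = pos                     # registered border position (small overshoot)

      # Step 2: fast injection at v_d; start timing when the rod reaches L,
      # stop when the relay drops out EUT_LATENCY after beam interruption.
      v_d, pos, t = 2.0, 0.0, 0.0
      t_detect = t_start = None
      while True:
          pos += v_d * DT
          t += DT
          if t_detect is None and pos >= BORDER:
              t_detect = t        # beam actually interrupted
          if t_start is None and pos >= L:
              t_start = t         # rod at registered position L: start counting
          if t_detect is not None and t - t_detect >= EUT_LATENCY:
              print(f"measured response time t_p: {(t - t_start) * 1e3:.1f} ms")
              break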
