Sample records for equal error rate

  1. A negentropy minimization approach to adaptive equalization for digital communication systems.

    PubMed

    Choi, Sooyong; Lee, Te-Won

    2004-07-01

    In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve the performance of a linear equalizer based on the minimum mean squared error (MMSE) criterion. Negentropy includes higher order statistical information, and its minimization provides improved convergence, performance and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the solution other than the MMSE one has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
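
    As a point of reference for the MMSE baseline that the NEGMIN criterion is compared against, the following minimal sketch trains a finite-length linear MMSE equalizer on known symbols. The channel taps, equalizer length and decision delay are illustrative assumptions, not values from the paper, and the NEGMIN cost itself is not implemented here.

```python
# Minimal sketch (not the paper's NEGMIN method): a finite-length linear MMSE
# equalizer trained on known symbols, the baseline the abstract compares against.
# Channel taps, equalizer length and decision delay are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, 0.2])          # assumed FIR channel
n_taps = 11                            # equalizer length (assumption)
delay = 5                              # decision delay (assumption)

symbols = rng.choice([-1.0, 1.0], size=5000)            # BPSK training symbols
received = np.convolve(symbols, h, mode="full")[:len(symbols)]
received += 0.05 * rng.standard_normal(len(received))   # additive noise

# Build the regressor matrix of received samples and solve the Wiener (MMSE) problem.
X = np.array([received[i - n_taps + 1:i + 1][::-1]
              for i in range(n_taps - 1, len(received))])
d = symbols[n_taps - 1 - delay:len(received) - delay]
w = np.linalg.lstsq(X, d, rcond=None)[0]                 # least-squares = MMSE estimate

decisions = np.sign(X @ w)
ber = np.mean(decisions != d)
print(f"training BER after MMSE equalization: {ber:.4f}")
```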

  2. The nearest neighbor and the bayes error rates.

    PubMed

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities Ek,l+1 ≤ E*(λ) ≤ Ek,l ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions Ek,l and dE*(λ) are equal.

  3. A joint equalization algorithm in high speed communication systems

    NASA Astrophysics Data System (ADS)

    Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin

    2018-02-01

    This paper presents a joint equalization algorithm for high speed communication systems. The algorithm combines the advantages of traditional equalization algorithms by using both pre-equalization and post-equalization. The pre-equalization stage uses the CMA algorithm, which is not sensitive to frequency offset. Pre-equalization is located before the carrier recovery loop so that the loop performs better and most of the frequency offset is overcome. The post-equalization stage uses the MMA algorithm in order to overcome the residual frequency offset. The paper first analyzes the advantages and disadvantages of several equalization algorithms and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results show the constellation diagrams and the bit error rate curve; both show that the proposed joint equalization algorithm is better than the traditional algorithms. The residual frequency offset is shown directly in the constellation diagrams. At an SNR of 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than with the CMA algorithm, 77 times better than with MMA equalization, and 9 times better than with CMA-MMA equalization.
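
    The pre-equalization stage described above relies on the constant modulus algorithm. Below is a minimal, self-contained sketch of the CMA tap update for a QPSK signal; the channel, step size and equalizer length are assumptions for illustration, and the paper's MMA post-equalization stage is not included.

```python
# Minimal sketch of the constant modulus algorithm (CMA) tap update used for
# blind pre-equalization; channel, step size and equalizer length are assumptions,
# not parameters from the paper.
import numpy as np

rng = np.random.default_rng(1)
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
symbols = const[rng.integers(0, 4, size=20000)]          # QPSK symbols
h = np.array([1.0, 0.3 + 0.2j, 0.1])                     # assumed channel
x = np.convolve(symbols, h, mode="full")[:len(symbols)]
x += 0.02 * (rng.standard_normal(len(x)) + 1j * rng.standard_normal(len(x)))

n_taps, mu = 15, 1e-3
w = np.zeros(n_taps, dtype=complex)
w[n_taps // 2] = 1.0                                     # center-spike initialization
R2 = np.mean(np.abs(symbols) ** 4) / np.mean(np.abs(symbols) ** 2)  # CM constant

for i in range(n_taps, len(x)):
    u = x[i - n_taps:i][::-1]                            # regressor (most recent first)
    y = np.dot(w.conj(), u)                              # equalizer output
    e = y * (np.abs(y) ** 2 - R2)                        # CMA error term
    w -= mu * e.conj() * u                               # stochastic gradient step

print("final tap magnitudes:", np.round(np.abs(w), 3))
```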

  4. Research on Signature Verification Method Based on Discrete Fréchet Distance

    NASA Astrophysics Data System (ADS)

    Fang, J. L.; Wu, W.

    2018-05-01

    This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication that uses a single signature feature. It addresses the heavy computational workload of extracting global feature templates in online handwritten signature authentication and the problem of unreasonable signature feature selection. In this experiment, the false acceptance rate (FAR) and false rejection rate (FRR) of the signatures are computed and the average equal error rate (AEER) is calculated. The feasibility of the combined template scheme is verified by comparing the average equal error rate of the combined template with that of the original template.
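
    Since this record set centers on the equal error rate, a short sketch of how an EER can be estimated from genuine and impostor match scores may be useful here. The score distributions below are synthetic assumptions, not data from the paper.

```python
# Minimal sketch of estimating the equal error rate (EER) from verification
# scores: FAR and FRR are swept over thresholds and their crossing point is taken.
# The genuine/impostor scores below are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
genuine = rng.normal(0.8, 0.1, 1000)    # scores for matching (genuine) pairs
impostor = rng.normal(0.4, 0.15, 1000)  # scores for non-matching (impostor) pairs

thresholds = np.linspace(0.0, 1.0, 1001)
far = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptance rate
frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejection rate

idx = np.argmin(np.abs(far - frr))      # threshold where FAR and FRR are closest
eer = (far[idx] + frr[idx]) / 2
print(f"threshold ~ {thresholds[idx]:.3f}, EER ~ {eer:.3%}")
```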

  5. Free-space optics mode-wavelength division multiplexing system using LG modes based on decision feedback equalization

    NASA Astrophysics Data System (ADS)

    Amphawan, Angela; Ghazi, Alaan; Al-dawoodi, Aras

    2017-11-01

    A free-space optics mode-wavelength division multiplexing (MWDM) system using Laguerre-Gaussian (LG) modes is designed using decision feedback equalization for controlling mode coupling and combating inter-symbol interference so as to increase channel diversity. In this paper, a data rate of 24 Gbps is achieved for an FSO MWDM channel of 2.6 km in length using feedback equalization. Simulation results show significant improvement in eye diagrams and bit-error rates when comparing performance before and after decision feedback equalization.

  6. Evaluating CMA equalization of SOQPSK-TG data for aeronautical telemetry

    NASA Astrophysics Data System (ADS)

    Cole-Rhodes, Arlene; KoneDossongui, Serge; Umuolo, Henry; Rice, Michael

    2015-05-01

    This paper presents the results of using a constant modulus algorithm (CMA) to recover shaped offset quadrature-phase shift keying (SOQPSK)-TG modulated data, which has been transmitted using the iNET data packet structure. This standard is defined and used for aeronautical telemetry. Based on the iNET-packet structure, the adaptive block processing CMA equalizer can be initialized using the minimum mean square error (MMSE) equalizer [3]. This CMA equalizer is being evaluated for use on iNET structured data, with initial tests being conducted on measured data which has been received in a controlled laboratory environment. Thus the CMA equalizer is applied at the receiver to data packets which have been experimentally generated in order to determine the feasibility of our equalization approach, and its performance is compared to that of the MMSE equalizer. Performance evaluation is based on computed bit error rate (BER) counts for these equalizers.

  7. Feedforward Equalizers for MDM-WDM in Multimode Fiber Interconnects

    NASA Astrophysics Data System (ADS)

    Masunda, Tendai; Amphawan, Angela

    2018-04-01

    In this paper, we present new tap configurations of a feedforward equalizer to mitigate mode coupling in a 60-Gbps 18-channel mode-wavelength division multiplexing system in a 2.5-km-long multimode fiber. The performance of the equalization is measured through analyses on eye diagrams, power coupling coefficients and bit-error rates.

  8. Equalization for a page-oriented optical memory system

    NASA Astrophysics Data System (ADS)

    Trelewicz, Jennifer Q.; Capone, Jeffrey

    1999-11-01

    In this work, a method of decision-feedback equalization is developed for a digital holographic channel that experiences moderate-to-severe imaging errors. Decision feedback is utilized, not only where the channel is well-behaved, but also near the edges of the camera grid that are subject to a high degree of imaging error. In addition to these effects, the channel is worsened by typical problems of holographic channels, including non-uniform illumination, dropouts, and stuck bits. The approach described in this paper builds on established methods for performing trained and blind equalization on time-varying channels. The approach is tested on experimental data sets. On most of these data sets, the method of equalization described in this work delivers at least an order of magnitude improvement in bit-error rate (BER) before error-correction coding (ECC). When ECC is introduced, the approach is able to recover stored data with no errors for many of the tested data sets. Furthermore, a low BER was maintained even over a range of small alignment perturbations in the system. It is believed that this equalization method can allow cost reductions to be made in page-memory systems, by allowing for a larger image area per page or less complex imaging components, without sacrificing the low BER required by data storage applications.
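
    The decision-feedback structure referred to above can be illustrated with a generic one-dimensional DFE adapted by LMS. The ISI channel, filter lengths and step size are assumptions; the paper's channel is a two-dimensional holographic page channel, which this sketch does not model.

```python
# Minimal sketch of a decision-feedback equalizer (DFE) adapted with LMS.
# The 1-D ISI channel, filter lengths and step size are illustrative assumptions;
# the paper applies decision feedback to a 2-D holographic page channel.
import numpy as np

rng = np.random.default_rng(3)
bits = rng.choice([-1.0, 1.0], size=20000)
h = np.array([1.0, 0.6, 0.3])                    # assumed ISI channel
r = np.convolve(bits, h, mode="full")[:len(bits)]
r += 0.05 * rng.standard_normal(len(r))

nf, nb, mu = 9, 3, 0.01                          # feedforward/feedback lengths, step size
wf = np.zeros(nf); wb = np.zeros(nb)
past = np.zeros(nb)                              # previous decisions fed back
errors = 0

for i in range(nf, len(bits)):
    u = r[i - nf:i][::-1]                        # feedforward regressor
    y = wf @ u - wb @ past                       # DFE output
    dec = 1.0 if y >= 0 else -1.0
    target = bits[i - 1]                         # training symbol (1-symbol delay, assumption)
    e = target - y
    wf += mu * e * u                             # LMS update, feedforward taps
    wb -= mu * e * past                          # LMS update, feedback taps
    errors += dec != target
    past = np.roll(past, 1); past[0] = dec       # shift decision into feedback line

print(f"symbol error rate: {errors / (len(bits) - nf):.4f}")
```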

  9. Adaptively combined FIR and functional link artificial neural network equalizer for nonlinear communication channel.

    PubMed

    Zhao, Haiquan; Zhang, Jiashu

    2009-04-01

    This paper proposes a novel, computationally efficient adaptive nonlinear equalizer based on a combination of a finite impulse response (FIR) filter and a functional link artificial neural network (CFFLANN) to compensate for linear and nonlinear distortions in nonlinear communication channels. This convex nonlinear combination improves the convergence speed while retaining a lower steady-state error. In addition, since the CFFLANN does not need the hidden layers that exist in conventional neural-network-based equalizers, it exhibits a simpler structure than traditional neural networks (NNs) and requires less computational burden during the training mode. Moreover, an appropriate adaptation algorithm for the proposed equalizer is derived from the modified least mean square (MLMS) algorithm. Results obtained from the simulations clearly show that the proposed equalizer using the MLMS algorithm can effectively eliminate linear and nonlinear distortions of various intensities and provides better anti-jamming performance. Furthermore, comparisons of the mean squared error (MSE), the bit error rate (BER), and the effect of the eigenvalue ratio (EVR) of the input correlation matrix are presented.

  10. 4.5-Gb/s RGB-LED based WDM visible light communication system employing CAP modulation and RLS based adaptive equalization.

    PubMed

    Wang, Yiguang; Huang, Xingxing; Tao, Li; Shi, Jianyang; Chi, Nan

    2015-05-18

    Inter-symbol interference (ISI) is one of the key problems that seriously limit the transmission data rate in high-speed VLC systems. To eliminate ISI and further improve system performance, a series of equalization schemes have been widely investigated. As an adaptive algorithm commonly used in wireless communication, RLS is also suitable for visible light communication due to its quick convergence and better performance. In this paper, for the first time we experimentally demonstrate a high-speed RGB-LED based WDM VLC system employing carrier-less amplitude and phase (CAP) modulation and recursive least squares (RLS) based adaptive equalization. An aggregate data rate of 4.5 Gb/s is successfully achieved over 1.5-m indoor free space transmission with the bit error rate (BER) below the 7% forward error correction (FEC) limit of 3.8×10⁻³. To the best of our knowledge, this is the highest data rate ever achieved in RGB-LED based VLC systems.
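
    The recursive least squares adaptation mentioned above follows the standard RLS recursion. The sketch below shows that recursion for a simple baseband equalizer; the filter length, forgetting factor and toy channel are assumptions rather than the paper's CAP/VLC setup.

```python
# Minimal sketch of a recursive least squares (RLS) adaptive equalizer update.
# Filter length, forgetting factor and the toy channel are assumptions; the paper
# applies RLS equalization to CAP-modulated VLC signals.
import numpy as np

rng = np.random.default_rng(4)
s = rng.choice([-1.0, 1.0], size=5000)
r = np.convolve(s, [1.0, 0.4, 0.2], mode="full")[:len(s)]
r += 0.05 * rng.standard_normal(len(r))

n, lam, delta = 9, 0.99, 100.0
w = np.zeros(n)
P = np.eye(n) * delta                     # inverse correlation matrix estimate

for i in range(n, len(s)):
    u = r[i - n:i][::-1]
    k = P @ u / (lam + u @ P @ u)         # gain vector
    e = s[i - 1] - w @ u                  # a priori error (1-symbol decision delay)
    w = w + k * e                         # coefficient update
    P = (P - np.outer(k, u @ P)) / lam    # Riccati update of inverse correlation

print("converged taps:", np.round(w, 3))
```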

  11. English speech sound development in preschool-aged children from bilingual English-Spanish environments.

    PubMed

    Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D

    2008-07-01

    English speech acquisition by typically developing 3- to 4-year-old children from monolingual English backgrounds was compared to English speech acquisition by typically developing 3- to 4-year-old children from bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time in phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences in error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points was similar for all 3 groups, suggesting that all will reach an adult-like system in English with exposure and practice.

  12. Inter-track interference mitigation with two-dimensional variable equalizer for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Vijaya Kumar, B. V. K.

    2017-05-01

    The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with a 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with a Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with a low-density parity-check (LDPC) decoder. Then, using the estimated bit information from the main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to the 2D fixed equalizer.

  13. Analysis of limiting information characteristics of quantum-cryptography protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sych, D V; Grishanin, Boris A; Zadkov, Viktor N

    2005-01-31

    The problem of increasing the critical error rate of quantum-cryptography protocols by varying a set of letters in a quantum alphabet for a space of fixed dimensionality is studied. Quantum alphabets forming regular polyhedra on the Bloch sphere and the continual alphabet including all the quantum states with equal weight are considered. It is shown that, in the absence of basis reconciliation, a protocol with the tetrahedral alphabet has the highest critical error rate among the protocols considered, while after basis reconciliation, a protocol with the continual alphabet possesses the highest critical error rate. (quantum optics and quantum computation)

  14. Equalization and detection for digital communication over nonlinear bandlimited satellite communication channels. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gutierrez, Alberto, Jr.

    1995-01-01

    This dissertation evaluates receiver-based methods for mitigating the effects of nonlinear bandlimited signal distortion present in high data rate satellite channels. The effects of the nonlinear bandlimited distortion are illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel. We employ mean-square error and probability of error as figures of merit. In addition, a receiver consisting of a fractionally-spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer. Significant probability of error performance improvement is found for multilevel modulation schemes. Also, it is found that the probability of error improvement is more significant for modulation schemes, both constant amplitude and multilevel, which require higher signal to noise ratios (i.e., higher modulation orders) for reliable operation. The maximum likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability of error lower bound for the Volterra and FSE-Volterra equalizers. However, this receiver has not previously been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity. Consequently, the probability of error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is evaluated.

  15. Single Carrier with Frequency Domain Equalization for Synthetic Aperture Underwater Acoustic Communications

    PubMed Central

    He, Chengbing; Xi, Rui; Wang, Han; Jing, Lianyou; Shi, Wentao; Zhang, Qunfei

    2017-01-01

    Phase-coherent underwater acoustic (UWA) communication systems typically employ multiple hydrophones in the receiver to achieve spatial diversity gain. However, small underwater platforms can only carry a single transducer, which cannot provide spatial diversity gain. In this paper, we propose single-carrier with frequency domain equalization (SC-FDE) for phase-coherent synthetic aperture acoustic communications, in which a virtual array is generated by the relative motion between the transmitter and the receiver. This paper presents synthetic aperture acoustic communication results using SC-FDE for data collected during a lake experiment in January 2016. The performance of two receiver algorithms is analyzed and compared: the frequency domain equalizer (FDE) and the hybrid time-frequency domain equalizer (HTFDE). The distances between the transmitter and the receiver in the experiment were about 5 km. The bit error rate (BER) and output signal-to-noise ratio (SNR) performances with different receiver elements and transmission numbers are presented. After combining multiple transmissions, error-free reception using a convolutional code at a data rate of 8 kbps was demonstrated. PMID:28684683
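
    The SC-FDE receiver structure, block FFT, one-tap per-bin MMSE weighting and inverse FFT, can be sketched as follows. The multipath channel, block length, cyclic prefix and SNR are illustrative assumptions, and perfect channel knowledge is assumed in place of the estimation used in the experiment.

```python
# Minimal sketch of single-carrier frequency-domain equalization (SC-FDE):
# a cyclic-prefixed block is FFT'd, equalized with one-tap MMSE weights per bin,
# and transformed back. Channel, block size and SNR are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
N, cp = 256, 16
h = np.array([0.9, 0.4, 0.2, 0.1])                 # assumed multipath channel
snr_lin = 10 ** (15 / 10)

s = rng.choice([-1.0, 1.0], size=N)                # BPSK block
tx = np.concatenate([s[-cp:], s])                  # add cyclic prefix
rx = np.convolve(tx, h)[:len(tx)]
rx += rng.standard_normal(len(rx)) / np.sqrt(snr_lin)
rx = rx[cp:cp + N]                                 # strip cyclic prefix

H = np.fft.fft(h, N)                               # channel frequency response (assumed known)
R = np.fft.fft(rx)
W = np.conj(H) / (np.abs(H) ** 2 + 1 / snr_lin)    # per-bin MMSE weights
y = np.real(np.fft.ifft(W * R))                    # equalized time-domain block

ber = np.mean(np.sign(y) != s)
print(f"block BER after SC-FDE: {ber:.4f}")
```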

  16. Performance analysis of adaptive equalization for coherent acoustic communications in the time-varying ocean environment.

    PubMed

    Preisig, James C

    2005-07-01

    Equations are derived for analyzing the performance of channel estimate based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ²_s) of each equalizer. This error is decomposed into two components. These are the minimum achievable error (σ²_0) and the excess error (σ²_e). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel estimate based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.
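
    Writing out the decomposition described in the abstract (assuming the two components add, as the wording suggests):

```latex
% Mean squared soft decision error split into the minimum achievable error
% and the excess error due to channel-estimation errors (notation from the abstract).
\sigma_{s}^{2} = \sigma_{0}^{2} + \sigma_{e}^{2}
```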

  17. New spatial diversity equalizer based on PLL

    NASA Astrophysics Data System (ADS)

    Rao, Wei

    2011-10-01

    A new spatial diversity equalizer (SDE) based on a phase-locked loop (PLL) is proposed to overcome inter-symbol interference (ISI) and phase rotations simultaneously in digital communication systems. The proposed SDE combines an equal gain combining technique based on the well-known constant modulus algorithm (CMA) for blind equalization with a PLL. Compared with a conventional SDE, the proposed SDE has not only a faster convergence rate and lower residual error but also the ability to recover carrier phase rotation. The efficiency of the method is demonstrated by computer simulation.

  18. Augmented burst-error correction for UNICON laser memory. [digital memory

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1974-01-01

    A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long Fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing, shorter inner Fire code for burst error correction. The inner Fire code is an (80,64) code shortened from the (630,614) code, and it is used to correct a single burst error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single burst error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented in hardware. A minicomputer, currently used as the UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.

  19. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    PubMed

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  20. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416

  1. Decision feedback equalizer for holographic data storage.

    PubMed

    Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo

    2018-05-20

    Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To solve the problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent the error propagation problem, which is a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.

  2. Iterative Frequency Domain Decision Feedback Equalization and Decoding for Underwater Acoustic Communications

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Ge, Jian-Hua

    2012-12-01

    Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications with inter-symbol interference (ISI) caused by multi-path propagation, especially in shallow water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol rate and fractional rate sampling in the frequency domain (FD) and a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in the lake and the finite-element ray tracing (Bellhop) method, the shallow water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver can significantly improve performance and obtain better data transmission than FD linear and adaptive decision feedback equalizers, especially when adopting fractional rate sampling.

  3. The architecture of blind equalizer for MIMO free space optical communication system

    NASA Astrophysics Data System (ADS)

    Li, Hongwei; Huang, Yongmei

    2016-10-01

    The free space optical (FSO) communication system has attracted many researchers from different countries, owing to its advantages such as high security, high speed and resistance to interference. Among all the channels used by FSO communication systems, the atmosphere channel is particularly difficult to deal with, for at least two reasons. One is the scintillation of the optical carrier intensity caused by atmospheric turbulence, and the other is the multipath effect caused by optical scattering. Many studies have shown that MIMO (Multiple Input Multiple Output) technology can effectively overcome the scintillation of the optical carrier through the atmosphere. The background of this paper is therefore a MIMO system which includes multiple optical transmitting antennas and multiple optical receiving antennas. Many particles such as haze, water droplets and aerosols exist widely in the atmosphere. When the optical carrier meets these particles, scattering is inevitable, which leads to the multipath effect. As a result, an optical pulse transmitted by the optical transmitter becomes wider, to some extent, by the time it reaches the optical receiver. If the information transmission rate is quite low, the multipath effect has little influence on the bit error rate (BER) of the communication system. Once the information transmission rate increases to a high level, the multipath effect produces serious inter-symbol interference (ISI) and the bit error rate increases severely. In order to take advantage of the FSO communication system, the inter-symbol interference problem must be solved, so channel equalization is necessary. This paper aims at selecting an equalizer and designing a suitable equalization algorithm for a MIMO free space optical communication system to overcome the serious bit error rate problem. The reliability and the efficiency of communication are two important indexes. For a MIMO communication system, there are two typical equalization methods. In the first method, every receiving antenna has an independent equalizer that does not use information derived from the other receiving antennas. In the second, the information derived from all of the receiving antennas is mixed according to definite rules, which is called space-time equalization. The former is discussed in this paper. The equalization algorithm can operate in a training mode or a non-training mode. The training mode requires training codes transmitted by the transmitter during the whole communication process, which reduces the communication efficiency. In order to improve the communication efficiency, a blind equalization algorithm, a non-training mode, is used to solve for the parameters of the equalizer. In this paper, the atmosphere channel is first described, focusing on the scintillation and multipath effect of the optical carrier. Then, the structure of an equalizer for a MIMO free space optical communication system is introduced. The next part of the paper introduces the principle of the blind equalization algorithm, and the simulation results are shown. At the end of the paper, the conclusions and future work are discussed.

  4. Reducing Error Rates for Iris Image using higher Contrast in Normalization process

    NASA Astrophysics Data System (ADS)

    Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa

    2017-08-01

    Iris recognition is among the most secure and fastest means of identification and authentication. However, an iris recognition system suffers a setback from blurring, low contrast and poor illumination due to low quality images, which compromise the accuracy of the system. The acceptance or rejection rates of a verified user depend solely on the quality of the image. In many cases, an iris recognition system with low image contrast could falsely accept or reject a user. Therefore, this paper adopts the histogram equalization technique to address the problems of false rejection rate (FRR) and false acceptance rate (FAR) by enhancing the contrast of the iris image. The histogram equalization technique enhances the image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that the histogram equalization technique reduces FRR and FAR compared to the existing techniques.
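
    The histogram equalization step referred to above can be sketched in a few lines; the input array here is random low-contrast data standing in for an iris image, so the values are purely illustrative.

```python
# Minimal sketch of grayscale histogram equalization of the kind the abstract
# applies to iris images at the normalization stage; the input array here is
# random illustrative data, not an iris image.
import numpy as np

rng = np.random.default_rng(6)
img = rng.integers(40, 120, size=(64, 64)).astype(np.uint8)   # low-contrast "image"

hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum()
cdf_min = cdf[cdf > 0].min()
# Map each gray level through the normalized cumulative distribution function.
lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
equalized = lut[img]

print("input range:", img.min(), img.max(), "-> output range:",
      equalized.min(), equalized.max())
```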

  5. Viterbi equalization for long-distance, high-speed underwater laser communication

    NASA Astrophysics Data System (ADS)

    Hu, Siqi; Mi, Le; Zhou, Tianhua; Chen, Weibiao

    2017-07-01

    In long-distance, high-speed underwater laser communication, because of the strong absorption and scattering processes, the laser pulse is stretched with the increase in communication distance and the decrease in water clarity. The maximum communication bandwidth is limited by laser-pulse stretching. Improving the communication rate increases the intersymbol interference (ISI). To reduce the effect of ISI, the Viterbi equalization (VE) algorithm is used to estimate the maximum-likelihood receiving sequence. The Monte Carlo method is used to simulate the stretching of the received laser pulse and the maximum communication rate at a wavelength of 532 nm in Jerlov IB and Jerlov II water channels with communication distances of 80, 100, and 130 m, respectively. The high-data rate communication performance for the VE and hard-decision algorithms is compared. The simulation results show that the VE algorithm can be used to reduce the ISI by selecting the minimum error path. The trade-off between the high-data rate communication performance and minor bit-error rate performance loss makes VE a promising option for applications in long-distance, high-speed underwater laser communication systems.

  6. Equalization of nonlinear transmission impairments by maximum-likelihood-sequence estimation in digital coherent receivers.

    PubMed

    Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro

    2010-03-01

    We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.

  7. The calculation of average error probability in a digital fibre optical communication system

    NASA Astrophysics Data System (ADS)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system in the presence of message-dependent, inhomogeneous, non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.

  8. Low complexity adaptive equalizers for underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Soflaei, Masoumeh; Azmi, Paeiz

    2014-08-01

    Interference due to scattering from the surface and reflection from the bottom is one of the most important problems for reliable communications in shallow water channels. One of the best ways to address this problem is to use adaptive equalizers. Convergence rate and misadjustment error of the adaptive algorithms play important roles in adaptive equalizer performance. In this paper, the affine projection algorithm (APA), selective regressor APA (SR-APA), the family of selective partial update (SPU) algorithms, the family of set-membership (SM) algorithms and the selective partial update selective regressor APA (SPU-SR-APA) are compared with conventional algorithms such as least mean squares (LMS) for underwater acoustic communications. We apply experimental data from the Strait of Hormuz to demonstrate the efficiency of the proposed methods over a shallow water channel. We observe that the steady-state mean square error (MSE) values of the SR-APA, SPU-APA, SPU-normalized least mean square (SPU-NLMS), SPU-SR-APA, SM-APA and SM-NLMS algorithms decrease in comparison with the LMS algorithm. These algorithms also have better convergence rates than LMS-type algorithms.
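
    Two of the baseline updates being compared above, LMS and its normalized variant NLMS, are sketched below on a toy channel. The channel, step sizes and filter length are assumptions; the APA, SPU and set-membership variants studied in the paper are not implemented here.

```python
# Minimal sketch contrasting the LMS and normalized LMS (NLMS) tap updates the
# abstract compares; channel, step sizes and filter length are assumptions.
import numpy as np

rng = np.random.default_rng(7)
s = rng.choice([-1.0, 1.0], size=10000)
r = np.convolve(s, [1.0, 0.5, 0.25], mode="full")[:len(s)]
r += 0.05 * rng.standard_normal(len(r))

n = 9
w_lms, w_nlms = np.zeros(n), np.zeros(n)
mu_lms, mu_nlms, eps = 0.01, 0.5, 1e-6

for i in range(n, len(s)):
    u = r[i - n:i][::-1]
    d = s[i - 1]                                     # 1-symbol decision delay (assumption)
    e_lms = d - w_lms @ u
    w_lms += mu_lms * e_lms * u                      # LMS: fixed step size
    e_nlms = d - w_nlms @ u
    w_nlms += mu_nlms * e_nlms * u / (u @ u + eps)   # NLMS: step normalized by ||u||^2

print("final |error| LMS:", abs(e_lms), " NLMS:", abs(e_nlms))
```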

  9. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  10. Pairwise comparisons and visual perceptions of equal area polygons.

    PubMed

    Adamic, P; Babiy, V; Janicki, R; Kakiashvili, T; Koczkodaj, W W; Tadeusiewicz, R

    2009-02-01

    The number of studies related to visual perception has been plentiful in recent years. Participants rated the areas of five randomly generated shapes of equal area, using a reference unit area that was displayed together with the shapes. Respondents were 179 university students from Canada and Poland. The average error estimated by respondents using the unit square was 25.75%. The error was substantially decreased to 5.51% when the shapes were compared to one another in pairs. This gain of 20.24% for this two-dimensional experiment was substantially better than the 11.78% gain reported in the previous one-dimensional experiments. This is the first statistically sound two-dimensional experiment demonstrating that pairwise comparisons improve accuracy.

  11. An experiment in software reliability: Additional analyses using data from automated replications

    NASA Technical Reports Server (NTRS)

    Dunham, Janet R.; Lauterbach, Linda A.

    1988-01-01

    A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that a program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.

  12. A successive overrelaxation iterative technique for an adaptive equalizer

    NASA Technical Reports Server (NTRS)

    Kosovych, O. S.

    1973-01-01

    An adaptive strategy for the equalization of pulse-amplitude-modulated signals in the presence of intersymbol interference and additive noise is reported. The successive overrelaxation iterative technique is used as the algorithm for the iterative adjustment of the equalizer coefficients during a training period for the minimization of the mean square error. With 2-cyclic and nonnegative Jacobi matrices, substantial improvement is demonstrated in the rate of convergence over the commonly used gradient techniques. The Jacobi theorems are also extended to nonpositive Jacobi matrices. Numerical examples strongly indicate that the improvements obtained for the special cases are possible for general channel characteristics. The technique is analytically demonstrated to decrease the mean square error at each iteration for a large range of parameter values for light or moderate intersymbol interference and for small intervals for general channels. Analytically, convergence of the relaxation algorithm is proven in a noisy environment and the coefficient variance is demonstrated to be bounded.
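
    The successive overrelaxation iteration itself can be sketched on a small symmetric positive-definite system standing in for the MMSE normal equations; the matrix, right-hand side and relaxation factor below are illustrative assumptions, not the paper's channel matrices.

```python
# Minimal sketch of the successive overrelaxation (SOR) iteration applied to a
# small symmetric positive-definite system A w = b, standing in for the MMSE
# normal equations; A, b and omega are illustrative assumptions.
import numpy as np

A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
omega = 1.2                      # relaxation factor (1 < omega < 2 over-relaxes)
w = np.zeros(3)

for it in range(50):
    for i in range(len(b)):
        sigma = A[i] @ w - A[i, i] * w[i]        # contribution of the other unknowns
        w[i] = (1 - omega) * w[i] + omega * (b[i] - sigma) / A[i, i]

print("SOR solution:", np.round(w, 6))
print("direct solve:", np.round(np.linalg.solve(A, b), 6))
```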

  13. Joint Frequency-Domain Equalization and Despreading for Multi-Code DS-CDMA Using Cyclic Delay Transmit Diversity

    NASA Astrophysics Data System (ADS)

    Yamamoto, Tetsuya; Takeda, Kazuki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. To further improve the BER performance, cyclic delay transmit diversity (CDTD) can be used. CDTD simultaneously transmits the same signal from different antennas after adding different cyclic delays to increase the number of equivalent propagation paths. Although a joint use of CDTD and MMSE-FDE for direct sequence code division multiple access (DS-CDMA) achieves larger frequency diversity gain, the BER performance improvement is limited by the residual inter-chip interference (ICI) after FDE. In this paper, we propose joint FDE and despreading for DS-CDMA using CDTD. Equalization and despreading are simultaneously performed in the frequency-domain to suppress the residual ICI after FDE. A theoretical conditional BER analysis is presented for the given channel condition. The BER analysis is confirmed by computer simulation.

  14. Random access to mobile networks with advanced error correction

    NASA Technical Reports Server (NTRS)

    Dippold, Michael

    1990-01-01

    A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) for Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit by the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel and soft decision decoding, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.

  15. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.

  16. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  17. Error analysis for reducing noisy wide-gap concentric cylinder rheometric data for nonlinear fluids - Theory and applications

    NASA Technical Reports Server (NTRS)

    Borgia, Andrea; Spera, Frank J.

    1990-01-01

    This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-square regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with equal accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates reasonably well its rheology. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.

  18. Global distortion of GPS networks associated with satellite antenna model errors

    NASA Astrophysics Data System (ADS)

    Cardellach, E.; Elósegui, P.; Davis, J. L.

    2007-07-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by ~1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.

  19. Global Distortion of GPS Networks Associated with Satellite Antenna Model Errors

    NASA Technical Reports Server (NTRS)

    Cardellach, E.; Elosequi, P.; Davis, J. L.

    2007-01-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by approximately 1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.

  20. A boundary-optimized rejection region test for the two-sample binomial problem.

    PubMed

    Gabriel, Erin E; Nason, Martha; Fay, Michael P; Follmann, Dean A

    2018-03-30

    Testing the equality of 2 proportions for a control group versus a treatment group is a well-researched statistical problem. In some settings, there may be strong historical data that allow one to reliably expect that the control proportion is one, or nearly so. While one-sample tests or comparisons to historical controls could be used, neither can rigorously control the type I error rate in the event the true control rate changes. In this work, we propose an unconditional exact test that exploits the historical information while controlling the type I error rate. We sequentially construct a rejection region by first maximizing the rejection region in the space where all controls have an event, subject to the constraint that our type I error rate does not exceed α for any true event rate; then with any remaining α we maximize the additional rejection region in the space where one control avoids the event, and so on. When the true control event rate is one, our test is the most powerful nonrandomized test for all points in the alternative space. When the true control event rate is nearly one, we demonstrate that our test has equal or higher mean power, averaging over the alternative space, than a variety of well-known tests. For the comparison of 4 controls and 4 treated subjects, our proposed test has higher power than all comparator tests. We demonstrate the properties of our proposed test by simulation and use our method to design a malaria vaccine trial. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  1. Linewidth-tolerant 10-Gbit/s 16-QAM transmission using a pilot-carrier based phase-noise cancelling technique.

    PubMed

    Nakamura, Moriya; Kamio, Yukiyoshi; Miyazaki, Tetsuya

    2008-07-07

    We experimentally demonstrated linewidth-tolerant 10-Gbit/s (2.5-Gsymbol/s) 16-quadrature amplitude modulation (QAM) by using a distributed-feedback laser diode (DFB-LD) with a linewidth of 30 MHz. Error-free operation, i.e., a bit-error rate (BER) below 10⁻⁹, was achieved in transmission over 120 km of standard single mode fiber (SSMF) without any dispersion compensation. The phase-noise cancelling capability provided by a pilot carrier and standard electronic pre-equalization to suppress inter-symbol interference (ISI) gave clear 16-QAM constellations and floor-less BER characteristics. We evaluated the BER characteristics by real-time measurement of six symbol error rates (SERs) (three decision thresholds for each of the I- and Q-components) with simultaneous constellation observation.

  2. 6.4 Tb/s (32 × 200 Gb/s) WDM direct-detection transmission with twin-SSB modulation and Kramers-Kronig receiver

    NASA Astrophysics Data System (ADS)

    Zhu, Yixiao; Jiang, Mingxuan; Ruan, Xiaoke; Chen, Zeyu; Li, Chenjia; Zhang, Fan

    2018-05-01

    We experimentally demonstrate 6.4 Tb/s wavelength division multiplexed (WDM) direct-detection transmission based on Nyquist twin-SSB modulation over 25 km SSMF with bit error rates (BERs) below the 20% hard-decision forward error correction (HD-FEC) threshold of 1.5 × 10⁻². The two sidebands of each channel are separately detected using a Kramers-Kronig receiver without MIMO equalization. We also carry out numerical simulations to evaluate the system robustness against I/Q amplitude imbalance, I/Q phase deviation and the extinction ratio of the modulator, respectively. Furthermore, we show in simulation that the requirement of a steep-edge optical filter can be relaxed if multi-input-multi-output (MIMO) equalization between the two sidebands is used.

  3. Usefulness of biological fingerprint in magnetic resonance imaging for patient verification.

    PubMed

    Ueda, Yasuyuki; Morishita, Junji; Kudomi, Shohei; Ueda, Katsuhiko

    2016-09-01

    The purpose of our study is to investigate the feasibility of automated patient verification using multi-planar reconstruction (MPR) images generated from three-dimensional magnetic resonance (MR) imaging of the brain. Several anatomy-related MPR images generated from the three-dimensional fast scout scan of each MR examination were used as biological fingerprint images in this study. The database for this study consisted of 730 temporal pairs of MR examinations of the brain. We calculated the correlation value between current and prior biological fingerprint images of the same patient, and also for all combinations of two images from different patients, to evaluate the effectiveness of our method for patient verification. The best performance of our system was as follows: a half-total error rate of 1.59% with a false acceptance rate of 0.023% and a false rejection rate of 3.15%, an equal error rate of 1.37%, and a rank-one identification rate of 98.6%. Our method makes it possible to verify the identity of the patient using only existing medical images, without additional equipment. Our method will also contribute to the management of patient misidentification errors caused by human error.

  4. A fast-initializing digital equalizer with on-line tracking for data communications

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Barksdale, W. J.

    1974-01-01

    A theory is developed for a digital equalizer for use in reducing intersymbol interference (ISI) on high speed data communications channels. The equalizer is initialized with a single isolated transmitter pulse, provided the signal-to-noise ratio (SNR) is not unusually low, and then switches to a decision-directed, on-line mode of operation that allows tracking of channel variations. Conditions for optimal tap-gain settings are obtained first for a transversal equalizer structure by using a mean squared error (MSE) criterion, a first order gradient algorithm to determine the adjustable equalizer tap-gains, and a sequence of isolated initializing pulses. Since the rate of tap-gain convergence depends on the eigenvalues of a channel output correlation matrix, convergence can be improved by applying a linear transformation to obtain a new correlation matrix.

  5. Pilot-Assisted Channel Estimation for Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, the frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion to a block of OFDM symbols and can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.

  6. Adaptive Pre-FFT Equalizer with High-Precision Channel Estimator for ISI Channels

    NASA Astrophysics Data System (ADS)

    Yoshida, Makoto

    We present an attractive approach for OFDM transmission using an adaptive pre-FFT equalizer, which can select an ICI reduction mode according to the channel condition, and a degenerated-inverse-matrix-based channel estimator (DIME), which uses a cyclic sinc-function matrix uniquely determined by the transmitted subcarriers. In addition to simulation results, the proposed system with an adaptive pre-FFT equalizer and DIME has been laboratory tested using a software defined radio (SDR)-based test bed. The simulation and experimental results demonstrate that the system, at a rate of more than 100 Mbit/s, can provide a bit error rate of less than 10^-3 for a fast multi-path fading channel with a moving velocity of more than 200 km/h and a delay spread of 1.9 µs (a maximum delay path of 7.3 µs) in the 5-GHz band.

  7. A methodology based on reduced complexity algorithm for system applications using microprocessors

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.; Yao, K.

    1988-01-01

    The paper considers a methodology for the analysis and design of a minimum mean-square error criterion linear system incorporating a tapped delay line (TDL), where all the full-precision multiplications in the TDL are constrained to be powers of two. A linear equalizer for a dispersive, additive-noise channel is presented. This microprocessor implementation with optimized power-of-two TDL coefficients achieves a system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
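
    To illustrate the idea of constraining TDL multiplications to powers of two, the following minimal Python sketch quantizes a set of tap gains to signed powers of two (so that each multiplication reduces to a bit shift); the tap values are hypothetical and not taken from the paper.

    ```python
    import numpy as np

    def quantize_pow2(coeffs, min_exp=-8, max_exp=0):
        """Round each tap gain to a signed power of two (nearest in the log domain)."""
        q = np.zeros_like(coeffs)
        for i, c in enumerate(coeffs):
            if c != 0:
                exp = int(np.clip(np.round(np.log2(abs(c))), min_exp, max_exp))
                q[i] = np.sign(c) * 2.0 ** exp
        return q

    def tdl_filter(x, taps):
        """Tapped-delay-line (FIR) filtering of the input sequence x."""
        return np.convolve(x, taps)[: len(x)]

    # Hypothetical full-precision equalizer taps.
    w_full = np.array([0.03, -0.12, 0.87, -0.12, 0.03])
    w_pow2 = quantize_pow2(w_full)
    print("power-of-two taps:", w_pow2)
    ```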

  8. Aniseikonia quantification: error rate of rule of thumb estimation.

    PubMed

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation is disturbingly large. This problem is greatly magnified by the time, effort and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  9. Report of the 1988 2-D Intercomparison Workshop, chapter 3

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar

    1989-01-01

    Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in the shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.

  10. Probability of Detection of Genotyping Errors and Mutations as Inheritance Inconsistencies in Nuclear-Family Data

    PubMed Central

    Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael

    2002-01-01

    Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214

  11. System for and method of freezing biological tissue

    NASA Technical Reports Server (NTRS)

    Williams, T. E.; Cygnarowicz, T. A. (Inventor)

    1978-01-01

    Biological tissue is frozen while in a polyethylene bag placed in abutting relationship against opposed walls of a pair of heaters. The bag and tissue are cooled with refrigerating gas at a time-programmed rate at least equal to the maximum cooling rate needed at any time during the freezing process. The temperature of the bag, and hence of the tissue, is compared with a time-programmed desired value for the tissue temperature to derive an error indication. The heater is activated in response to the error indication so that the temperature of the tissue follows the desired value for the time-programmed tissue temperature. The tissue is heated to compensate for excessive cooling of the tissue as a result of the cooling by the refrigerating gas. In response to the error signal, the heater is deactivated while the latent heat of fusion is being removed from the tissue as it changes phase from liquid to solid.

  12. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error (MMSE) time- or frequency-domain equalizers. Using an experimental 3-mode dual-polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.

  13. RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.

  14. Bandwidth efficient bidirectional 5 Gb/s overlapped-SCM WDM PON with electronic equalization and forward-error correction.

    PubMed

    Buset, Jonathan M; El-Sahn, Ziad A; Plant, David V

    2012-06-18

    We demonstrate an improved overlapped-subcarrier multiplexed (O-SCM) WDM PON architecture transmitting over a single feeder using cost-sensitive intensity modulation/direct detection transceivers, data re-modulation and simple electronics. Incorporating electronic equalization and Reed-Solomon forward-error correction codes helps to overcome the bandwidth limitation of a remotely seeded reflective semiconductor optical amplifier (RSOA)-based ONU transmitter. The O-SCM architecture yields greater spectral efficiency and higher bit rates than many other SCM techniques while maintaining resilience to upstream impairments. We demonstrate full-duplex 5 Gb/s transmission over 20 km and analyze BER performance as a function of transmitted and received power. The architecture provides flexibility to network operators by relaxing common design constraints and enabling full-duplex operation at BER ≈ 10^-10 over a wide range of OLT launch powers from 3.5 to 8 dBm.

  15. Pitfalls of inferring annual mortality from inspection of published survival curves.

    PubMed

    Singer, R B

    1994-01-01

    In many follow-up (FU) articles currently published, results are given primarily in the form of graphs of survival curves, rather than in the form of life table data. Sometimes the authors may comment on the slope of the survival curve as though it were equal to the annual mortality rate (after reversal of the minus sign to a plus sign). Even if no comment of this sort is made, medical directors and underwriters may be tempted to think along similar lines in trying to interpret the significance of the survival curve in terms of mortality. However, it is a very serious error of life table methodology to conceive of the mortality rate as equal to the negative slope of the survival curve. The nature of the error is demonstrated in this article. An annual mortality rate derived from the survival curve actually depends on two variables: a quotient with the negative slope (sign reversed), ΔP/Δt, as the numerator, and the survival rate, P, itself as the denominator. The implications of this relationship are discussed. If there are two "parallel" survival curves with the same slope at a given time duration, the lower curve will have a higher mortality rate than the upper curve. A constant slope with increasing duration means that the annual mortality rate also increases with duration. Some characteristics of high initial mortality are also discussed, together with their relation to different units of FU time. (ABSTRACT TRUNCATED AT 250 WORDS)
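
    The point can be stated compactly: with P(t) the survival proportion at duration t, the annual mortality rate implied by the curve is approximately

    q(t) \approx \frac{-\,\Delta P/\Delta t}{P(t)} = \frac{P(t) - P(t+1)}{P(t)},

    so two curves with the same slope but different levels of P imply different mortality rates, and a constant slope implies a mortality rate that rises as P falls.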

  16. Control of adaptive optic element displacement with the help of a magnetic rheology drive

    NASA Astrophysics Data System (ADS)

    Deulin, Eugeni A.; Mikhailov, Valeri P.; Sytchev, Victor V.

    2000-10-01

    The control system for the adaptive optics of a large segmented astronomical telescope was designed and tested. The dynamic model and the amplitude-frequency analysis of the new magnetic rheology (MR) drive are presented. The loop-controlled drive consists of a hydrostatic carrier, an MR hydraulic loop control system, an elastic thin-wall seal, and a stainless seal, united in a single three-coordinate manipulator. This combination ensures a small positioning error δφ.

  17. A Two-Stage Approach for Improving the Convergence of Least-Mean-Square Adaptive Decision-Feedback Equalizers in the Presence of Severe Narrowband Interference

    NASA Astrophysics Data System (ADS)

    Batra, Arun; Zeidler, James R.; Beex, A. A. Louis

    2007-12-01

    It has previously been shown that a least-mean-square (LMS) decision-feedback filter can mitigate the effect of narrowband interference (L.-M. Li and L. Milstein, 1983). An adaptive implementation of the filter was shown to converge relatively quickly for mild interference. It is shown here, however, that in the case of severe narrowband interference, the LMS decision-feedback equalizer (DFE) requires a very large number of training symbols for convergence, making it unsuitable for some types of communication systems. This paper investigates the introduction of an LMS prediction-error filter (PEF) as a prefilter to the equalizer and demonstrates that it reduces the convergence time of the two-stage system by as much as two orders of magnitude. It is also shown that the steady-state bit-error rate (BER) performance of the proposed system is still approximately equal to that attained in steady state by the LMS DFE alone. Finally, it is shown that the two-stage system can be implemented without the use of training symbols. This two-stage structure lowers the complexity of the overall system by reducing the number of filter taps that need to be adapted, while incurring a slight loss in the steady-state BER.
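
    As a rough illustration of the first stage, the sketch below implements a basic LMS prediction-error filter that whitens a strong sinusoidal interferer; the filter order, step size and signal model are illustrative assumptions, not parameters from the paper.

    ```python
    import numpy as np

    def lms_pef(x, order=8, mu=0.01):
        """LMS prediction-error filter: predicts x[n] from past samples and
        outputs the prediction error, which suppresses narrowband components."""
        w = np.zeros(order)
        e = np.zeros(len(x))
        for n in range(order, len(x)):
            past = x[n - order:n][::-1]      # most recent sample first
            e[n] = x[n] - w @ past           # prediction error (PEF output)
            w += mu * e[n] * past            # LMS update
        return e, w

    # Illustrative use: BPSK symbols corrupted by a strong sinusoidal interferer.
    rng = np.random.default_rng(0)
    n = np.arange(4000)
    symbols = rng.choice([-1.0, 1.0], size=n.size)
    received = symbols + 5.0 * np.cos(2 * np.pi * 0.12 * n) + 0.1 * rng.standard_normal(n.size)
    whitened, _ = lms_pef(received)
    ```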

  18. The value of Tablets as reading aids for individuals with central visual field loss: an evaluation of eccentric reading with static and scrolling text.

    PubMed

    Walker, Robin; Bryan, Lauren; Harvey, Hannah; Riazi, Afsane; Anderson, Stephen J

    2016-07-01

    Technological devices such as smartphones and tablets are widely available and increasingly used as visual aids. This study evaluated the use of a novel app for tablets (MD_evReader) developed as a reading aid for individuals with a central field loss resulting from macular degeneration. The MD_evReader app scrolls text as single lines (similar to a news ticker) and is intended to enhance reading performance using the eccentric viewing technique by both reducing the demands on the eye movement system and minimising the deleterious effects of perceptual crowding. Reading performance with scrolling text was compared with reading static sentences, also presented on a tablet computer. Twenty-six people with low vision (diagnosis of macular degeneration) read static or dynamic text (scrolled from right to left), presented as a single line at high contrast on a tablet device. Reading error rates and comprehension were recorded for both text formats, and the participant's subjective experience of reading with the app was assessed using a simple questionnaire. The average reading speed for static and dynamic text was not significantly different and equal to or greater than 85 words per minute. The comprehension scores for both text formats were also similar, equal to approximately 95% correct. However, reading error rates were significantly (p = 0.02) less for dynamic text than for static text. The participants' questionnaire ratings of their reading experience with the MD_evReader were highly positive and indicated a preference for reading with this app compared with their usual method. Our data show that reading performance with scrolling text is at least equal to that achieved with static text and in some respects (reading error rate) is better than static text. Bespoke apps informed by an understanding of the underlying sensorimotor processes involved in a cognitive task such as reading have excellent potential as aids for people with visual impairments. © 2016 The Authors Ophthalmic and Physiological Optics published by John Wiley & Sons Ltd on behalf of College of Optometrists.

  19. Adaptive 84.44-190 Mbit/s phosphor-LED wireless communication utilizing no blue filter at practical transmission distance.

    PubMed

    Yeh, C H; Chow, C W; Chen, H Y; Chen, J; Liu, Y L

    2014-04-21

    We propose and experimentally demonstrate a white-light phosphor-LED visible light communication (VLC) system with an adaptive 84.44 to 190 Mbit/s 16-quadrature-amplitude-modulation (QAM) orthogonal-frequency-division-multiplexing (OFDM) signal utilizing a bit-loading method. Here, an optimized analog pre-equalization design is employed at the LED transmitter (Tx) side and no blue filter is used at the Rx side. Hence, the ~1 MHz modulation bandwidth of the phosphor-LED can be extended to 30 MHz. In addition, measured bit error rates (BERs) below 3.8 × 10^-3 [the forward error correction (FEC) threshold] are achieved at the different measured data rates over practical transmission distances of 0.75 to 2 m.

  20. Systems Issues Pertaining to Holographic Optical Data Storage in Thick Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Oezcan, Meric; Smithey, Daniel T.; Crew, Marshall; Lau, Sonie (Technical Monitor)

    1998-01-01

    The optical data storage capacity and raw bit-error rate achievable with thick photochromic bacteriorhodopsin (BR) films are investigated for sequential recording and readout of angularly- and shift-multiplexed digital holograms inside a thick blue-membrane D85N BR film. We address the determination of an exposure schedule that produces equal diffraction efficiencies among each of the multiplexed holograms. This exposure schedule is determined by numerical simulations of the holographic recording process within the BR material, and maximizes the total grating strength. We also experimentally measure the shift selectivity and compare the results to theoretical predictions. Finally, we evaluate the bit-error rate of a single hologram, and of multiple holograms stored within the film.

  1. Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory

    NASA Astrophysics Data System (ADS)

    Yan, Daqin; Wang, Fuzhong; Wang, Shuo

    2017-12-01

    Binary differential phase-shift keying (2DPSK) signals are mainly used for high speed data transmission. However, the bit error rate of a digital signal receiver is high in poor channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, aimed at reducing the bit error rate of coherently demodulated 2DPSK signals. According to the theory of SR, a nonlinear receiver model is established, which is used to receive 2DPSK signals under small signal-to-noise ratio (SNR) circumstances (between -15 dB and 5 dB), and compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the nonlinear system model based on SR shows a significant decline compared to the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
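
    The abstract does not specify the receiver model; a common choice in SR-based detection work is the overdamped bistable system dx/dt = a*x - b*x^3 + s(t), sketched below with purely illustrative parameters and signal (an assumption for illustration, not the authors' model).

    ```python
    import numpy as np

    def bistable_sr(signal, dt=1e-3, a=1.0, b=1.0):
        """Euler integration of the overdamped bistable system
        dx/dt = a*x - b*x**3 + s(t), a canonical stochastic-resonance model."""
        x = np.zeros(len(signal))
        for n in range(1, len(signal)):
            x[n] = x[n - 1] + dt * (a * x[n - 1] - b * x[n - 1] ** 3 + signal[n - 1])
        return x

    # Illustrative use: a weak bipolar baseband waveform buried in noise.
    rng = np.random.default_rng(1)
    t = np.arange(0, 2.0, 1e-3)
    weak_signal = 0.3 * np.sign(np.sin(2 * np.pi * 5 * t))
    output = bistable_sr(weak_signal + rng.standard_normal(t.size))
    ```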

  2. Type I error probabilities based on design-stage strategies with applications to noninferiority trials.

    PubMed

    Rothmann, Mark

    2005-01-01

    When testing the equality of means from two different populations, a t-test or a large-sample normal test is typically performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and for procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials, where the use of a placebo is unethical.

  3. Dual processing and diagnostic errors.

    PubMed

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these examples remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reductions in error rates.

  4. A biometric identification system based on eigenpalm and eigenfinger features.

    PubMed

    Ribaric, Slobodan; Fratric, Ivan

    2005-11-01

    This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).

  5. Indoor visible light communication with smart lighting technology

    NASA Astrophysics Data System (ADS)

    Das Barman, Abhirup; Halder, Alak

    2017-02-01

    The performance of an indoor visible-light communication system utilizing energy-efficient white light from 2D LED arrays is investigated. Enabled by recent advances in LED technology, IEEE 802.15.7 standardizes high-data-rate visible light communication and advocates colour shift keying (CSK) modulation to overcome flicker and to support dimming. Voronoi segmentation is employed for decoding the N-CSK constellation, which has superior performance compared to other existing decoding methods. The two chief performance-degrading effects, inter-symbol interference and LED nonlinearity, are jointly mitigated using LMS post-equalization at the receiver, which improves the symbol error rate (SER) performance and increases the field of view of the receiver. It is found that LMS post-equalization at a symbol rate of 250 MHz offers a 7 dB SNR improvement at an SER of 10^-6.

  6. Social contact patterns can buffer costs of forgetting in the evolution of cooperation.

    PubMed

    Stevens, Jeffrey R; Woike, Jan K; Schooler, Lael J; Lindner, Stefan; Pachur, Thorsten

    2018-06-13

    Analyses of the evolution of cooperation often rely on two simplifying assumptions: (i) individuals interact equally frequently with all social network members and (ii) they accurately remember each partner's past cooperation or defection. Here, we examine how more realistic, skewed patterns of contact, in which individuals interact primarily with only a subset of their network's members, influence cooperation. In addition, we test whether skewed contact patterns can counteract the decrease in cooperation caused by memory errors (i.e. forgetting). Finally, we compare two types of memory error that vary in whether forgotten interactions are replaced with random actions or with actions from previous encounters. We use evolutionary simulations of repeated prisoner's dilemma games that vary agents' contact patterns, forgetting rates and types of memory error. We find that highly skewed contact patterns foster cooperation and also buffer the detrimental effects of forgetting. The type of memory error used also influences cooperation rates. Our findings reveal previously neglected but important roles of contact pattern, type of memory error and the interaction of contact pattern and memory on cooperation. Although cognitive limitations may constrain the evolution of cooperation, social contact patterns can counteract some of these constraints. © 2018 The Author(s).

  7. 45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.

    PubMed

    Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile

    2012-07-30

    In this paper a low complexity and energy efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits a net coding gain of 7.06 and 9.62 dB for post forward error correction bit error rates of 10^-7 and 10^-12 for the long block length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash type analog-to-digital converter (ADC), in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672, 504), (672, 588), and (1440, 1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.

  8. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formula. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  9. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (the equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate three-class task performance. However, it also limits the generality of the resulting model, because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that, by making assumptions for both the MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  10. EMG Versus Torque Control of Human-Machine Systems: Equalizing Control Signal Variability Does not Equalize Error or Uncertainty.

    PubMed

    Johnson, Reva E; Kording, Konrad P; Hargrove, Levi J; Sensinger, Jonathon W

    2017-06-01

    In this paper we asked the question: if we artificially raise the variability of torque control signals to match that of EMG, do subjects make similar errors and have similar uncertainty about their movements? We answered this question using two experiments in which subjects used three different control signals: torque, torque+noise, and EMG. First, we measured error on a simple target-hitting task in which subjects received visual feedback only at the end of their movements. We found that even when the signal-to-noise ratio was equal across EMG and torque+noise control signals, EMG resulted in larger errors. Second, we quantified uncertainty by measuring the just-noticeable difference of a visual perturbation. We found that for equal errors, EMG resulted in higher movement uncertainty than both torque and torque+noise. The differences suggest that performance and confidence are influenced by more than just the noisiness of the control signal, and suggest that other factors, such as the user's ability to incorporate feedback and develop accurate internal models, also have significant impacts on the performance and confidence of a person's actions. We theorize that users have difficulty distinguishing between random and systematic errors for EMG control, and future work should examine in more detail the types of errors made with EMG control.

  11. Performance of the hybrid MLPNN based VE (hMLPNN-VE) for the nonlinear PMR channels

    NASA Astrophysics Data System (ADS)

    Wongsathan, Rati; Phakphisut, Watid; Supnithi, Pornchai

    2018-05-01

    This paper proposes a hybrid of a multilayer perceptron neural network (MLPNN) and a Volterra equalizer (VE), denoted hMLPNN-VE, for nonlinear perpendicular magnetic recording (PMR) channels. The proposed detector integrates the nonlinear product terms of the delayed readback signals generated by the VE into the nonlinear processing of the MLPNN. Detection performance is compared in terms of the tradeoff among bit error rate (BER), complexity and reliability for a nonlinear Volterra channel at high normalized recording density. The proposed hMLPNN-VE outperforms the MLPNN-based equalizer (MLPNNE), the VE, and the conventional partial response maximum likelihood (PRML) detector.

  12. TRAC based sensing for autonomous rendezvous

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.; Monford, Leo

    1991-01-01

    The Targeting Reflective Alignment Concept (TRAC) sensor is to be used in an effort to support an Autonomous Rendezvous and Docking (AR&D) flight experiment. The TRAC sensor uses a fixed-focus, fixed-iris CCD camera and a target that is a combination of active and passive components. The system experiment is anticipated to fly in 1994 using two Commercial Experiment Transporters (COMETs). The requirements for the sensor are: bearing error less than or equal to 0.075 deg; bearing error rate less than 0.3 deg/sec; attitude error less than 0.5 deg; and attitude rate error less than 2.0 deg/sec. The range requirement depends on the range and the range rate of the vehicle. The active component of the target is several 'kilo-bright' LEDs that can emit 2500 millicandela with 40 milliwatts of input power. Flashing the lights in a known pattern eliminates background illumination. The system should be able to support rendezvous from 300 meters all the way to capture. A question that arose during the presentation: what are the lifetime of the LEDs and their sensitivity to radiation? The LEDs should be manufactured to Military Specifications, coated with silicon dioxide, and all other space-qualified precautions should be taken. The LEDs will not be on all the time, so they should easily last the two-year mission.

  13. The interval testing procedure: A general framework for inference in functional data analysis.

    PubMed

    Pini, Alessia; Vantini, Simone

    2016-09-01

    We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, or equality of the mean function of a functional population to a reference. ITP involves three steps: (i) the representation of data on a (possibly high-dimensional) functional basis; (ii) the test of each possible set of consecutive basis coefficients; (iii) the computation of the adjusted p-values associated with each basis component, by means of a new strategy proposed here. We define a new type of error control, the interval-wise control of the family-wise error rate, particularly suited for functional data. We show that ITP is provided with such a control. A simulation study comparing ITP with other testing procedures is reported. ITP is then applied to the analysis of hemodynamical features involved with cerebral aneurysm pathology. ITP is implemented in the fdatest R package. © 2016, The International Biometric Society.

  14. Enhanced intercarrier interference mitigation based on encoded bit-sequence distribution inside optical superchannels

    NASA Astrophysics Data System (ADS)

    Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero

    2016-10-01

    In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation which allows bit error correction in scenarios of non-flat spectra across the subcarriers composing the multicarrier system and sub-Nyquist carrier spacing. We propose a hybrid ICI mitigation technique which exploits the advantages of signal equalization at both levels: the physical level, for any digital and analog pulse shaping, and the bit-data level, with its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a non-data-aided equalizer applied to each optical subcarrier, and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers, regardless of any prior subchannel quality assessment as performed in orthogonal frequency-division multiplexing modulations for the implementation of the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit error rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase shift keying Nyquist-wavelength division multiplexing system. After the ICI mitigation, back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz and roll-off values of 0.1 and 0.4, respectively.

  15. Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.

    PubMed

    Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay

    2015-12-01

    In this paper, we propose a new blind learning algorithm, namely the Benveniste-Goursat input-output decision (BG-IOD) algorithm, to enhance the convergence performance of neural network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating the system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER, provided the input SER is below a threshold. Then, the BG soft-switching technique is employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as stochastic quadratic distance and the dual mode constant modulus algorithm, in terms of both convergence performance and SER performance, for nonlinear equalization.

  16. Contribution of stimulus attributes to errors in duration and distance judgments--a developmental study.

    PubMed

    Matsuda, F; Lan, W C; Tanimura, R

    1999-02-01

    In Matsuda's 1996 study, 4- to 11-yr.-old children (N = 133) watched two cars running on two parallel tracks on a CRT display and judged whether their durations and distances were equal and, if not, which was larger. In the present paper, the relative contributions of the four critical stimulus attributes (whether temporal starting points, temporal stopping points, spatial starting points, and spatial stopping points were the same or different between two cars) to the production of errors were quantitatively estimated based on the data for rates of errors obtained by Matsuda. The present analyses made it possible not only to understand numerically the findings about qualitative characteristics of the critical attributes described by Matsuda, but also to add more detailed findings about them.

  17. Servo control booster system for minimizing following error

    DOEpatents

    Wise, W.L.

    1979-07-26

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.

  18. Individual identification via electrocardiogram analysis.

    PubMed

    Fratini, Antonio; Sansone, Mario; Bifulco, Paolo; Cesarelli, Mario

    2015-08-14

    During the last decade the use of ECG recordings in biometric recognition studies has increased. ECG characteristics make it suitable for subject identification: it is unique, present in all living individuals, and hard to forge. However, in spite of the great number of approaches found in the literature, no agreement exists on the most appropriate methodology. This study aimed at providing a survey of the techniques used so far in ECG-based human identification. Specifically, a pattern recognition perspective is proposed here, providing a unifying framework to appreciate previous studies and, hopefully, guide future research. We searched for papers on the subject from the earliest available date using relevant electronic databases (Medline, IEEEXplore, Scopus, and Web of Knowledge). The following terms were used in different combinations: electrocardiogram, ECG, human identification, biometric, authentication and individual variability. The electronic sources were last searched on 1st March 2015. In our selection we included published research in peer-reviewed journals, book chapters and conference proceedings. The search was performed for English language documents. 100 pertinent papers were found. The number of subjects involved in the journal studies ranges from 10 to 502, age from 16 to 86, and male and female subjects are generally present. The number of analysed leads varies, as do the recording conditions. Identification performance differs widely, as does the verification rate. Many studies refer to publicly available databases (the Physionet ECG database repository) while others rely on proprietary recordings, making them difficult to compare. As a measure of overall accuracy we computed a weighted average of the identification rate and the equal error rate in authentication scenarios. The identification rate was 94.95% while the equal error rate was 0.92%. Biometric recognition is a mature field of research. Nevertheless, the use of physiological signal features, such as ECG traits, needs further improvement. ECG features have the potential to be used in daily activities such as access control and patient handling as well as in wearable electronics applications. However, some barriers still limit its growth. Further analysis should be addressed to the use of single lead recordings and the study of features which are not dependent on the recording sites (e.g. fingers, hand palms). Moreover, it is expected that new techniques will be developed using fiducial and non-fiducial based features in order to catch the best of both approaches. ECG recognition in pathological subjects is also worthy of additional investigation.

  19. Closed-form expression of one-tap normalized LMS carrier phase recovery in optical communication systems

    NASA Astrophysics Data System (ADS)

    Xu, Tianhua; Jacobsen, Gunnar; Popov, Sergei; Li, Jie; Liu, Tiegen; Zhang, Yimo

    2016-10-01

    The performance of long-haul high speed coherent optical fiber communication systems is significantly degraded by laser phase noise and equalization enhanced phase noise (EEPN). In this paper, the analysis of the one-tap normalized least-mean-square (LMS) carrier phase recovery (CPR) is carried out and a closed-form expression is investigated for quadrature phase shift keying (QPSK) coherent optical fiber communication systems, compensating both laser phase noise and equalization enhanced phase noise. Numerical simulations have also been implemented to verify the theoretical analysis. It is found that the one-tap normalized least-mean-square algorithm gives the same analytical expression for predicting CPR bit-error-rate (BER) floors as the traditional differential carrier phase recovery, when both the laser phase noise and the equalization enhanced phase noise are taken into account.
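
    As background, a decision-directed one-tap normalized LMS carrier phase recovery for QPSK can be sketched as below; the step size, noise model and phase-noise strength are illustrative assumptions, and the sketch omits the EEPN analysis that is the subject of the paper.

    ```python
    import numpy as np

    QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

    def one_tap_nlms_cpr(r, mu=0.1, eps=1e-12):
        """Decision-directed one-tap normalized LMS carrier phase recovery:
        a single complex tap w tracks exp(-j*phase) of the incoming samples."""
        w = 1.0 + 0.0j
        y = np.empty_like(r)
        for n in range(len(r)):
            y[n] = w * r[n]                                  # de-rotated sample
            d = QPSK[np.argmin(np.abs(QPSK - y[n]))]         # nearest QPSK symbol
            e = d - y[n]                                     # decision-directed error
            w += mu * e * np.conj(r[n]) / (np.abs(r[n]) ** 2 + eps)
        return y

    # Illustrative use: QPSK symbols with slowly drifting (Wiener) phase noise.
    rng = np.random.default_rng(2)
    tx = QPSK[rng.integers(0, 4, 20000)]
    phase = np.cumsum(0.01 * rng.standard_normal(tx.size))
    rx = tx * np.exp(1j * phase) + 0.05 * (rng.standard_normal(tx.size)
                                           + 1j * rng.standard_normal(tx.size))
    recovered = one_tap_nlms_cpr(rx)
    ```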

  20. Gigabit free-space multi-level signal transmission with a mid-infrared quantum cascade laser operating at room temperature.

    PubMed

    Pang, Xiaodan; Ozolins, Oskars; Schatz, Richard; Storck, Joakim; Udalcovs, Aleksejs; Navarro, Jaime Rodrigo; Kakkar, Aditya; Maisons, Gregory; Carras, Mathieu; Jacobsen, Gunnar; Popov, Sergei; Lourdudoss, Sebastian

    2017-09-15

    Gigabit free-space transmissions are experimentally demonstrated with a quantum cascade laser (QCL) emitting at a mid-wavelength infrared of 4.65 μm, and a commercial infrared photovoltaic detector. The QCL, operating at room temperature, is directly modulated using on-off keying and, for the first time to the best of our knowledge, four- and eight-level pulse amplitude modulations (PAM-4, PAM-8). By applying pre- and post-digital equalizations, we achieve up to a 3 Gbit/s line data rate in all three modulation configurations with a bit error rate performance below the 7% overhead hard-decision forward error correction limit of 3.8 × 10^-3. The proposed transmission link also shows stable operational performance in the lab environment.

  1. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Study of a co-designed decision feedback equalizer, deinterleaver, and decoder

    NASA Technical Reports Server (NTRS)

    Peile, Robert E.; Welch, Loyd

    1990-01-01

    A technique that promises better quality data from band limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have caused considerable advances in the state of communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate the ISI whereas coding schemes are used to incorporate error-correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of those of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that the error propagation in the DFE can be reduced with the help of the error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by co-designing the equalizer and decoder. Although only systems with time-invariant channels and a simple DFE with linear filters were considered, the technique is fairly general and can easily be modified for more sophisticated equalizers to obtain even larger gains.

  3. Linear time-dependent reference intervals where there is measurement error in the time variable-a parametric approach.

    PubMed

    Gillard, Jonathan

    2015-12-01

    This article re-examines parametric methods for the calculation of time specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects. © The Author(s) 2011.

  4. Emergency Multiengine Aircraft System for Lateral Control Using Differential Thrust Control of Wing Engines

    NASA Technical Reports Server (NTRS)

    Burken, John J. (Inventor); Burcham, Frank W., Jr. (Inventor); Bull, John (Inventor)

    2000-01-01

    An emergency flight control system for lateral control using only differential engine thrust modulation of multiengine aircraft is disclosed; its development is currently underway. The multiengine aircraft has at least two engines laterally displaced to the left and right from the axis of the aircraft. A heading angle command ψc is to be tracked. By continually sensing the heading angle ψ of the aircraft and computing a heading error signal ψe as a function of the difference between the heading angle command ψc and the sensed heading angle ψ, a track control signal is developed with compensation as a function of the sensed bank angle φ, bank angle rate (or roll rate p), yaw rate τ, and true velocity, producing an aircraft thrust control signal ATCψ(L,R). The thrust control signal is differentially applied to the left and right engines, with equal amplitude and opposite sign, such that a negative sign is applied to the control signal on the side of the aircraft toward which a turn is required to reduce the error signal, until the heading feedback reduces the error to zero.

  5. Binarization of apodizers by adapted one-dimensional error diffusion method

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Marek; Cichocki, Tomasz; Martinez-Corral, Manuel; Andres, Pedro

    1994-10-01

    Two novel algorithms for the binarization of continuous rotationally symmetric real positive pupil filters are presented. Both algorithms are based on the 1-D error diffusion concept. The original gray-tone apodizer is substituted by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the pupils with equal-width zones give a Fraunhofer diffraction pattern more similar to that of the original continuous-tone pupil than those with equal-area zones, assuming in both cases the same resolution limit of the printing device.
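
    A minimal sketch of 1-D error diffusion over equal-width annular zones is given below; the adapted weighting and zone definitions actually used by the authors may differ, and the Gaussian apodizer profile is purely illustrative.

    ```python
    import numpy as np

    def binarize_equal_width(transmittance, n_zones=128):
        """1-D error diffusion of a rotationally symmetric apodizer sampled on
        equal-width annular zones: each zone becomes transparent (1) or opaque (0),
        and the quantization error is carried forward to the next zone."""
        r = (np.arange(n_zones) + 0.5) / n_zones     # zone-centre radii in [0, 1]
        t = transmittance(r)                         # continuous transmittance samples
        binary = np.zeros(n_zones, dtype=int)
        err = 0.0
        for i in range(n_zones):
            v = t[i] + err
            binary[i] = 1 if v >= 0.5 else 0
            err = v - binary[i]                      # diffuse residual to next zone
        return r, binary

    # Illustrative use with a (hypothetical) Gaussian apodizer profile.
    radii, mask = binarize_equal_width(lambda r: np.exp(-4.0 * r ** 2))
    ```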

  6. Advanced digital signal processing for short-haul and access network

    NASA Astrophysics Data System (ADS)

    Zhang, Junwen; Yu, Jianjun; Chi, Nan

    2016-02-01

    Digital signal processing (DSP) has recently proved to be a successful technology in high-speed and high-spectral-efficiency optical short-haul and access networks, enabling high performance based on digital equalization and compensation. In this paper, we investigate advanced DSP at the transmitter and receiver sides for signal pre-equalization and post-equalization in an optical access network. A novel DSP-based digital and optical pre-equalization scheme is proposed for bandwidth-limited high-speed short-distance communication systems, based on feedback from receiver-side adaptive equalizers such as the least-mean-squares (LMS) algorithm and the constant- or multi-modulus algorithms (CMA, MMA). Based on this scheme, we experimentally demonstrate 400GE on a single optical carrier using the highest-rate ETDM 120-GBaud PDM-PAM-4 signal, one external modulator and coherent detection. A line rate of 480 Gb/s is achieved, which accommodates a 20% forward-error correction (FEC) overhead to keep the 400-Gb/s net information rate. The performance after fiber transmission shows a large margin for both short range and metro/regional networks. We also extend the advanced DSP to short-haul optical access networks by using high-order QAMs. We propose and demonstrate a high speed multi-band CAP-WDM-PON system based on intensity modulation, direct detection and digital equalization. A hybrid modified cascaded MMA post-equalization scheme is used to equalize the multi-band CAP-mQAM signals. Using this scheme, we successfully demonstrate a 550-Gb/s high-capacity WDM-PON system with 11 WDM channels, 55 sub-bands, and 10 Gb/s per user in the downstream over 40 km of SMF.
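
    As a reference for the receiver-side adaptation that drives the pre-equalization feedback, a basic constant modulus algorithm (CMA) equalizer is sketched below; the tap count, step size and initialization are illustrative, and the actual transmitter-side digital/optical pre-equalization loop in the paper is more involved.

    ```python
    import numpy as np

    def cma_equalizer(x, n_taps=11, mu=1e-3, radius=1.0):
        """Constant modulus algorithm (CMA): adapts FIR taps so that the
        equalizer output magnitude approaches a constant radius."""
        w = np.zeros(n_taps, dtype=complex)
        w[n_taps // 2] = 1.0 + 0.0j                  # centre-spike initialization
        y = np.zeros(len(x), dtype=complex)
        for n in range(n_taps, len(x)):
            xn = x[n - n_taps:n][::-1]               # most recent sample first
            y[n] = w @ xn
            e = y[n] * (radius - np.abs(y[n]) ** 2)  # CMA error term
            w += mu * e * np.conj(xn)                # stochastic-gradient update
        return y, w
    ```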

  7. Experimental demonstration of a frequency-domain Volterra series nonlinear equalizer in polarization-multiplexed transmission.

    PubMed

    Guiomar, Fernando P; Reis, Jacklyn D; Carena, Andrea; Bosco, Gabriella; Teixeira, António L; Pinto, Armando N

    2013-01-14

    Employing 100G polarization-multiplexed quaternary phase-shift keying (PM-QPSK) signals, we experimentally demonstrate a dual-polarization Volterra series nonlinear equalizer (VSNE) applied in the frequency domain to mitigate intra-channel nonlinearities. The performance of the dual-polarization VSNE is assessed in both single-channel and wavelength-division multiplexing (WDM) scenarios, providing direct comparisons with its single-polarization version and with the widely studied back-propagation split-step Fourier (SSF) approach. In single-channel transmission, the optimum power has been increased by about 1 dB, relative to the single-polarization equalizers, and up to 3 dB over linear equalization, with a corresponding bit error rate (BER) reduction of up to 63% and 85%, respectively. Despite the impact of inter-channel nonlinearities, we show that intra-channel nonlinear equalization is still able to provide an approximately 1 dB improvement in the optimum power and a BER reduction of ~33%, considering a 66 GHz WDM grid. By means of simulation, we demonstrate that the performance of nonlinear equalization can be substantially enhanced if both optical and electrical filtering are optimized, enabling the VSNE technique to outperform its SSF counterpart at high input powers.

  8. Multiple point least squares equalization in a room

    NASA Technical Reports Server (NTRS)

    Elliott, S. J.; Nelson, P. A.

    1988-01-01

    Equalization filters designed to minimize the mean square error between a delayed version of the original electrical signal and the equalized response at a point in a room have previously been investigated. In general, such a strategy degrades the response at positions in a room away from the equalization point. A method is presented for designing an equalization filter by adjusting the filter coefficients to minimize the sum of the squares of the errors between the equalized responses at multiple points in the room and delayed versions of the original electrical signal. Such an equalization filter can give a more uniform frequency response over a greater volume of the enclosure than can the single-point equalizer described above. Computer simulation results are presented for equalizing the frequency responses from a loudspeaker to various typical ear positions, in a room with dimensions and acoustic damping typical of a car interior, using the two approaches outlined above. Adaptive filter algorithms, which can automatically adjust the coefficients of a digital equalization filter to achieve this minimization, will also be discussed.
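
    A minimal sketch of the multiple-point least-squares design described above is given below, assuming measured impulse responses from the loudspeaker to each listening position; the filter length, modeling delay, and plain least-squares solve are illustrative choices rather than the paper's exact procedure.

    ```python
    import numpy as np

    def design_multipoint_eq(impulse_responses, filt_len=256, delay=128):
        """Design a single FIR equalization filter h minimizing the sum, over
        several room positions, of the squared errors between each equalized
        response (h convolved with that position's impulse response) and a
        delayed unit impulse (a pure delay of the electrical signal)."""
        A_blocks, d_blocks = [], []
        for c in impulse_responses:
            n_out = len(c) + filt_len - 1
            A = np.zeros((n_out, filt_len))
            for j in range(filt_len):          # column j holds c shifted by j samples,
                A[j:j + len(c), j] = c         # so that A @ h == np.convolve(c, h)
            d = np.zeros(n_out)
            d[delay] = 1.0                     # target: delayed impulse
            A_blocks.append(A)
            d_blocks.append(d)
        A_all = np.vstack(A_blocks)
        d_all = np.concatenate(d_blocks)
        return np.linalg.lstsq(A_all, d_all, rcond=None)[0]

    # toy example: two synthetic decaying "room" responses (measured in practice)
    rng = np.random.default_rng(0)
    responses = [rng.standard_normal(400) * np.exp(-np.arange(400) / 80.0)
                 for _ in range(2)]
    h_eq = design_multipoint_eq(responses, filt_len=128, delay=64)
    ```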

  9. Effect of twist on single-mode fiber-optic 3 × 3 couplers

    NASA Astrophysics Data System (ADS)

    Chen, Dandan; Ji, Minning; Peng, Lei

    2018-01-01

    In the fabrication process of a 3 × 3 fused tapered coupler, the three fibers are usually twisted so that they remain in close contact. The effect of twist on 3 × 3 fused tapered couplers is investigated in this paper. It is found that, although a linear 3 × 3 coupler can in theory achieve an equal power-splitting ratio by twisting through a specific angle, it is difficult to fabricate in practice because the twist angle and the coupler length must be determined in advance. An equilateral 3 × 3 coupler, by contrast, can not only achieve an approximately equal power-splitting ratio in theory but can also be fabricated simply by controlling the elongation length. The effect of twist on the equilateral 3 × 3 coupler lies in the relationship between the equal-ratio error and the twist angle: the larger the twist angle, the larger the equal-ratio error may be. The twist angle should usually be no larger than 90° over one coupling-period length in order to keep the equal-ratio error small enough. The simulation results agree well with the experimental data.

  10. Explaining errors in children's questions.

    PubMed

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  11. Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches

    NASA Technical Reports Server (NTRS)

    Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between satellite averages based on occasional observation by satellite systems and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
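
    A minimal sketch of the "resampling by shifts" idea is given below, assuming a dense, box-averaged rain-rate series is already available; the revisit interval and the toy rain series are placeholders rather than the radar product used in the study.

    ```python
    import numpy as np

    def sampling_error_by_shifts(rain, dt_minutes=15, revisit_hours=3):
        """Estimate the rms sampling error of a satellite-like revisit schedule.

        rain : 1-D array of box-averaged rain rates at dt_minutes resolution
        For every possible phase (shift) of the revisit schedule, the mean of
        the subsampled series is compared with the mean of the full series;
        the rms of those differences is the sampling error estimate.
        """
        step = int(revisit_hours * 60 / dt_minutes)   # samples between overpasses
        true_mean = rain.mean()
        diffs = []
        for shift in range(step):                     # every possible overpass phase
            diffs.append(rain[shift::step].mean() - true_mean)
        return np.sqrt(np.mean(np.square(diffs)))

    # toy example: intermittent rain over 30 days at 15-min resolution
    rng = np.random.default_rng(0)
    rain = rng.random(30 * 24 * 4) * (rng.random(30 * 24 * 4) < 0.1)
    rms_err = sampling_error_by_shifts(rain, revisit_hours=3)
    ```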

  12. Spaceflight Ka-Band High-Rate Radiation-Hard Modulator

    NASA Technical Reports Server (NTRS)

    Jaso, Jeffery M.

    2011-01-01

    A document discusses the creation of a Ka-band modulator developed specifically for the NASA/GSFC Solar Dynamics Observatory (SDO). This flight design consists of a high-bandwidth, Quadriphase Shift Keying (QPSK) vector modulator with radiation-hardened, high-rate driver circuitry that receives I and Q channel data. The radiation-hard design enables SDO's Ka-band communications downlink system to transmit 130 Mbps (300 Msps after data encoding) of science instrument data to the ground system continuously throughout the mission's minimum life of five years. The low error vector magnitude (EVM) of the modulator lowers the implementation loss of the transmitter in which it is used, thereby increasing the overall communication system link margin. The modulator comprises a component within the SDO transmitter, and meets the following specifications over a 0 to 40 °C operational temperature range: QPSK/OQPSK modulator, 300-Msps symbol rate, 26.5-GHz center frequency, error vector magnitude less than or equal to 10 percent rms, and compliance with the NTIA (National Telecommunications and Information Administration) spectral mask.

  13. High-speed phosphor-LED wireless communication system utilizing no blue filter

    NASA Astrophysics Data System (ADS)

    Yeh, C. H.; Chow, C. W.; Chen, H. Y.; Chen, J.; Liu, Y. L.; Wu, Y. F.

    2014-09-01

    In this paper, we propose and investigate an adaptive 84.44 to 190 Mb/s phosphor-LED visible light communication (VLC) system at a practical transmission distance. Here, we utilize orthogonal-frequency-division-multiplexing quadrature-amplitude-modulation (OFDM-QAM) with a power/bit-loading algorithm in the proposed VLC system. In the experiment, an optimal analog pre-equalization design is applied at the LED-Tx side and no blue filter is used at the Rx side, extending the modulation bandwidth from 1 MHz to 30 MHz. In addition, the corresponding free-space transmission lengths are between 75 cm and 2 m for the various data rates of the proposed VLC system, and measured bit error rates (BERs) of < 3.8×10-3 [the forward error correction (FEC) limit] are obtained at the different transmission lengths and data rates. Finally, we believe that our proposed scheme could be an alternative VLC implementation at practical distances, supporting data rates beyond 100 Mb/s, using a commercially available LED and PD (without optical blue filtering) in a compact package.

  14. Iterative Overlap FDE for Multicode DS-CDMA

    NASA Astrophysics Data System (ADS)

    Takeda, Kazuaki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Recently, a new frequency-domain equalization (FDE) technique, called overlap FDE, that requires no guard interval (GI) insertion was proposed. However, the residual inter/intra-block interference (IBI) cannot be completely removed. In addition, for multicode direct-sequence code division multiple access (DS-CDMA), the presence of residual interchip interference (ICI) after FDE distorts the orthogonality among the spreading codes. In this paper, we propose an iterative overlap FDE for multicode DS-CDMA to suppress both the residual IBI and the residual ICI. In the iterative overlap FDE, joint minimum mean square error (MMSE)-FDE and ICI cancellation is repeated a sufficient number of times. The bit error rate (BER) performance with the iterative overlap FDE is evaluated by computer simulation.
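
    The core operation that the iterative scheme repeats is a one-tap MMSE equalization in the frequency domain. The sketch below shows that single step for one block, without the overlap processing or the ICI-cancellation stage; the channel estimate and noise variance are assumed known.

    ```python
    import numpy as np

    def mmse_fde(rx_block, channel_freq, noise_var, signal_var=1.0):
        """One-tap MMSE frequency-domain equalization of one received block.

        rx_block     : time-domain received block (length N)
        channel_freq : N-point frequency response of the channel
        The MMSE weight per frequency bin is H* / (|H|^2 + noise_var/signal_var).
        """
        R = np.fft.fft(rx_block)
        W = np.conj(channel_freq) / (np.abs(channel_freq) ** 2
                                     + noise_var / signal_var)
        return np.fft.ifft(W * R)      # equalized time-domain block
    ```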

  15. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
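
    The specific test statistics derived in the paper are not reproduced here; as a hedged illustration of how such Type-I error rates can be checked by simulation, the sketch below uses a generic likelihood-ratio test built from the same two sampling schemes (geometric/negative binomial before the first success, binomial after).

    ```python
    import numpy as np
    from scipy.stats import chi2

    def lrt_pvalue(k_failures, x_success, n_trials):
        """Generic likelihood-ratio test of H0: the parameter before the first
        success (k failures observed under geometric sampling) equals the
        parameter after it (x successes in n binomial trials)."""
        eps = 1e-9
        p1 = np.clip(1.0 / (k_failures + 1.0), eps, 1 - eps)              # geometric MLE
        p2 = np.clip(x_success / n_trials, eps, 1 - eps)                  # binomial MLE
        p0 = np.clip((1.0 + x_success) / (k_failures + 1.0 + n_trials),
                     eps, 1 - eps)                                        # pooled MLE under H0

        def loglik(pa, pb):
            return (np.log(pa) + k_failures * np.log(1 - pa)
                    + x_success * np.log(pb)
                    + (n_trials - x_success) * np.log(1 - pb))

        lr = 2.0 * (loglik(p1, p2) - loglik(p0, p0))
        return chi2.sf(max(lr, 0.0), df=1)

    # empirical Type-I error at nominal alpha = 0.05 with true p = 0.3, n = 20
    rng = np.random.default_rng(1)
    p, n, alpha, reps = 0.3, 20, 0.05, 20000
    rejections = 0
    for _ in range(reps):
        k = rng.geometric(p) - 1            # failures before the first success
        x = rng.binomial(n, p)
        rejections += lrt_pvalue(k, x, n) < alpha
    print("empirical Type-I error:", rejections / reps)
    ```

    If the asymptotic chi-squared reference is adequate at this sample size, the empirical rejection rate should land near the nominal 0.05; in small samples, deviations from that level are exactly what the paper's simulations quantify.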

  16. Experimental demonstration of a real-time PAM-4 Q-band RoF system based on CMMA equalization and interleaved RS code

    NASA Astrophysics Data System (ADS)

    Deng, Rui; Yu, Jianjun; He, Jing; Wei, Yiran

    2018-05-01

    In this paper, we experimentally demonstrated a complete real-time 4-level pulse amplitude modulation (PAM-4) Q-band radio-over-fiber (RoF) system with optical heterodyning and envelope detector (ED) down-conversion. Meanwhile, a cost-efficient real-time implementation scheme for cascaded multi-modulus algorithm (CMMA) equalization is proposed. Using the proposed scheme, CMMA equalization is applied in the system for signal recovery. In addition, to improve the transmission performance of the system, an interleaved Reed-Solomon (RS) code is applied in the real-time system. Although there is serious power impulse noise in the system, the system can still achieve a bit error rate (BER) below 1 × 10-7 after 25-km standard single-mode fiber (SSMF) transmission and 1-m wireless transmission.
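
    The paper's real-time hardware mapping of CMMA is not reproduced here; the sketch below shows one common way to write a cascaded multi-modulus tap update for a real-valued PAM-4 signal, with the two reference moduli taken from the ideal levels {-3, -1, +1, +3}. Tap count and step size are illustrative assumptions.

    ```python
    import numpy as np

    def cmma_pam4(rx, n_taps=21, mu=1e-4, a1=2.0, a2=1.0):
        """Blind cascaded multi-modulus equalization of a real-valued PAM-4
        signal with ideal levels {-3, -1, +1, +3}.

        The error is built by two cascaded modulus stages,
            e1 = |y| - a1,   e2 = |e1| - a2,
        and the taps are updated by stochastic gradient descent on e2**2.
        """
        w = np.zeros(n_taps)
        w[n_taps // 2] = 1.0
        pad = np.concatenate([np.zeros(n_taps - 1), rx])
        out = np.empty(len(rx))
        for n in range(len(rx)):
            x = pad[n:n + n_taps][::-1]
            y = np.dot(w, x)
            e1 = abs(y) - a1
            e2 = abs(e1) - a2
            w -= mu * e2 * np.sign(e1) * np.sign(y) * x   # cascaded multi-modulus update
            out[n] = y
        return out, w
    ```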

  17. On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry

    DTIC Science & Technology

    2014-06-01

    Keying (SOQPSK), bit error rate (BER), Orthogonal Frequency Division Multiplexing (OFDM), Generalized time-reversed space-time block codes (GTR-STBC) 16...Alamouti code [4]) is optimum [2]. Although OFDM is generally applied on a per-subcarrier basis in frequency-selective fading, it is not a viable...Calderbank, “Finite-length MIMO decision feedback equalization for space-time block-coded signals over multipath-fading channels,” IEEE Transactions on

  18. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    NASA Astrophysics Data System (ADS)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.

  19. Investigation of adaptive filtering and MDL mitigation based on space-time block-coding for spatial division multiplexed coherent receivers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi

    2017-07-01

    In this paper, we explored the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, where the weight matrices of the frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive least-squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm can achieve a 43.6% enhancement in convergence rate over the conventional least-mean-squares (LMS) algorithm for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM), and 64-QAM, with the respective bit-error-rates (BER) and minimum-mean-square-error (MMSE).
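
    A minimal single-channel sketch of the decision-directed RLS update underlying such a scheme is given below (the MIMO weight-matrix version applies the same recursion per frequency bin); the forgetting factor, tap count, and QPSK slicer are illustrative assumptions.

    ```python
    import numpy as np

    def qpsk_decision(y):
        """Hard decision to the nearest QPSK point (+-1 +-1j)/sqrt(2)."""
        return (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)

    def rls_dd_equalizer(rx, n_taps=11, lam=0.99, delta=0.01):
        """Decision-directed recursive least squares (RLS) equalizer.

        lam   : forgetting factor
        delta : initialization of the inverse correlation matrix, P = I/delta
        """
        w = np.zeros(n_taps, dtype=complex)
        w[n_taps // 2] = 1.0
        P = np.eye(n_taps, dtype=complex) / delta
        pad = np.concatenate([np.zeros(n_taps - 1, dtype=complex), rx])
        out = np.empty(len(rx), dtype=complex)
        for n in range(len(rx)):
            x = pad[n:n + n_taps][::-1]
            y = np.dot(np.conj(w), x)                 # equalizer output
            d = qpsk_decision(y)                      # decision-directed reference
            k = P @ x / (lam + np.conj(x) @ P @ x)    # gain vector
            e = d - y                                 # a-priori error
            w += k * np.conj(e)                       # RLS tap update
            P = (P - np.outer(k, np.conj(x) @ P)) / lam
            out[n] = y
        return out, w
    ```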

  20. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-01-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  1. Demodulation Algorithms for the Ofdm Signals in the Time- and Frequency-Scattering Channels

    NASA Astrophysics Data System (ADS)

    Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.

    2016-06-01

    We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of the signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in the time- and frequency-scattering channels. The coherent and incoherent demodulators effectively using the time scattering due to the fast fading of the signal are developed. Using computer simulation, we performed comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of the limited accuracy of estimating the communication-channel parameters, the incoherent OFDM-signal detectors with differential phase-shift keying can ensure a better bit-error-rate performance compared with the coherent OFDM-signal detectors with absolute phase-shift keying.

  2. Constrained independent component analysis approach to nonobtrusive pulse rate measurements.

    PubMed

    Tsouri, Gill R; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.

  3. Method and apparatus for communicating computer data from one point to another over a communications medium

    DOEpatents

    Arneson, Michael R [Chippewa Falls, WI; Bowman, Terrance L [Sumner, WA; Cornett, Frank N [Chippewa Falls, WI; DeRyckere, John F [Eau Claire, WI; Hillert, Brian T [Chippewa Falls, WI; Jenkins, Philip N [Eau Claire, WI; Ma, Nan [Chippewa Falls, WI; Placek, Joseph M [Chippewa Falls, WI; Ruesch, Rodney [Eau Claire, WI; Thorson, Gregory M [Altoona, WI

    2007-07-24

    The present invention is directed toward a communications channel comprising a link level protocol, a driver, a receiver, and a canceller/equalizer. The link level protocol provides logic for DC-free signal encoding and recovery as well as supporting many features including CRC error detection and message resend to accommodate infrequent bit errors across the medium. The canceller/equalizer provides equalization for destabilized data signals and also provides simultaneous bi-directional data transfer. The receiver provides bit deskewing by removing synchronization error, or skewing, between data signals. The driver provides impedance controlling by monitoring the characteristics of the communications medium, like voltage or temperature, and providing a matching output impedance in the signal driver so that fewer distortions occur while the data travels across the communications medium.

  4. Assessment of Metronidazole Susceptibility in Helicobacter pylori: Statistical Validation and Error Rate Analysis of Breakpoints Determined by the Disk Diffusion Test

    PubMed Central

    Chaves, Sandra; Gadanho, Mário; Tenreiro, Rogério; Cabrita, José

    1999-01-01

    Metronidazole susceptibility of 100 Helicobacter pylori strains was assessed by determining the inhibition zone diameters by disk diffusion test and the MICs by agar dilution and PDM Epsilometer test (E test). Linear regression analysis was performed, allowing the definition of significant linear relations, and revealed correlations of disk diffusion results with both E-test and agar dilution results (r2 = 0.88 and 0.81, respectively). No significant differences (P = 0.84) were found between MICs defined by E test and those defined by agar dilution, taken as a standard. Reproducibility comparison between E-test and disk diffusion tests showed that they are equivalent and with good precision. Two interpretative susceptibility schemes (with or without an intermediate class) were compared by an interpretative error rate analysis method. The susceptibility classification scheme that included the intermediate category was retained, and breakpoints were assessed for diffusion assay with 5-μg metronidazole disks. Strains with inhibition zone diameters less than 16 mm were defined as resistant (MIC > 8 μg/ml), those with zone diameters equal to or greater than 16 mm but less than 21 mm were considered intermediate (4 μg/ml < MIC ≤ 8 μg/ml), and those with zone diameters of 21 mm or greater were regarded as susceptible (MIC ≤ 4 μg/ml). Error rate analysis applied to this classification scheme showed occurrence frequencies of 1% for major errors and 7% for minor errors, when the results were compared to those obtained by agar dilution. No very major errors were detected, suggesting that disk diffusion might be a good alternative for determining the metronidazole sensitivity of H. pylori strains. PMID:10203543

  5. On codes with multi-level error-correction capabilities

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1987-01-01

    In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the other symbols. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.

  6. An improved adaptive interpolation clock recovery loop based on phase splitting algorithm for coherent optical communication system

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya

    2018-01-01

    A traditional clock recovery scheme achieves timing adjustment by digital interpolation, thereby recovering the sampling sequence. Building on this, an improved clock recovery architecture with joint channel equalization for coherent optical communication systems is presented in this paper. The loop differs from traditional clock recovery. To reduce the interpolation error caused by distortion in the frequency domain of the interpolator and to suppress the spectral mirroring generated by the sampling-rate change, the proposed algorithm, jointly with equalization, improves the original interpolator in the loop, applies adaptive filtering, and compensates the error of the original signals according to the equalized pre-filtered signals. The signals are then adaptively interpolated through the feedback loop. Furthermore, the phase-splitting timing recovery algorithm is adopted in this paper. The timing error is calculated by the improved algorithm when there is no transition between adjacent symbols, making the calculated timing error more accurate. Meanwhile, a carrier coarse-synchronization module is placed before the timing recovery to eliminate larger frequency-offset interference, which effectively adjusts the sampling clock phase. The simulation results in this paper show that the timing error is greatly reduced after the loop is changed. Based on the phase-splitting algorithm, the BER and MSE are better than those of the unvaried architecture. In the fiber channel, using an MQAM modulation format, after 100-km transmission over single-mode fiber, the algorithm shows better clock performance under different roll-off factors (ROFs), especially when the ROF tends to 0. When SNR values are less than 8, the BER reaches the 10-2 to 10-1 range. Furthermore, the proposed timing recovery is more suitable for situations with low SNR values.

  7. Stratospheric Observations of CH3D and HDO from ATMOS Infrared Solar Spectra: Enrichments of Deuterium in Methane and Implications for HD

    NASA Technical Reports Server (NTRS)

    Irion, F. W.; Moyer, E. J.; Gunson, M. R.; Rinsland, C. P.; Yung, Y. L.; Michelsen, H. A.; Salawitch, R. J.; Chang, A. Y.; Newchurch, M. J.; Abbas, M. M.; hide

    1996-01-01

    Stratospheric mixing ratios of CH3D from 100 mb to 17 mb (approximately 15 to 28 km) and HDO from 100 mb to 10 mb (approximately 15 to 32 km) have been inferred from high resolution solar occultation infrared spectra from the Atmospheric Trace MOlecule Spectroscopy (ATMOS) Fourier-transform interferometer. The spectra, taken on board the Space Shuttle during the Spacelab 3 and ATLAS-1, -2, and -3 missions, extend in latitude from 70 deg S to 65 deg N. We find CH3D entering the stratosphere at an average mixing ratio of (9.9 +/- 0.8) x 10(exp -10) with a D/H ratio in methane (7.1 +/- 7.4)% less than that in Standard Mean Ocean Water (SMOW) (1 sigma combined precision and systematic error). In the mid to lower stratosphere, the average lifetime of CH3D is found to be (1.19 +/- 0.02) times that of CH4, resulting in an increasing D/H ratio in methane as air 'ages' and the methane mixing ratio decreases. We find an average of (1.0 +/- 0.1) molecules of stratospheric HDO are produced for each CH3D destroyed (1 sigma combined precision and systematic error), indicating that the rate of HDO production is approximately equal to the rate of CH3D destruction. Assuming negligible amounts of deuterium in species other than HDO, CH3D and HD, this limits the possible change in the stratospheric HD mixing ratio below about 10 mb to be +/- 0.1 molecules HD created per molecule CH3D destroyed.

  8. Use of scan overlap redundancy to enhance multispectral aircraft scanner data

    NASA Technical Reports Server (NTRS)

    Lindenlaub, J. C.; Keat, J.

    1973-01-01

    Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.

  9. Adjusting for multiple prognostic factors in the analysis of randomised trials

    PubMed Central

    2013-01-01

    Background When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata, formed by all combinations of the prognostic factors (stratified analysis), when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) the best method of adjustment in terms of type I error rate and power, irrespective of the randomisation method. Methods We used simulation to (1) determine if a stratified analysis is necessary after stratified randomisation, and (2) to compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome. Results Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal number of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes, however treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size. PMID:23898993

  10. One-dimensional error-diffusion technique adapted for binarization of rotationally symmetric pupil filters

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Marek; Martínez-Corral, Manuel; Cichocki, Tomasz; Andrés, Pedro

    1995-02-01

    Two novel algorithms for the binarization of continuous rotationally symmetric real and positive pupil filters are presented. Both algorithms are based on the one-dimensional error diffusion concept. In our numerical experiment an original gray-tone apodizer is substituted by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the filter with equal-width zones gives a Fraunhofer diffraction pattern more similar to that of the original gray-tone apodizer than that with equal-area zones, assuming in both cases the same resolution limit of the device used to print both filters.

  11. Experimental demonstration of an efficient hybrid equalizer for short-reach optical SSB systems

    NASA Astrophysics Data System (ADS)

    Zhu, Mingyue; Ying, Hao; Zhang, Jing; Yi, Xingwen; Qiu, Kun

    2018-02-01

    We propose an efficient enhanced hybrid equalizer combining feed-forward equalization (FFE) with a modified Volterra filter to mitigate the linear and nonlinear interference in short-reach optical single-sideband (SSB) systems. The optical SSB signal is generated by a relatively low-cost dual-drive Mach-Zehnder modulator (DDMZM). The two driving signals are a pair of Hilbert signals with Nyquist pulse-shaped four-level pulse amplitude modulation (NPAM-4). After fiber transmission, neighboring received symbols are strongly correlated due to the pulse spreading in the time domain caused by chromatic dispersion (CD). At the receiver equalization stage, the FFE followed by the higher-order terms of the modified Volterra filter, which uses the forward and backward neighboring symbols to construct kernels with strong correlation, is used as an enhanced hybrid equalizer to mitigate the inter-symbol interference (ISI) and the nonlinear distortion due to the interaction of CD and square-law detection. We experimentally demonstrate transmission of a 40-Gb/s optical SSB NPAM-4 signal over 80 km of standard single-mode fiber (SSMF) with a bit error rate (BER) of 7.59 × 10-4.
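
    A minimal sketch of a hybrid FFE-plus-Volterra equalizer of this kind is shown below: linear feed-forward taps plus a few second-order product terms over a short memory window, adapted jointly by LMS against known symbols. The memory lengths, step sizes, and kernel selection are illustrative and do not reproduce the paper's correlation-based kernel construction.

    ```python
    import numpy as np

    def hybrid_ffe_volterra(rx, training, ffe_taps=15, v_mem=5, mu1=1e-3, mu2=1e-4):
        """Hybrid equalizer for a real-valued signal: linear feed-forward taps
        plus second-order Volterra terms x[n-i]*x[n-j] over a short memory
        window, both adapted by LMS against aligned training symbols."""
        pairs = [(i, j) for i in range(v_mem) for j in range(i, v_mem)]
        w_lin = np.zeros(ffe_taps)
        w_lin[ffe_taps // 2] = 1.0
        w_nl = np.zeros(len(pairs))
        pad = np.concatenate([np.zeros(ffe_taps - 1), rx])
        out = np.empty(len(training))
        for n, d in enumerate(training):
            x_lin = pad[n:n + ffe_taps][::-1]       # most recent sample first
            x_v = x_lin[:v_mem]                     # short window for nonlinear terms
            x_nl = np.array([x_v[i] * x_v[j] for i, j in pairs])   # 2nd-order kernels
            y = np.dot(w_lin, x_lin) + np.dot(w_nl, x_nl)
            e = d - y
            w_lin += mu1 * e * x_lin                # LMS update, linear part
            w_nl += mu2 * e * x_nl                  # LMS update, nonlinear part
            out[n] = y
        return out, w_lin, w_nl
    ```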

  12. Recovery from unusual attitudes: HUD vs. back-up display in a static F/A-18 simulator.

    PubMed

    Huber, Samuel W

    2006-04-01

    Spatial disorientation (SD) remains one of the most important causes of fatal fighter aircraft accidents. The aim of this study was to give a recommendation for the use of the head-up display (HUD) or back-up attitude directional indicator (ADI) in a state of spatial disorientation based on the respective performance in an unusual attitude recovery task. Seven fighter pilots joining a conversion course to the F/A-18 participated in this study. Flight time will be presented as range (and mean in parentheses). Total military flight experience of the subjects was 835-1759 h (1412 h). Flight time on the F/A-18 was 41-123 h (70 h). The study was performed in a fixed base F/A-18D Weapons Tactics Trainer. We tested the recovery from 11 unusual attitudes and analyzed decision time (DT), total recovery time (TRT), and error rates for the HUD or the back-up ADI. We found no differences regarding either reaction times or error rates. For the HUD we found a DT (mean +/- SD) of 1.3 +/- 0.4 s, a TRT of 9.1 +/- 4.1 s, and an error rate of 29%. For the ADI the respective values were a DT of 1.4 +/- 0.4 s, a TRT of 8.3 +/- 3.8 s, and an error rate of 27%. Unusual attitude recoveries are performed equally well using the HUD or the back-up ADI. Switching from one instrument to the other during recovery should be avoided since it would probably result in a loss of time without benefit.

  13. Using sediment 'fingerprints' to assess sediment-budget errors, north Halawa Valley, Oahu, Hawaii, 1991-92

    USGS Publications Warehouse

    Hill, B.R.; DeCarlo, E.H.; Fuller, C.C.; Wong, M.F.

    1998-01-01

    Reliable estimates of sediment-budget errors are important for interpreting sediment-budget results. Sediment-budget errors are commonly considered equal to sediment-budget imbalances, which may underestimate actual sediment-budget errors if they include compensating positive and negative errors. We modified the sediment 'fingerprinting' approach to qualitatively evaluate compensating errors in an annual (1991) fine (<63 μm) sediment budget for the North Halawa Valley, a mountainous, forested drainage basin on the island of Oahu, Hawaii, during construction of a major highway. We measured concentrations of aeolian quartz and 137Cs in sediment sources and fluvial sediments, and combined concentrations of these aerosols with the sediment budget to construct aerosol budgets. Aerosol concentrations were independent of the sediment budget, hence aerosol budgets were less likely than sediment budgets to include compensating errors. Differences between sediment-budget and aerosol-budget imbalances therefore provide a measure of compensating errors in the sediment budget. The sediment-budget imbalance equalled 25% of the fluvial fine-sediment load. Aerosol-budget imbalances were equal to 19% of the fluvial 137Cs load and 34% of the fluvial quartz load. The reasonably close agreement between sediment- and aerosol-budget imbalances indicates that compensating errors in the sediment budget were not large and that the sediment-budget imbalance is a reliable measure of sediment-budget error. We attribute at least one-third of the 1991 fluvial fine-sediment load to highway construction. Continued monitoring indicated that highway construction produced 90% of the fluvial fine-sediment load during 1992. Erosion of channel margins and attrition of coarse particles provided most of the fine sediment produced by natural processes. Hillslope processes contributed relatively minor amounts of sediment.

  14. Brain fingerprinting classification concealed information test detects US Navy military medical information with P300

    PubMed Central

    Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.

    2014-01-01

    A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941

  15. Weighted finite impulse response filter for chromatic dispersion equalization in coherent optical fiber communication systems

    NASA Astrophysics Data System (ADS)

    Zeng, Ziyi; Yang, Aiying; Guo, Peng; Feng, Lihui

    2018-01-01

    Time-domain CD equalization using a finite impulse response (FIR) filter is now a common approach in coherent optical fiber communication systems. The complex weights of the FIR taps are calculated from a truncated impulse response of the CD transfer function, and the modulus of the complex weights is constant. In our work, we take the limited bandwidth of a single-channel signal into account and propose weighted FIRs to improve the performance of CD equalization. The key to weighted FIR filters is the selection and optimization of the weighting functions. To present the performance of different types of weighted FIR filters, a square-root-raised-cosine FIR (SRRC-FIR) and a Gaussian FIR (GS-FIR) are investigated. The optimization of the square-root-raised-cosine FIR and the Gaussian FIR is made in terms of the bit error rate (BER) of QPSK and 16QAM coherent detection signals. The results demonstrate that the optimized parameters of the weighted filters are independent of the modulation format, symbol rate, and length of the transmission fiber. With the optimized weighted FIRs, the BER of the CD-equalized signal is decreased significantly. Although this paper has investigated two types of weighted FIR filters, i.e. the SRRC-FIR filter and the GS-FIR filter, the principle of weighted FIRs can also be extended to other symmetric functions such as the super-Gaussian function, the hyperbolic secant function, etc.
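
    The sketch below illustrates the weighted-FIR idea: the usual truncated-impulse-response CD-compensating taps are computed first and then multiplied by an amplitude window across the tap index. A raised-cosine window is used here purely as a stand-in for the SRRC and Gaussian weightings studied in the paper, and the link parameters are placeholders.

    ```python
    import numpy as np

    def weighted_cd_fir(D_ps_nm_km=17.0, length_km=80.0, wavelength_nm=1550.0,
                        sample_rate_GSa=56.0, rolloff=0.2):
        """Time-domain FIR taps for chromatic-dispersion (CD) compensation,
        multiplied by a raised-cosine amplitude weighting across the taps.

        The unweighted taps are the common truncated impulse response of the
        inverse CD transfer function; the window stands in for the optimized
        weighting functions (SRRC, Gaussian) investigated in the paper.
        """
        c = 299792458.0
        D = D_ps_nm_km * 1e-6                 # ps/(nm*km) -> s/m^2
        z = length_km * 1e3                   # km -> m
        lam = wavelength_nm * 1e-9            # nm -> m
        T = 1.0 / (sample_rate_GSa * 1e9)     # sample period in s

        half = int(np.floor(abs(D) * lam**2 * z / (2.0 * c * T**2)))
        k = np.arange(-half, half + 1)
        coeff = c * T**2 / (D * lam**2 * z)
        taps = np.sqrt(1j * coeff) * np.exp(-1j * np.pi * coeff * k**2)

        # illustrative raised-cosine weighting over the normalized tap index
        x = np.abs(k) / max(half, 1)
        window = np.where(x <= 1.0 - rolloff, 1.0,
                          0.5 * (1.0 + np.cos(np.pi * (x - (1.0 - rolloff)) / rolloff)))
        return taps * window
    ```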

  16. Uncertainty in biological monitoring: a framework for data collection and analysis to account for multiple sources of sampling bias

    USGS Publications Warehouse

    Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.

    2016-01-01

    Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for occurrence patterns using data from a citizen-science monitoring programme. However, our approach is applicable to data generated by any type of research and monitoring programme, independent of skill level or scale, when effort is placed on obtaining auxiliary information on false-positive rates.

  17. Critical Findings: Attempts at Reducing Notification Errors.

    PubMed

    Shahriari, Mona; Liu, Li; Yousem, David M

    2016-11-01

    Ineffective communication of critical findings (CFs) is a patient safety issue. The aim of this study was to assess whether a feedback program for faculty members failing to correctly report CFs would lead to improved compliance. Fifty randomly selected reports were reviewed by the chief of neuroradiology each month for 42 months. Errors included (1) not calling for a CF, (2) not identifying a CF as such, (3) mischaracterizing non-CFs as CFs, and (4) calling for non-CFs. The number of appropriately handled and mishandled reports in each month was recorded. The trend of error reduction after the division chief provided feedback in the subsequent months was evaluated, and the equality of the time intervals between errors was tested. Among 2,100 reports, 49 (2.3%) were handled inappropriately. Among non-CF reports, 98.97% (1,817 of 1,836) were appropriately not called and not flagged, and 88.64% (234 of 264) of CF reports were called and flagged appropriately. The error rate during the 11th through 32nd months of review (1.28%) was significantly lower than the error rate in the first 10 months of review (3.98%) (P = .001). This benefit lasted for 21 months. Reviewing and giving feedback to radiologists increased their compliance with the CF protocol and decreased deviations from standard operating procedures for about 2 years (from month 10 to month 32). Developing new ideas for improving CF policy compliance may be required at 2- to 3-year intervals to provide continuous quality improvement. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  18. Demonstration of DFTS-OFDM and equalization technology using in VLC communication on the headset port of Android device

    NASA Astrophysics Data System (ADS)

    Wang, Fumin; Shi, Meng; Chi, Nan

    2016-10-01

    Visible light communication (VLC) is one of the hottest research directions in wireless communication; it is safe, fast, and free of electromagnetic interference. We carry out visible light communication using DFTS-OFDM modulation through the headset port and apply an equalization technique to compensate for the channel. In this paper, we first test the feasibility of the DFTS-OFDM-modulated VLC system by analyzing the constellation and the transmission error rate via the headset interface of the smartphone. Then we change the peak value of the signal generated by the AWG as well as the static current to find the best operating point. We test the effect of the up-sampling factor on the BER performance of the communication system, and compare the BER performance of 16QAM and 8QAM modulation under different equalization methods. We also experiment to determine how distance affects the performance of the communication link and the maximum communication rate that can be achieved. We successfully demonstrate a visible light communication system detected by the headset port of a smartphone for a 32QAM DFTS-OFDM-modulated signal at 27.5 kb/s over a 3-meter free-space transmission. The light source is a traditional phosphorescent white LED. This result, as far as we know, is the highest data rate of a VLC system via headset-port detection.

  19. Pilots Rate Augmented Generalized Predictive Control for Reconfiguration

    NASA Technical Reports Server (NTRS)

    Soloway, Don; Haley, Pam

    2004-01-01

    The objective of this paper is to report the results from the research being conducted in reconfigurable flight controls at NASA Ames. A study was conducted with three NASA Dryden test pilots to evaluate two approaches to reconfiguring an aircraft's control system when failures occur in the control surfaces and engine. NASA Ames is investigating both a Neural Generalized Predictive Control scheme and a Neural Network based Dynamic Inverse controller. This paper highlights the Predictive Control scheme, where a simple augmentation to reduce steady-state error to zero led to the neural network predictor model becoming redundant for the task. Instead of using a neural network predictor model, a nominal single-point linear model was used and then augmented with an error corrector. This paper shows that the Generalized Predictive Controller and the Dynamic Inverse Neural Network controller perform equally well at reconfiguration, but with lower rate requirements on the actuators. Also presented are the pilot ratings for each controller for various failure scenarios and two samples of the required control actuation during reconfiguration. Finally, the paper concludes by stepping through the Generalized Predictive Control's reconfiguration process for an elevator failure.

  20. Hybrid time-frequency domain equalization based on sign-sign joint decision multimodulus algorithm for 6 × 6 mode division multiplexing system

    NASA Astrophysics Data System (ADS)

    Li, Jiao; Hu, Guijun; Gong, Caili; Li, Li

    2018-02-01

    In this paper, we propose a hybrid time-frequency domain sign-sign joint decision multimodulus algorithm (Hybrid-SJDMMA) for mode-demultiplexing in a 6 × 6 mode division multiplexing (MDM) system with high-order QAM modulation. The equalization performance of Hybrid-SJDMMA was evaluated and compared with the frequency domain multimodulus algorithm (FD-MMA) and the hybrid time-frequency domain sign-sign multimodulus algorithm (Hybrid-SMMA). Simulation results revealed that Hybrid-SJDMMA exhibits a significantly lower computational complexity than FD-MMA, and its convergence speed is similar to that of FD-MMA. Additionally, the bit-error-rate performance of Hybrid-SJDMMA was obviously better than FD-MMA and Hybrid-SMMA for 16 QAM and 64 QAM.

  1. Dependency of Optimal Parameters of the IRIS Template on Image Quality and Border Detection Error

    NASA Astrophysics Data System (ADS)

    Matveev, I. A.; Novik, V. P.

    2017-05-01

    Generation of a template containing the spatial-frequency features of the iris is an important stage of identification. The template is obtained by a wavelet transform in an image region specified by the iris borders. One of the main characteristics of an identification system is its recognition error; the equal error rate (EER) is used as the criterion here. The optimal values (in the sense of minimizing the EER) of the wavelet-transform parameters depend on many factors: image quality, sharpness, size of characteristic objects, etc. It is hard to isolate these factors and their influences. This work studies the influence of the following factors on the EER: iris segmentation precision, defocus level, and noise level. Several public-domain iris image databases were used in the experiments, and the images were subjected to modelled distortions of these types. The dependencies of the wavelet parameters and EER values on the distortion levels were built. It is observed that increasing the segmentation error and image noise increases the optimal wavelength of the wavelets, whereas increasing the defocus level decreases this value.
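
    A minimal sketch of how the equal error rate used as the criterion here can be computed from genuine and impostor match-score samples is given below; the threshold sweep and the Gaussian toy scores are illustrative.

    ```python
    import numpy as np

    def equal_error_rate(genuine_scores, impostor_scores):
        """Compute the equal error rate (EER) from two score samples.

        Convention: a higher score means a better match, so a comparison is
        accepted when its score exceeds the threshold.
        FRR(t) = fraction of genuine scores <= t (falsely rejected)
        FAR(t) = fraction of impostor scores  > t (falsely accepted)
        The EER is the value where the two curves cross.
        """
        thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
        frr = np.array([(genuine_scores <= t).mean() for t in thresholds])
        far = np.array([(impostor_scores > t).mean() for t in thresholds])
        i = np.argmin(np.abs(far - frr))        # threshold where FAR ~ FRR
        return 0.5 * (far[i] + frr[i]), thresholds[i]

    # toy example with overlapping Gaussian score distributions
    rng = np.random.default_rng(0)
    gen = rng.normal(2.0, 1.0, 5000)    # genuine comparisons score higher
    imp = rng.normal(0.0, 1.0, 5000)
    eer, thr = equal_error_rate(gen, imp)
    print(f"EER = {eer:.3%} at threshold {thr:.2f}")
    ```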

  2. The relevance of error analysis in graphical symbols evaluation.

    PubMed

    Piamonte, D P

    1999-01-01

    In an increasing number of modern tools and devices, small graphical symbols appear simultaneously in sets as parts of the human-machine interfaces. The presence of each symbol can influence the other's recognizability and correct association to its intended referents. Thus, aside from correct associations, it is equally important to perform certain error analysis of the wrong answers, misses, confusions, and even lack of answers. This research aimed to show how such error analyses could be valuable in evaluating graphical symbols especially across potentially different user groups. The study tested 3 sets of icons representing 7 videophone functions. The methods involved parameters such as hits, confusions, missing values, and misses. The association tests showed similar hit rates of most symbols across the majority of the participant groups. However, exploring the error patterns helped detect differences in the graphical symbols' performances between participant groups, which otherwise seemed to have similar levels of recognition. These are very valuable not only in determining the symbols to be retained, replaced or re-designed, but also in formulating instructions and other aids in learning to use new products faster and more satisfactorily.

  3. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, with a strong relationship and equally important explanatory variables, the one-standard-error rule was more likely to choose the correct model than were the other tree-selection rules 1) with weaker relationships and equally important explanatory variables; and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.

  4. Packet error rate analysis of decode-and-forward free-space optical cooperative networks in the presence of random link blockage

    NASA Astrophysics Data System (ADS)

    Zdravković, Nemanja; Cvetkovic, Aleksandra; Milic, Dejan; Djordjevic, Goran T.

    2017-09-01

    This paper analyses end-to-end packet error rate (PER) of a free-space optical decode-and-forward cooperative network over a gamma-gamma atmospheric turbulence channel in the presence of temporary random link blockage. Closed-form analytical expressions for PER are derived for the cases with and without transmission links being prone to blockage. Two cooperation protocols (denoted as 'selfish' and 'pilot-adaptive') are presented and compared, where the latter accounts for the presence of blockage and adapts transmission power. The influence of scintillation, link distance, average transmitted signal power, network topology and probability of an uplink and/or internode link being blocked are discussed when the destination applies equal gain combining. The results show that link blockage caused by obstacles can degrade system performance, causing an unavoidable PER floor. The implementation of the pilot-adaptive protocol improves performance when compared to the selfish protocol, diminishing internode link blockage and lowering the PER floor, especially for larger networks.

  5. Map projections for global and continental data sets and an analysis of pixel distortion caused by reprojection

    USGS Publications Warehouse

    Steinwand, Daniel R.; Hutchinson, John A.; Snyder, J.P.

    1995-01-01

    In global change studies the effects of map projection properties on data quality are apparent, and the choice of projection is significant. To aid compilers of global and continental data sets, six equal-area projections were chosen: the interrupted Goode Homolosine, the interrupted Mollweide, the Wagner IV, and the Wagner VII for global maps; the Lambert Azimuthal Equal-Area for hemisphere maps; and the Oblated Equal-Area and the Lambert Azimuthal Equal-Area for continental maps. Distortions in small-scale maps caused by reprojection, and the additional distortions incurred when reprojecting raster images, were quantified and graphically depicted. For raster images, the errors caused by the usual resampling methods (pixel brightness level interpolation) were responsible for much of the additional error where the local resolution and scale change were the greatest.

  6. Language function distribution in left-handers: A navigated transcranial magnetic stimulation study.

    PubMed

    Tussis, Lorena; Sollmann, Nico; Boeckh-Behrens, Tobias; Meyer, Bernhard; Krieg, Sandro M

    2016-02-01

    Recent studies suggest that in left-handers, the right hemisphere (RH) is more involved in language function when compared to right-handed subjects. Since data on lesion-based approaches is lacking, we aimed to investigate language distribution of left-handers by repetitive navigated transcranial magnetic stimulation (rTMS). Thus, rTMS was applied to the left hemisphere (LH) and RH in 15 healthy left-handers during an object-naming task, and resulting naming errors were categorized. Then, we calculated error rates (ERs=number of errors per number of stimulations) for both hemispheres separately and defined a laterality score as the quotient of (LH ER - RH ER) and (LH ER + RH ER), abbreviated as (L-R)/(L+R). In this context, (L-R)/(L+R)>0 indicates that the LH is dominant, whereas (L-R)/(L+R)<0 shows that the RH is dominant. No significant difference in ERs was found between hemispheres (all errors: mean LH 18.0±11.7%, mean RH 18.1±12.2%, p=0.94; all errors without hesitation: mean LH 12.4±9.8%, mean RH 12.9±10.0%, p=0.65; no responses: mean LH 9.3±9.2%, mean RH 11.5±10.3%, p=0.84). However, a significant difference between the results of (L-R)/(L+R) of left-handers and right-handers (source data of another study) for all errors (mean 0.01±0.14 vs. 0.19±0.20, p=0.0019) and all errors without hesitation (mean -0.02±0.20 vs. 0.19±0.28, p=0.0051) was revealed, whereas the comparison for no responses did not show a significant difference (mean: -0.004±0.27 vs. 0.09±0.44, p=0.64). Accordingly, in contrast to right-handers, left-handers present a comparatively equal language distribution, with dominance divided nearly evenly between the two hemispheres. Copyright © 2016 Elsevier Ltd. All rights reserved.
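
    The laterality score defined in this abstract is a simple ratio; a minimal sketch with hypothetical error counts:

```python
# Sketch of the (L - R) / (L + R) laterality score computed from per-hemisphere error rates.
def error_rate(n_errors: int, n_stimulations: int) -> float:
    """ER = number of naming errors per number of rTMS stimulations."""
    return n_errors / n_stimulations

def laterality(lh_er: float, rh_er: float) -> float:
    """> 0: left-hemisphere dominant; < 0: right-hemisphere dominant."""
    return (lh_er - rh_er) / (lh_er + rh_er)

# Hypothetical subject: 90 errors / 500 stimulations (LH), 75 / 500 (RH).
lh, rh = error_rate(90, 500), error_rate(75, 500)
print(round(laterality(lh, rh), 3))   # ~0.091 -> mildly left-dominant
```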

  7. Measuring the Utility of a Cyber Incident Mission Impact Assessment (CIMIA) Process for Mission Assurance

    DTIC Science & Technology

    2011-03-01

    [Fragments of Levene's test output tables: "Tests the null hypothesis that the error variance of the dependent variable is equal across groups. Design: Intercept + POP-UP."] ...The design also limited the number of intended treatments. The experimental design was originally supposed to test all three adverse events that threaten

  8. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed with this model. An accuracy study of the model shows that the predicted pointing-error deviation for each error source is less than 4.1×10⁻⁵° when the error amplitude is 0.1°. Detailed analyses indicate that the different error sources affect the pointing accuracy to varying degrees, and the major error source is the incident beam deviation. Prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative-effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilting in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results suggest that the analysis can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.

  9. 16QAM Blind Equalization via Maximum Entropy Density Approximation Technique and Nonlinear Lagrange Multipliers

    PubMed Central

    Mauda, R.; Pinchas, M.

    2014-01-01

    Recently a new blind equalization method was proposed for the 16QAM constellation input inspired by the maximum entropy density approximation technique with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to minimum. Since the derivation of the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part in the MSE expression that caused the nonlinearity in the equation for the Lagrange multipliers was ignored. Thus, the obtained Lagrange multipliers were not those Lagrange multipliers that bring the approximated MSE to minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression for the Lagrange multipliers obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal to noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing a high initial intersymbol interference (ISI) while the same equalization performance is obtained for an easy channel (initial ISI low). PMID:24723813

  10. Performance improvement of optical wireless communication through fog with a decision feedback equalizer.

    PubMed

    Aharonovich, Marius; Arnon, Shlomi

    2005-08-01

    Optical wireless communication (OWC) systems use the atmosphere as a propagation medium. However, a common problem is that from time to time moderate cloud and fog emerge between the receiver and the transmitter. These adverse weather conditions impose temporal broadening and power loss on the optical signal, which reduces the digital signal-to-noise ratio (DSNR), produces significant intersymbol interference (ISI), and degrades the communication system's bit error rate (BER) and throughput. We propose and investigate the use of a combined adaptive bandwidth mechanism and decision feedback equalizer (DFE) to mitigate these atmospheric multipath effects. Based on theoretical analysis and simulations of DSNR penalties, BER, and optimum system bandwidths, we show that a DFE improves the outdoor OWC system immunity to ISI in foggy weather while maintaining high throughput and desired low BER.

  11. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available, then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.

  12. Uncertainties in the cluster-cluster correlation function

    NASA Astrophysics Data System (ADS)

    Ling, E. N.; Frenk, C. S.; Barrow, J. D.

    1986-12-01

    The bootstrap resampling technique is applied to estimate sampling errors and significance levels of the two-point correlation functions determined for a subset of the CfA redshift survey of galaxies and a redshift sample of 104 Abell clusters. The angular correlation function for a sample of 1664 Abell clusters is also calculated. The standard errors in xi(r) for the Abell data are found to be considerably larger than quoted 'Poisson errors'. The best estimate for the ratio of the correlation length of Abell clusters (richness class R greater than or equal to 1, distance class D less than or equal to 4) to that of CfA galaxies is 4.2 (+1.4, -1.0) (68th-percentile error). The enhancement of cluster clustering over galaxy clustering is statistically significant in the presence of resampling errors. The uncertainties found do not include the effects of possible systematic biases in the galaxy and cluster catalogs and could be regarded as lower bounds on the true uncertainty range.
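
    An illustrative sketch of bootstrap resampling for the uncertainty of a clustering statistic; a toy pair-count statistic and a mock uniform catalogue stand in for the two-point correlation function and the survey data used in the paper.

```python
# Sketch: bootstrap standard error of a simple clustering statistic.
import numpy as np

def pairs_within(points, r):
    """Number of distinct pairs separated by less than r (toy statistic)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return int(np.triu(d < r, k=1).sum())

rng = np.random.default_rng(2)
cat = rng.uniform(0.0, 100.0, size=(300, 3))      # mock catalogue (assumed units)

stat = pairs_within(cat, r=10.0)
boot = np.array([
    pairs_within(cat[rng.integers(0, len(cat), len(cat))], r=10.0)
    for _ in range(200)                            # 200 bootstrap resamplings
])
print("statistic:", stat, "bootstrap standard error:", boot.std(ddof=1))
```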

  13. Biometric recognition using 3D ear shape.

    PubMed

    Yan, Ping; Bowyer, Kevin W

    2007-08-01

    Previous works have shown that the ear is a promising candidate for biometric identification. However, in prior work, the preprocessing of ear images has had manual steps and algorithms have not necessarily handled problems caused by hair and earrings. We present a complete system for ear biometrics, including automated segmentation of the ear in a profile view image and 3D shape matching for recognition. We evaluated this system with the largest experimental study to date in ear biometrics, achieving a rank-one recognition rate of 97.8 percent for an identification scenario and an equal error rate of 1.2 percent for a verification scenario on a database of 415 subjects and 1,386 total probes.
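
    For reference, the equal error rate reported in verification studies such as this one can be estimated from genuine and impostor match-score distributions; a minimal sketch with synthetic scores:

```python
# Sketch: estimating the equal error rate (EER) from verification scores.
import numpy as np

rng = np.random.default_rng(3)
genuine = rng.normal(2.0, 1.0, 1_000)     # higher score = better match (assumed convention)
impostor = rng.normal(0.0, 1.0, 10_000)

thresholds = np.sort(np.concatenate([genuine, impostor]))
frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate

i = int(np.argmin(np.abs(far - frr)))      # threshold where the two rates cross
print(f"EER ~ {(far[i] + frr[i]) / 2:.3%} at threshold {thresholds[i]:.2f}")
```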

  14. Quadrature demultiplexing using a degenerate vector parametric amplifier.

    PubMed

    Lorences-Riesgo, Abel; Liu, Lan; Olsson, Samuel L I; Malik, Rohit; Kumpera, Aleš; Lundström, Carl; Radic, Stojan; Karlsson, Magnus; Andrekson, Peter A

    2014-12-01

    We report on quadrature demultiplexing of a quadrature phase-shift keying (QPSK) signal into two cross-polarized binary phase-shift keying (BPSK) signals with negligible penalty at a bit-error rate (BER) of 10⁻⁹. The all-optical quadrature demultiplexing is achieved using a degenerate vector parametric amplifier operating in phase-insensitive mode. We also propose and demonstrate the use of a novel and simple phase-locked loop (PLL) scheme based on detecting the envelope of one of the signals after demultiplexing in order to achieve stable quadrature decomposition.

  15. Kalman Filtering Approach to Blind Equalization

    DTIC Science & Technology

    1993-12-01

    Naval Postgraduate School, Monterey, California. Thesis: Kalman Filtering Approach to Blind Equalization, by Mehmet Kutlu. [Report cover and documentation-page fragments omitted.] ...which introduces errors due to intersymbol interference. The solution to this problem is provided by equalizers, which use a training sequence to adapt to

  16. Reception of Multiple Telemetry Signals via One Dish Antenna

    NASA Technical Reports Server (NTRS)

    Mukai, Ryan; Vilnrotter, Victor

    2010-01-01

    A microwave aeronautical-telemetry receiver system includes an antenna comprising a seven-element planar array of receiving feed horns centered at the focal point of a paraboloidal dish reflector that is nominally aimed at a single aircraft or at multiple aircraft flying in formation. Through digital processing of the signals received by the seven feed horns, the system implements a method of enhanced cancellation of interference, such that it becomes possible to receive telemetry signals in the same frequency channel simultaneously from either or both of two aircraft at slightly different angular positions within the field of view of the antenna, even in the presence of multipath propagation. The present system is an advanced version of the system described in "Spatio-Temporal Equalizer for a Receiving-Antenna Feed Array" (NPO-43077), NASA Tech Briefs, Vol. 34, No. 2 (February 2010), page 32. To recapitulate: The radio-frequency telemetry signals received by the seven elements of the array are digitized, converted to complex baseband form, and sent to a spatio-temporal equalizer that consists mostly of a bank of seven adaptive finite-impulse-response (FIR) filters (one for each element in the array) plus a unit that sums the outputs of the filters. The combination of the spatial diversity of the feedhorn array and the temporal diversity of the filter bank affords better multipath suppression performance than is achievable by means of temporal equalization alone. The FIR filter bank adapts itself in real time to enable reception of telemetry at a low bit error rate, even in the presence of frequency-selective multipath propagation like that commonly found at flight-test ranges. The combination of the array and the filter bank makes it possible to constructively add multipath incoming signals to the corresponding directly arriving signals, thereby enabling reductions in telemetry bit-error rates.

  17. Design of free-space optical transmission system in computer tomography equipment

    NASA Astrophysics Data System (ADS)

    Liu, Min; Fu, Weiwei; Zhang, Tao

    2018-04-01

    Traditional computer tomography (CT) based on capacitive coupling cannot satisfy the high-data-rate transmission requirement. We design and experimentally demonstrate a free-space optical transmission system for CT equipment at a data rate of 10 Gb/s. Two interchangeable sections of 12 pieces of fiber of equal length are fabricated and tested with our laser phase distance measurement system. By locating the 12 collimators evenly along the edge of the circular wheel, the optical propagation characteristics of the 12 wired and wireless paths are similar, which satisfies the requirement of a high-speed CT transmission system. BER measurements under several conditions show bit error rates below 10⁻¹¹, indicating potential for future application in CT equipment.

  18. Digital transceiver design for two-way AF-MIMO relay systems with imperfect CSI

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Chou, Yu-Fei; Chen, Kui-He

    2013-09-01

    In this paper, combined optimization of the terminal precoders/equalizers and the single-relay precoder is proposed for an amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way single-relay system with correlated channel uncertainties. Both the terminal transceivers and the relay precoding matrix are designed based on the minimum mean square error (MMSE) criterion when the terminals are unable to completely cancel self-interference due to imperfect correlated channel state information (CSI). This robust joint optimization problem of beamforming and precoding matrices under power constraints is neither concave nor convex, so a nonlinear matrix-form conjugate gradient (MCG) algorithm is applied to find locally optimal solutions. Simulation results show that the robust transceiver design effectively recovers the bit-error-rate (BER) loss caused by correlated channel uncertainties and residual self-interference.

  19. Eighth Grade Students' Reading Responses to Encoded Inflectional, Syntactic, Grammatical and Semantic Errors.

    ERIC Educational Resources Information Center

    Williamson, Leon E.; And Others

    A study investigated the reading responses of 60 eighth grade students to encoded inflectional, syntactic, grammatical, and semantic errors. The students were equally divided into three categories based on grade level reading competency and given three Aesopian fables to read. The text of the fables contained the following errors: (1) words to…

  20. Performance analysis of decode-and-forward dual-hop optical spatial modulation with diversity combiner over atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Odeyemi, Kehinde O.; Owolawi, Pius A.; Srivastava, Viranjay M.

    2017-11-01

    Dual-hop transmission is a technique of growing interest that can be used to mitigate atmospheric turbulence along Free Space Optical (FSO) communication links. This paper analyzes the performance of Decode-and-Forward (DF) dual-hop FSO systems in conjunction with spatial modulation and diversity combiners over a Gamma-Gamma atmospheric turbulence channel using heterodyne detection. A Maximum Ratio Combiner (MRC), an Equal Gain Combiner (EGC) and a Selection Combiner (SC) are considered at the relay and destination as mitigation tools to improve the system error performance. A power series expansion of the modified Bessel function is used to derive closed-form expressions for the end-to-end Average Pairwise Error Probability (APEP) for each of the combiners under study, and a tight upper bound on the Average Bit Error Rate (ABER) per hop is given; the overall end-to-end ABER for the dual-hop FSO system is then evaluated. The numerical results show that dual-hop transmission systems outperform direct-link systems. Moreover, the impact of having the same or different combiners at the relay and destination is also presented. The results confirm that combining dual-hop transmission with spatial modulation and diversity combining significantly improves the system error rate, with the MRC combiner offering the best performance with respect to variation in atmospheric turbulence, changes in the links' average received SNR and the link range of the system.
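
    A small illustration (not the paper's derivation) of the three combiners named above, comparing their instantaneous output SNRs for one set of assumed branch gains:

```python
# Sketch: output SNR of MRC, EGC and SC for a single fading realization.
import numpy as np

def combiner_snr(h, es=1.0, n0=1.0):
    g = es * np.abs(h) ** 2 / n0                       # per-branch instantaneous SNR
    mrc = g.sum()                                      # maximum ratio combining
    egc = np.abs(h).sum() ** 2 * es / (len(h) * n0)    # coherent equal gain combining
    sc = g.max()                                       # selection combining
    return mrc, egc, sc

rng = np.random.default_rng(4)
h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2)  # 4 branches, assumed fading
for name, snr in zip(("MRC", "EGC", "SC"), combiner_snr(h)):
    print(f"{name}: {10 * np.log10(snr):.2f} dB")
```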

  1. Modeling the Swift Bat Trigger Algorithm with Machine Learning

    NASA Technical Reports Server (NTRS)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of greater than or equal to 97 percent (less than or equal to 3 percent error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6 percent (10.4 percent error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48 (+0.41, -0.23) per cubic gigaparsec per year, with power-law indices of n_1 ≈ 1.7 (+0.6, -0.5) and n_2 ≈ -5.9 (+5.7, -0.1) for GRBs above and below a redshift break point of z_1 ≈ 6.8 (+2.8, -3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
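
    A minimal sketch of the general modelling approach described above, training a classifier on simulated bursts to predict whether the trigger algorithm detects them; the features, labels and data are placeholders rather than the Lien et al. sample.

```python
# Sketch: classifier stand-in for an expensive trigger simulation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 5_000
X = np.column_stack([
    rng.lognormal(0.0, 1.0, n),     # stand-in for peak photon flux
    rng.uniform(0.0, 10.0, n),      # stand-in for redshift
    rng.uniform(1.0, 100.0, n),     # stand-in for burst duration
])
y = (X[:, 0] > 1.0).astype(int)     # toy "detected" label roughly tied to flux

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.3f}")
```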

  2. Underwater wireless optical MIMO system with spatial modulation and adaptive power allocation

    NASA Astrophysics Data System (ADS)

    Huang, Aiping; Tao, Linwei; Niu, Yilong

    2018-04-01

    In this paper, we investigate the performance of an underwater wireless optical multiple-input multiple-output communication system combining spatial modulation (SM-UOMIMO) with flag dual amplitude pulse position modulation (FDAPPM). Channel impulse responses for coastal and harbor ocean water links are obtained by Monte Carlo (MC) simulation. Moreover, we obtain closed-form and upper-bound average bit error rate (BER) expressions for receiver diversity, including optical combining, equal gain combining and selection combining, and a novel adaptive power allocation algorithm (PAA) is proposed to minimize the average BER of the SM-UOMIMO system. Our numerical results indicate an excellent match between the analytical results and numerical simulations, which confirms the accuracy of the derived expressions. Furthermore, the results show that the adaptive PAA clearly outperforms conventional fixed-factor PAA and equal PAA. A multiple-input single-output system with adaptive PAA achieves even better BER performance than the MIMO one, while effectively reducing receiver complexity.

  3. Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Tanaka, Ken; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since the frequency diversity gain is obtained. On the other hand, the conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. The conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by the Monte-Carlo numerical computation method using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.
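
    The frequency-diversity benefit of FDE can be illustrated with a toy single-user, single-carrier sketch (one-tap MMSE equalization per subcarrier); the spreading and multi-access aspects of orthogonal MC DS-CDMA are omitted, and the channel and SNR below are assumptions.

```python
# Sketch: one-tap MMSE frequency-domain equalization over a frequency-selective channel.
import numpy as np

rng = np.random.default_rng(6)
n_sub, snr_db = 64, 15
snr = 10 ** (snr_db / 10)

bits = rng.integers(0, 2, n_sub)
x = 2 * bits - 1.0                                   # BPSK symbols (time domain)
h = (rng.normal(size=3) + 1j * rng.normal(size=3)) / np.sqrt(6)  # 3-tap channel (assumed)
H = np.fft.fft(h, n_sub)                             # channel frequency response

noise = (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)) / np.sqrt(2 * snr)
Y = H * np.fft.fft(x) / np.sqrt(n_sub) + noise       # received block in the frequency domain

W = np.conj(H) / (np.abs(H) ** 2 + 1 / snr)          # per-subcarrier MMSE FDE weights
x_hat = np.fft.ifft(W * Y) * np.sqrt(n_sub)          # back to the time domain
n_err = int(((x_hat.real > 0).astype(int) != bits).sum())
print("bit errors:", n_err, "of", n_sub)
```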

  4. Finger vein recognition based on finger crease location

    NASA Astrophysics Data System (ADS)

    Lu, Zhiying; Ding, Shumeng; Yin, Jing

    2016-07-01

    Finger vein recognition technology has significant advantages over other methods in terms of accuracy, uniqueness, and stability, and it has wide promising applications in the field of biometric recognition. We propose using finger creases to locate and extract an object region. Then we use linear fitting to overcome the problem of finger rotation in the plane. The method of modular adaptive histogram equalization (MAHE) is presented to enhance image contrast and reduce computational cost. To extract the finger vein features, we use a fusion method, which can obtain clear and distinguishable vein patterns under different conditions. We used the Hausdorff average distance algorithm to examine the recognition performance of the system. The experimental results demonstrate that MAHE can better balance the recognition accuracy and the expenditure of time compared with three other methods. Our resulting equal error rate throughout the total procedure was 3.268% in a database of 153 finger vein images.

  5. Using pre-distorted PAM-4 signal and parallel resistance circuit to enhance the passive solar cell based visible light communication

    NASA Astrophysics Data System (ADS)

    Wang, Hao-Yu; Wu, Jhao-Ting; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung; Liao, Xin-Lan; Lin, Kun-Hsien; Wu, Wei-Liang; Chen, Yi-Yuan

    2018-01-01

    Using a solar cell (or photovoltaic cell) for visible light communication (VLC) is attractive. Apart from acting as a VLC receiver (Rx), the solar cell can provide energy harvesting. This can be used in self-powered smart devices, particularly in the emerging 'Internet of Things (IoT)' networks. Here, we propose and demonstrate for the first time the use of a pre-distorted pulse-amplitude-modulation (PAM)-4 signal and a parallel resistance circuit to enhance the transmission performance of solar cell Rx based VLC. Pre-distortion is a simple non-adaptive equalization technique that can significantly mitigate the slow charging and discharging of the solar cell. The equivalent circuit model of the solar cell and the operation of using parallel resistance to increase the bandwidth of the solar cell are discussed. Using the proposed schemes, the experimental results show that the data rate of the solar cell Rx based VLC can increase from 20 kbit/s to 1.25 Mbit/s (about 60 times) with the bit error rate (BER) satisfying the 7% forward error correction (FEC) limit.

  6. Conditional Standard Errors, Reliability and Decision Consistency of Performance Levels Using Polytomous IRT.

    ERIC Educational Resources Information Center

    Wang, Tianyou; And Others

    M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…

  7. Local Observability Analysis of Star Sensor Installation Errors in a SINS/CNS Integration System for Near-Earth Flight Vehicles.

    PubMed

    Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen

    2017-01-16

    Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method, which has been widely used to improve the hitting accuracy and quick reaction capability of near-Earth flight vehicles. The installation errors between SINS and star sensors have been one of the main factors that restrict the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on the star vector observations is derived considering the star sensor installation error. Then, the star sensor installation error is accurately estimated based on Kalman Filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained via the linearized observation equation, and the observable conditions are presented and validated. The number of star vectors should be greater than or equal to 2, and the number of posture adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles.
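
    A generic sketch of the rank test underlying such a local observability analysis; the A and C matrices below are arbitrary placeholders, not the linearized SINS/CNS model from the paper.

```python
# Sketch: build the observability matrix [C; CA; CA^2; ...] and check its rank.
import numpy as np

def observability_matrix(A: np.ndarray, C: np.ndarray) -> np.ndarray:
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 1.0]])                # placeholder state-transition matrix
C = np.array([[1.0, 0.0, 0.0]])                # only the first state is measured directly

O = observability_matrix(A, C)
print("rank:", np.linalg.matrix_rank(O), "of", A.shape[0])  # full rank -> locally observable
```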

  8. Optimal design of multichannel equalizers for the structural similarity index.

    PubMed

    Chai, Li; Sheng, Yuxia

    2014-12-01

    The optimization of multichannel equalizers is studied for the structural similarity (SSIM) criterion. A closed-form formula is provided for the optimal equalizer when the mean of the source is zero. The formula shows that the equalizer with maximal SSIM index is equal to the one with minimal mean square error (MSE) multiplied by a positive real number, which is shown to be equal to the inverse of the achieved SSIM index. The relation of the maximal SSIM index to the minimal MSE is also established for given blurring filters and fixed-length equalizers. An algorithm is also presented to compute the suboptimal equalizer for general sources. Various numerical examples are given to demonstrate the effectiveness of the results.
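
    A numerical illustration (a sketch, not the paper's closed-form result): search over scalar multiples of a least-squares (MMSE) equalizer for the gain that maximizes a global SSIM index; the source, channel and noise level are assumptions.

```python
# Sketch: SSIM of scaled MMSE equalizer outputs for a simple 1-D deconvolution problem.
import numpy as np

def ssim(x, y, c1=1e-6, c2=1e-6):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(7)
n, taps = 4_000, 11
x = rng.choice([-3.0, -1.0, 1.0, 3.0], size=n)             # PAM-4 source (assumed)
h = np.array([1.0, 0.45, 0.2])                             # blurring channel (assumed)
r = np.convolve(x, h, mode="full")[:n] + rng.normal(scale=0.3, size=n)

# Least-squares (MMSE-style) equalizer of fixed length `taps`.
R = np.column_stack([np.roll(r, k) for k in range(taps)])[taps:]
w = np.linalg.lstsq(R, x[taps:], rcond=None)[0]
y = R @ w

gains = np.linspace(0.8, 1.6, 401)
scores = [ssim(x[taps:], g * y) for g in gains]
best = int(np.argmax(scores))
print(f"SSIM at MMSE solution: {ssim(x[taps:], y):.4f}")
print(f"best scalar gain: {gains[best]:.3f}, SSIM: {scores[best]:.4f}")
```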

  9. Unobtrusive Biometric System Based on Electroencephalogram Analysis

    NASA Astrophysics Data System (ADS)

    Riera, A.; Soria-Frisch, A.; Caparrini, M.; Grau, C.; Ruffini, G.

    2007-12-01

    Features extracted from electroencephalogram (EEG) recordings have proved to be unique enough between subjects for biometric applications. We show here that biometry based on these recordings offers a novel way to robustly authenticate or identify subjects. In this paper, we present a rapid and unobtrusive authentication method that only uses 2 frontal electrodes referenced to another one placed at the ear lobe. Moreover, the system makes use of a multistage fusion architecture, which is shown to improve system performance. The performance analysis of the system presented in this paper stems from an experiment with 51 subjects and 36 intruders, where an equal error rate (EER) of 3.4% is obtained, that is, a true acceptance rate (TAR) of 96.6% and a false acceptance rate (FAR) of 3.4%. The obtained performance measures improve on the results of similar systems presented in earlier work.

  10. Hybrid optical CDMA-FSO communications network under spatially correlated gamma-gamma scintillation.

    PubMed

    Jurado-Navas, Antonio; Raddo, Thiago R; Garrido-Balsells, José María; Borges, Ben-Hur V; Olmos, Juan José Vegas; Monroy, Idelfonso Tafur

    2016-07-25

    In this paper, we propose a new hybrid network solution based on asynchronous optical code-division multiple-access (OCDMA) and free-space optical (FSO) technologies for last-mile access networks, where fiber deployment is impractical. The architecture of the proposed hybrid OCDMA-FSO network is thoroughly described. The users access the network in a fully asynchronous manner by means of assigned fast frequency hopping (FFH)-based codes. In the FSO receiver, an equal gain-combining technique is employed along with intensity modulation and direct detection. New analytical formalisms for evaluating the average bit error rate (ABER) performance are also proposed. These formalisms, based on the spatially correlated gamma-gamma statistical model, are derived considering three distinct scenarios, namely, uncorrelated, totally correlated, and partially correlated channels. Numerical results show that users can successfully achieve error-free ABER levels for the three scenarios considered as long as forward error correction (FEC) algorithms are employed. Therefore, OCDMA-FSO networks can be a prospective alternative to deliver high-speed communication services to access networks with deficient fiber infrastructure.

  11. The luminosity function of the CfA Redshift Survey

    NASA Technical Reports Server (NTRS)

    Marzke, R. O.; Huchra, J. P.; Geller, M. J.

    1994-01-01

    We use the CfA Redshift Survey of galaxies with m_Z ≤ 15.5 to calculate the galaxy luminosity function over the range M_Z = -13 to -22. The sample includes 9063 galaxies distributed over 2.1 sr. For galaxies with velocities cz ≥ 2500 km/s, where the effects of peculiar velocities are small, the luminosity function is well represented by a Schechter function with parameters φ* = 0.04 ± 0.01 per cubic Mpc, M* = -18.8 ± 0.3, and α = -1.0 ± 0.2. When we include all galaxies with cz ≥ 500 km/s, the number of galaxies in the range -16 ≤ M_Z ≤ -13 exceeds the extrapolation of the Schechter function by a factor of 3.1 ± 0.5. This faint-end excess is not caused by the local peculiar velocity field but may be partially explained by small-scale errors in the Zwicky magnitudes. Even with a scale error as large as 0.2 mag per mag, which is unlikely, the excess is still a factor of 1.8 ± 0.3. If real, this excess affects the interpretation of deep counts of field galaxies.

  12. Passport Officers’ Errors in Face Matching

    PubMed Central

    White, David; Kemp, Richard I.; Jenkins, Rob; Matheson, Michael; Burton, A. Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of ‘fraudulent’ photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately – though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection. PMID:25133682

  13. Passport officers' errors in face matching.

    PubMed

    White, David; Kemp, Richard I; Jenkins, Rob; Matheson, Michael; Burton, A Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of 'fraudulent' photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately--though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.

  14. Improving Passive Time Reversal Underwater Acoustic Communications Using Subarray Processing.

    PubMed

    He, Chengbing; Jing, Lianyou; Xi, Rui; Li, Qinyuan; Zhang, Qunfei

    2017-04-24

    Multichannel receivers are usually employed in high-rate underwater acoustic communication to achieve spatial diversity. In the context of multichannel underwater acoustic communications, passive time reversal (TR) combined with a single-channel adaptive decision feedback equalizer (TR-DFE) is a low-complexity solution to achieve both spatial and temporal focusing. In this paper, we present a novel receiver structure to combine passive time reversal with a low-order multichannel adaptive decision feedback equalizer (TR-MC-DFE) to improve the performance of the conventional TR-DFE. First, the proposed method divides the whole received array into several subarrays. Second, we conduct passive time reversal processing in each subarray. Third, the multiple subarray outputs are equalized with a low-order multichannel DFE. We also investigated different channel estimation methods, including least squares (LS), orthogonal matching pursuit (OMP), and improved proportionate normalized least mean squares (IPNLMS). The bit error rate (BER) and output signal-to-noise ratio (SNR) performances of the receiver algorithms are evaluated using simulation and real data collected in a lake experiment. The source-receiver range is 7.4 km, and the data rate with quadrature phase shift keying (QPSK) signal is 8 kbits/s. The uncoded BER of the single input multiple output (SIMO) systems varies between 1 × 10⁻¹ and 2 × 10⁻² for the conventional TR-DFE, and between 1 × 10⁻² and 1 × 10⁻³ for the proposed TR-MC-DFE when eight hydrophones are utilized. Compared to conventional TR-DFE, the average output SNR of the experimental data is enhanced by 3 dB.

  15. Improving Passive Time Reversal Underwater Acoustic Communications Using Subarray Processing

    PubMed Central

    He, Chengbing; Jing, Lianyou; Xi, Rui; Li, Qinyuan; Zhang, Qunfei

    2017-01-01

    Multichannel receivers are usually employed in high-rate underwater acoustic communication to achieve spatial diversity. In the context of multichannel underwater acoustic communications, passive time reversal (TR) combined with a single-channel adaptive decision feedback equalizer (TR-DFE) is a low-complexity solution to achieve both spatial and temporal focusing. In this paper, we present a novel receiver structure to combine passive time reversal with a low-order multichannel adaptive decision feedback equalizer (TR-MC-DFE) to improve the performance of the conventional TR-DFE. First, the proposed method divides the whole received array into several subarrays. Second, we conduct passive time reversal processing in each subarray. Third, the multiple subarray outputs are equalized with a low-order multichannel DFE. We also investigated different channel estimation methods, including least squares (LS), orthogonal matching pursuit (OMP), and improved proportionate normalized least mean squares (IPNLMS). The bit error rate (BER) and output signal-to-noise ratio (SNR) performances of the receiver algorithms are evaluated using simulation and real data collected in a lake experiment. The source-receiver range is 7.4 km, and the data rate with quadrature phase shift keying (QPSK) signal is 8 kbits/s. The uncoded BER of the single input multiple output (SIMO) systems varies between 1×10−1 and 2×10−2 for the conventional TR-DFE, and between 1×10−2 and 1×10−3 for the proposed TR-MC-DFE when eight hydrophones are utilized. Compared to conventional TR-DFE, the average output SNR of the experimental data is enhanced by 3 dB. PMID:28441763

  16. Author Correction: Geometric constraints during epithelial jamming

    NASA Astrophysics Data System (ADS)

    Atia, Lior; Bi, Dapeng; Sharma, Yasha; Mitchel, Jennifer A.; Gweon, Bomi; Koehler, Stephan A.; DeCamp, Stephen J.; Lan, Bo; Kim, Jae Hun; Hirsch, Rebecca; Pegoraro, Adrian F.; Lee, Kyu Ha; Starr, Jacqueline R.; Weitz, David A.; Martin, Adam C.; Park, Jin-Ah; Butler, James P.; Fredberg, Jeffrey J.

    2018-06-01

    In the first correction to this Article, the authors added James P. Butler and Jeffrey J. Fredberg as equally contributing authors. However, this was in error; the statement should have remained indicating that Lior Atia, Dapeng Bi and Yasha Sharma contributed equally. This has now been corrected.

  17. Computer simulations of interferometric imaging with the VLT Interferometer and the AMBER instrument

    NASA Astrophysics Data System (ADS)

    Bloecker, Thomas; Hofmann, Karl-Heinz; Przygodda, Frank; Weigelt, Gerd

    2000-07-01

    We present computer simulations of interferometric imaging with the VLT interferometer and the AMBER instrument. These simulations include both the astrophysical modeling of a stellar object by radiative transfer calculations and the simulation of light propagation from the object to the detector (through atmosphere, telescopes, and the AMBER instrument), simulation of photon noise and detector read-out noise, and finally data processing of the interferograms. The results show the dependence of the visibility error bars on the following observational parameters: different seeing during the observation of object and reference star (Fried parameters r0,object = 2.4 m, r0,ref = 2.5 m), different residual tip-tilt error (δtt,object = 2% of the Airy disk diameter, δtt,ref = 0.1%), and object brightness (Kobject = 3.5 mag and 11 mag, Kref = 3.5 mag). As an example, we focus on stars in late stages of stellar evolution and study one of their key objects, the dusty supergiant IRC +10420, which is rapidly evolving on human timescales. We show computer simulations of VLTI interferometry of IRC +10420 with two ATs (wide-field mode, i.e. without fiber optics spatial filters) and discuss whether the visibility accuracy is sufficient to distinguish between different theoretical model predictions.

  18. 25 Tb/s transmission over 5,530 km using 16QAM at 5.2 b/s/Hz spectral efficiency.

    PubMed

    Cai, J-X; Batshon, H G; Zhang, H; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Sinkin, O; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2013-01-28

    We transmit 250x100G PDM RZ-16QAM channels with 5.2 b/s/Hz spectral efficiency over 5,530 km using single-stage C-band EDFAs equalized to 40 nm. We use single parity check coded modulation and all channels are decoded with no errors after iterative decoding between a MAP decoder and an LDPC based FEC algorithm. We also observe that the optimum power spectral density is nearly independent of SE, signal baud rate or modulation format in a dispersion uncompensated system.

  19. Comparison of base flows to selected streamflow statistics representative of 1930-2002 in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2012-01-01

    Base flows were compared with published streamflow statistics to assess climate variability and to determine the published statistics that can be substituted for annual and seasonal base flows of unregulated streams in West Virginia. The comparison study was done by the U.S. Geological Survey, in cooperation with the West Virginia Department of Environmental Protection, Division of Water and Waste Management. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Differences in mean annual base flows for five record sub-periods (1930-42, 1943-62, 1963-69, 1970-79, and 1980-2002) range from -14.9 to 14.6 percent when compared to the values for the period 1930-2002. Differences between mean seasonal base flows and values for the period 1930-2002 are less variable for winter and spring, -11.2 to 11.0 percent, than for summer and fall, -47.0 to 43.6 percent. Mean summer base flows (July-September) and mean monthly base flows for July, August, September, and October are approximately equal, within 7.4 percentage points of mean annual base flow. The mean of each of annual, spring, summer, fall, and winter base flows are approximately equal to the annual 50-percent (standard error of 10.3 percent), 45-percent (error of 14.6 percent), 75-percent (error of 11.8 percent), 55-percent (error of 11.2 percent), and 35-percent duration flows (error of 11.1 percent), respectively. The mean seasonal base flows for spring, summer, fall, and winter are approximately equal to the spring 50- to 55-percent (standard error of 6.8 percent), summer 45- to 50-percent (error of 6.7 percent), fall 45-percent (error of 15.2 percent), and winter 60-percent duration flows (error of 8.5 percent), respectively. Annual and seasonal base flows representative of the period 1930-2002 at unregulated streamflow-gaging stations and ungaged locations in West Virginia can be estimated using previously published values of statistics and procedures.

  20. Predicted blood glucose from insulin administration based on values from miscoded glucose meters.

    PubMed

    Raine, Charles H; Pardo, Scott; Parkes, Joan Lee

    2008-07-01

    The proper use of many types of self-monitored blood glucose (SMBG) meters requires calibration to match strip code. Studies have demonstrated the occurrence and impact on insulin dose of coding errors with SMBG meters. This paper reflects additional analyses performed with data from Raine et al. (JDST, 2:205-210, 2007). It attempts to relate potential insulin dose errors to possible adverse blood glucose outcomes when glucose meters are miscoded. Five sets of glucose meters were used. Two sets of meters were autocoded and therefore could not be miscoded, and three sets required manual coding. Two of each set of manually coded meters were deliberately miscoded, and one from each set was properly coded. Subjects (n = 116) had finger stick blood glucose obtained at fasting, as well as at 1 and 2 hours after a fixed meal (Boost((R)); Novartis Medical Nutrition U.S., Basel, Switzerland). Deviations of meter blood glucose results from the reference method (YSI) were used to predict insulin dose errors and resultant blood glucose outcomes based on these deviations. Using insulin sensitivity data, it was determined that, given an actual blood glucose of 150-400 mg/dl, an error greater than +40 mg/dl would be required to calculate an insulin dose sufficient to produce a blood glucose of less than 70 mg/dl. Conversely, an error less than or equal to -70 mg/dl would be required to derive an insulin dose insufficient to correct an elevated blood glucose to less than 180 mg/dl. For miscoded meters, the estimated probability to produce a blood glucose reduction to less than or equal to 70 mg/dl was 10.40%. The corresponding probabilities for autocoded and correctly coded manual meters were 2.52% (p < 0.0001) and 1.46% (p < 0.0001), respectively. Furthermore, the errors from miscoded meters were large enough to produce a calculated blood glucose outcome less than or equal to 50 mg/dl in 42 of 833 instances. Autocoded meters produced zero (0) outcomes less than or equal to 50 mg/dl out of 279 instances, and correctly coded manual meters produced 1 of 416. Improperly coded blood glucose meters present the potential for insulin dose errors and resultant clinically significant hypoglycemia or hyperglycemia. Patients should be instructed and periodically reinstructed in the proper use of blood glucose meters, particularly for meters that require coding.

  1. When the Single Matters more than the Group (II): Addressing the Problem of High False Positive Rates in Single Case Voxel Based Morphometry Using Non-parametric Statistics.

    PubMed

    Scarpazza, Cristina; Nichols, Thomas E; Seramondi, Donato; Maumet, Camille; Sartori, Giuseppe; Mechelli, Andrea

    2016-01-01

    In recent years, an increasing number of studies have used Voxel Based Morphometry (VBM) to compare a single patient with a psychiatric or neurological condition of interest against a group of healthy controls. However, the validity of this approach critically relies on the assumption that the single patient is drawn from a hypothetical population with a normal distribution and variance equal to that of the control group. In a previous investigation, we demonstrated that family-wise false positive error rate (i.e., the proportion of statistical comparisons yielding at least one false positive) in single case VBM are much higher than expected (Scarpazza et al., 2013). Here, we examine whether the use of non-parametric statistics, which does not rely on the assumptions of normal distribution and equal variance, would enable the investigation of single subjects with good control of false positive risk. We empirically estimated false positive rates (FPRs) in single case non-parametric VBM, by performing 400 statistical comparisons between a single disease-free individual and a group of 100 disease-free controls. The impact of smoothing (4, 8, and 12 mm) and type of pre-processing (Modulated, Unmodulated) was also examined, as these factors have been found to influence FPRs in previous investigations using parametric statistics. The 400 statistical comparisons were repeated using two independent, freely available data sets in order to maximize the generalizability of the results. We found that the family-wise error rate was 5% for increases and 3.6% for decreases in one data set; and 5.6% for increases and 6.3% for decreases in the other data set (5% nominal). Further, these results were not dependent on the level of smoothing and modulation. Therefore, the present study provides empirical evidence that single case VBM studies with non-parametric statistics are not susceptible to high false positive rates. The critical implication of this finding is that VBM can be used to characterize neuroanatomical alterations in individual subjects as long as non-parametric statistics are employed.
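
    A toy sketch of the empirical false-positive-rate estimation described above: synthetic null data, a max-statistic permutation test as a stand-in for non-parametric single-case VBM, and repeated comparisons of one disease-free "patient" against the remaining controls.

```python
# Sketch: empirical family-wise false positive rate of a non-parametric single-case test.
import numpy as np

rng = np.random.default_rng(8)
n_controls, n_voxels, n_comparisons, alpha = 100, 2_000, 400, 0.05
n_subj = n_controls + 1

fwe_hits = 0
for _ in range(n_comparisons):
    data = rng.standard_normal((n_subj, n_voxels))       # all subjects are disease-free (null)
    s1, s2 = data.sum(0), (data ** 2).sum(0)
    # Leave-one-out z-score of each subject against the remaining 100, per voxel.
    mu = (s1 - data) / n_controls
    var = (s2 - data ** 2) / n_controls - mu ** 2
    z = (data - mu) / np.sqrt(var * n_controls / (n_controls - 1))
    t_max = z.max(axis=1)                                 # max statistic over voxels
    # Subject 0 plays the "patient"; the other subjects give the permutation null.
    p_fwe = (t_max[1:] >= t_max[0]).mean()
    fwe_hits += p_fwe < alpha

print(f"empirical family-wise FPR: {fwe_hits / n_comparisons:.3f} (nominal {alpha:.2f})")
```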

  2. Stimulus Equalization: Temporary Reduction of Stimulus Complexity to Facilitate Discrimination Learning.

    ERIC Educational Resources Information Center

    Hoko, J. Aaron; LeBlanc, Judith M.

    1988-01-01

    Because disabled learners may profit from procedures using gradual stimulus change, this study utilized a microcomputer to investigate the effectiveness of stimulus equalization, an error reduction procedure involving an abrupt but temporary reduction of dimensional complexity. The procedure was found to be generally effective and implications for…

  3. Chemical composition of cosmic rays with Z greater than or equal to 30 and E greater than or equal to 325 MeV/N

    NASA Technical Reports Server (NTRS)

    Binns, W. R.; Fernandez, J. I.; Israel, M. H.; Klarmann, J.; Maehl, R. C.; Mewaldt, R. A.

    1974-01-01

    Results are presented on the chemical composition of VVH cosmic rays from a series of six high-altitude balloon flights of a large-area, high-resolution electronic detector. The charge composition in the 32 less than or equal to Z less than or equal to 45 interval is found to be inconsistent with S-process nucleosynthesis. The energy spectrum of particles with Z greater than or equal to 32 between 600 and 1500 MeV/N at the top of the atmosphere is measured and is found to be consistent with the 25 less than or equal to Z less than or equal to 27 group within experimental error.

  4. Feedforward compensation for novel dynamics depends on force field orientation but is similar for the left and right arms.

    PubMed

    Reuter, Eva-Maria; Cunnington, Ross; Mattingley, Jason B; Riek, Stephan; Carroll, Timothy J

    2016-11-01

    There are well-documented differences in the way that people typically perform identical motor tasks with their dominant and the nondominant arms. According to Yadav and Sainburg's (Neuroscience 196: 153-167, 2011) hybrid-control model, this is because the two arms rely to different degrees on impedance control versus predictive control processes. Here, we assessed whether differences in limb control mechanisms influence the rate of feedforward compensation to a novel dynamic environment. Seventy-five healthy, right-handed participants, divided into four subsamples depending on the arm (left, right) and direction of the force field (ipsilateral, contralateral), reached to central targets in velocity-dependent curl force fields. We assessed the rate at which participants developed predictive compensation for the force field using intermittent error-clamp trials and assessed both kinematic errors and initial aiming angles in the field trials. Participants who were exposed to fields that pushed the limb toward ipsilateral space reduced kinematic errors more slowly, built up less predictive field compensation, and relied more on strategic reaiming than those exposed to contralateral fields. However, there were no significant differences in predictive field compensation or kinematic errors between limbs, suggesting that participants using either the left or the right arm could adapt equally well to novel dynamics. It therefore appears that the distinct preferences in control mechanisms typically observed for the dominant and nondominant arms reflect a default mode that is based on habitual functional requirements rather than an absolute limit in capacity to access the controller specialized for the opposite limb. Copyright © 2016 the American Physiological Society.

  5. Mixed pro and antisaccade performance in children and adults.

    PubMed

    Irving, Elizabeth L; Tajik-Parvinchi, Diana J; Lillakas, Linda; González, Esther G; Steinbach, Martin J

    2009-02-19

    Pro and antisaccades are usually presented in blocks of similar type but they can also be presented such that prosaccade and antisaccade eye movements are mixed and a cue, usually the shape/colour of the fixation target or the peripheral target, determines which type of eye movement is required in a particular trial. A mixed-saccade task theoretically equalizes the inhibitory requirements for pro and antisaccades. Using a mixed-saccade task paradigm the aims of the study were to: 1) compare pro and antisaccades of children, 2) compare performance of children and adults and 3) explore the effect of increased working memory load in adults. The eye movements of 22 children (5-12 years) and 22 adults (20-51 years) were examined using a video-based eye tracking system (El-Mar Series 2020 Eye Tracker, Toronto, Canada). The task was a mixed-saccade task of pro and antisaccades and the colour of the peripheral target was the cue for whether the required saccade was to be a pro or an antisaccade. The children performed the mixed-saccade task and 11 adults performed the same mixed-saccade task alone and in a dual-task paradigm (together with mental subtraction or number repetition). A second group of 11 adults performed the mixed-saccade task alone. Children made mainly antisaccade errors. The adults' error rates increased in the mental subtraction dual-task condition but both antisaccade and prosaccade errors were made. It was concluded that the increased error rates of these two groups are reflective of different processing dynamics.

  6. Feedforward compensation for novel dynamics depends on force field orientation but is similar for the left and right arms

    PubMed Central

    Cunnington, Ross; Mattingley, Jason B.; Riek, Stephan; Carroll, Timothy J.

    2016-01-01

    There are well-documented differences in the way that people typically perform identical motor tasks with their dominant and the nondominant arms. According to Yadav and Sainburg's (Neuroscience 196: 153–167, 2011) hybrid-control model, this is because the two arms rely to different degrees on impedance control versus predictive control processes. Here, we assessed whether differences in limb control mechanisms influence the rate of feedforward compensation to a novel dynamic environment. Seventy-five healthy, right-handed participants, divided into four subsamples depending on the arm (left, right) and direction of the force field (ipsilateral, contralateral), reached to central targets in velocity-dependent curl force fields. We assessed the rate at which participants developed predictive compensation for the force field using intermittent error-clamp trials and assessed both kinematic errors and initial aiming angles in the field trials. Participants who were exposed to fields that pushed the limb toward ipsilateral space reduced kinematic errors more slowly, built up less predictive field compensation, and relied more on strategic reaiming than those exposed to contralateral fields. However, there were no significant differences in predictive field compensation or kinematic errors between limbs, suggesting that participants using either the left or the right arm could adapt equally well to novel dynamics. It therefore appears that the distinct preferences in control mechanisms typically observed for the dominant and nondominant arms reflect a default mode that is based on habitual functional requirements rather than an absolute limit in capacity to access the controller specialized for the opposite limb. PMID:27582293

  7. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    PubMed

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
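
    A sketch of the corrections discussed above on hypothetical data, using a Poisson GLM with a log person-time offset as the piecewise exponential model; the covariates, rates and dispersion below are assumptions.

```python
# Sketch: quasi-likelihood, robust (sandwich) SEs and negative binomial fits in statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 2_000
age = rng.integers(0, 3, n)                       # follow-up interval (0, 1, 2), assumed
exposure = rng.uniform(0.1, 5.0, n)               # person-time at risk
X = sm.add_constant(np.column_stack([age == 1, age == 2]).astype(float))
mu = exposure * np.exp(-2.0 + 0.3 * (age == 1) + 0.6 * (age == 2))
deaths = rng.negative_binomial(2, 2 / (2 + mu))   # overdispersed counts (mean = mu)

poisson = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=np.log(exposure))
fit_plain = poisson.fit()                         # assumes mean == variance
fit_quasi = poisson.fit(scale="X2")               # quasi-likelihood: Pearson-based scale
fit_robust = poisson.fit(cov_type="HC0")          # robust (sandwich) standard errors
fit_nb = sm.GLM(deaths, X, family=sm.families.NegativeBinomial(alpha=0.5),
                offset=np.log(exposure)).fit()

for name, f in [("Poisson", fit_plain), ("quasi", fit_quasi),
                ("robust", fit_robust), ("neg-binomial", fit_nb)]:
    print(f"{name:>12}: SE(age2) = {f.bse[2]:.4f}")
```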

  8. Spatio-Temporal Equalizer for a Receiving-Antenna Feed Array

    NASA Technical Reports Server (NTRS)

    Mukai, Ryan; Lee, Dennis; Vilnrotter, Victor

    2010-01-01

    A spatio-temporal equalizer has been conceived as an improved means of suppressing multipath effects in the reception of aeronautical telemetry signals, and may be adaptable to radar and aeronautical communication applications as well. This equalizer would be an integral part of a system that would also include a seven-element planar array of receiving feed horns centered at the focal point of a paraboloidal antenna that would be nominally aimed at or near the aircraft that would be the source of the signal that one seeks to receive (see Figure 1). This spatio-temporal equalizer would consist mostly of a bank of seven adaptive finite-impulse-response (FIR) filters - one for each element in the array - and the outputs of the filters would be summed (see Figure 2). The combination of the spatial diversity of the feedhorn array and the temporal diversity of the filter bank would afford better multipath-suppression performance than is achievable by means of temporal equalization alone. The seven-element feed array would supplant the single feed horn used in a conventional paraboloidal ground telemetry-receiving antenna. The radio-frequency telemetry signals received by the seven elements of the array would be digitized, converted to complex baseband form, and sent to the FIR filter bank, which would adapt itself in real time to enable reception of telemetry at a low bit error rate, even in the presence of multipath of the type found at many flight test ranges.
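
    For readers who want a feel for the structure, the sketch below wires up a small bank of per-element FIR filters whose outputs are summed, and adapts all taps jointly with a normalized LMS rule against known training symbols. It is a generic toy illustration of a spatio-temporal equalizer, not the NASA design; the array size, tap count, channel model and step size are arbitrary assumptions.

      import numpy as np

      def spatio_temporal_nlms(received, training, n_taps=8, mu=0.5, eps=1e-9):
          """Adapt one FIR filter per array element and sum the filter outputs (NLMS).

          received : (n_elements, n_samples) complex baseband samples, one row per feed
          training : (n_samples,) known training symbols
          """
          n_el, n_smp = received.shape
          w = np.zeros((n_el, n_taps), dtype=complex)         # one FIR filter per element
          out = np.zeros(n_smp, dtype=complex)
          for k in range(n_taps - 1, n_smp):
              x = received[:, k - n_taps + 1:k + 1][:, ::-1]  # newest sample first, per element
              out[k] = np.sum(w.conj() * x)                   # filter each element, then sum
              err = training[k] - out[k]
              w += mu * err.conjugate() * x / (np.sum(np.abs(x) ** 2) + eps)
          return w, out

      # toy usage: 7-element array, BPSK training symbols seen through different per-element channels
      rng = np.random.default_rng(1)
      symbols = rng.choice([-1.0, 1.0], size=4000)
      channels = rng.normal(size=(7, 3))                      # a random 3-tap channel per element
      rx = np.stack([np.convolve(symbols, h)[:symbols.size] for h in channels])
      rx = rx + 0.05 * rng.normal(size=rx.shape)
      w, y = spatio_temporal_nlms(rx, symbols)
      print("training-symbol error rate:", np.mean(np.sign(y[500:].real) != symbols[500:]))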

  9. Local Observability Analysis of Star Sensor Installation Errors in a SINS/CNS Integration System for Near-Earth Flight Vehicles

    PubMed Central

    Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen

    2017-01-01

    Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high precision method, which has been widely used to improve the hitting accuracy and quick reaction capability of near-Earth flight vehicles. The installation errors between SINS and star sensors have been one of the main factors that restrict the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on the star vector observations is derived considering the star sensor installation error. Then, the star sensor installation error is accurately estimated based on Kalman Filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained from the linearized observation equation, and the observability conditions are presented and validated: the number of star vectors should be greater than or equal to 2, and the number of posture adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles. PMID:28275211
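
    The observability-matrix rank test mentioned above can be sketched generically as follows. The state-transition and measurement matrices here are random placeholders standing in for a linearized model, not the paper's SINS/CNS error dynamics.

      import numpy as np

      def observability_rank(F, H, steps=None):
          """Rank of the local observability matrix O = [H; HF; HF^2; ...] for x_{k+1} = F x_k, z_k = H x_k."""
          n = F.shape[0]
          steps = n if steps is None else steps
          blocks, M = [], H.copy()
          for _ in range(steps):
              blocks.append(M)
              M = M @ F
          O = np.vstack(blocks)
          return np.linalg.matrix_rank(O), n

      # toy usage with placeholder matrices (not the SINS/CNS model from the paper)
      rng = np.random.default_rng(0)
      F = np.eye(6) + 0.01 * rng.normal(size=(6, 6))   # linearized state-transition matrix
      H = rng.normal(size=(2, 6))                       # measurement matrix for one star vector
      rank, n = observability_rank(F, H)
      print(f"rank {rank} of {n}: {'fully' if rank == n else 'not fully'} observable")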

  10. Invariance of parent ratings of the ADHD symptoms in Australian and Malaysian, and north European Australian and Malay Malaysia children: a mean and covariance structures analysis approach.

    PubMed

    Gomez, Rapson

    2009-03-01

    This study used the mean and covariance structures analysis approach to examine the equality or invariance of ratings of the 18 ADHD symptoms. 783 Australian and 928 Malaysian parents provided ratings for an ADHD rating scale. Invariance was tested across these groups (Comparison 1), and North European Australian (n = 623) and Malay Malaysian (n = 571, Comparison 2) groups. Results indicate support for form and item factor loading invariance; more than half the total number of symptoms showed item intercept invariance, and 14 symptoms showed invariance for error variances. There was invariance for both the factor variances and the covariance, and the latent mean scores for hyperactivity/impulsivity. For inattention latent scores, the Malaysian (Comparison 1) and Malay Malaysian (Comparison 2) groups had higher scores. These results indicate fairly good support for invariance for parent ratings of the ADHD symptoms across the groups compared.

  11. Two-sample binary phase 2 trials with low type I error and low sample size

    PubMed Central

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A.

    2017-01-01

    Summary We address design of two-stage clinical trials comparing experimental and control patients. Our end-point is success or failure, however measured, with null hypothesis that the chance of success in both arms is p0 and alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E – C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. PMID:28118686
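
    A hedged, single-stage illustration of combining the two rule types is sketched below: it enumerates the exact probability of rejecting when both E >= m and E - C > r hold, under binomial sampling with 2:1 randomization. The AND combination, sample sizes and cut-offs are illustrative assumptions and are not taken from the paper's two-stage designs.

      import numpy as np
      from scipy.stats import binom

      def reject_probability(p_exp, p_ctl, n_exp, n_ctl, m, r):
          """P(E >= m and E - C > r) when E ~ Bin(n_exp, p_exp) and C ~ Bin(n_ctl, p_ctl)."""
          pe = binom.pmf(np.arange(n_exp + 1), n_exp, p_exp)
          pc = binom.pmf(np.arange(n_ctl + 1), n_ctl, p_ctl)
          prob = 0.0
          for e in range(m, n_exp + 1):
              # sum over control counts c with e - c > r, i.e. c <= e - r - 1
              c_max = min(n_ctl, e - r - 1)
              if c_max >= 0:
                  prob += pe[e] * pc[: c_max + 1].sum()
          return prob

      # toy numbers (illustrative, not from the paper): 2:1 randomization, p0 = 0.2, p1 = 0.4
      alpha = reject_probability(0.2, 0.2, n_exp=40, n_ctl=20, m=12, r=4)   # type I error at the null
      power = reject_probability(0.4, 0.2, n_exp=40, n_ctl=20, m=12, r=4)   # power at the alternative
      print(f"type I error = {alpha:.4f}, power = {power:.4f}")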

  12. A Stationary North-Finding Scheme for an Azimuth Rotational IMU Utilizing a Linear State Equality Constraint

    PubMed Central

    Yu, Huapeng; Zhu, Hai; Gao, Dayuan; Yu, Meng; Wu, Wenqi

    2015-01-01

    The Kalman filter (KF) has always been used to improve north-finding performance under practical conditions. By analyzing the characteristics of the azimuth rotational inertial measurement unit (ARIMU) on a stationary base, a linear state equality constraint for the conventional KF used in the fine north-finding filtering phase is derived. Then, a constrained KF using the state equality constraint is proposed and studied in depth. Estimation behaviors of the concerned navigation errors when implementing the conventional KF scheme and the constrained KF scheme during stationary north-finding are investigated analytically by the stochastic observability approach, which can provide explicit formulations of the navigation errors with influencing variables. Finally, multiple practical experimental tests at a fixed position are performed on a prototype system to compare the stationary north-finding performance of the two filtering schemes. In conclusion, this study has successfully extended the utilization of the stochastic observability approach for analytic descriptions of estimation behaviors of the concerned navigation errors, and the constrained KF scheme has demonstrated its superiority over the conventional KF scheme for ARIMU stationary north-finding both theoretically and practically. PMID:25688588
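
    One standard way to impose a linear state equality constraint on a conventional KF estimate is the estimate-projection method sketched below. It is a generic illustration of the technique, not necessarily the exact constrained KF used in the paper; the state, covariance and constraint values are placeholders.

      import numpy as np

      def project_onto_constraint(x, P, D, d):
          """Project a KF estimate (x, P) onto the linear equality constraint D x = d
          using the minimum-variance (P-weighted) estimate-projection method."""
          S = D @ P @ D.T
          K = P @ D.T @ np.linalg.inv(S)
          x_c = x - K @ (D @ x - d)
          P_c = P - K @ D @ P
          return x_c, P_c

      # toy usage: enforce that the sum of the first two error states is zero
      x = np.array([0.12, -0.05, 0.30])
      P = np.diag([0.04, 0.02, 0.10])
      D = np.array([[1.0, 1.0, 0.0]])
      d = np.array([0.0])
      x_c, P_c = project_onto_constraint(x, P, D, d)
      print(x_c, D @ x_c)   # the constrained estimate now satisfies D x = 0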

  13. Mechanisms underlying the influence of saliency on value-based decisions

    PubMed Central

    Chen, Xiaomo; Mihalas, Stefan; Niebur, Ernst; Stuphorn, Veit

    2013-01-01

    Objects in the environment differ in their low-level perceptual properties (e.g., how easily a fruit can be recognized) as well as in their subjective value (how tasty it is). We studied the influence of visual salience on value-based decisions using a two alternative forced choice task, in which human subjects rapidly chose items from a visual display. All targets were equally easy to detect. Nevertheless, both value and salience strongly affected choices made and reaction times. We analyzed the neuronal mechanisms underlying these behavioral effects using stochastic accumulator models, allowing us to characterize not only the averages of reaction times but their full distributions. Independent models without interaction between the possible choices failed to reproduce the observed choice behavior, while models with mutual inhibition between alternative choices produced much better results. Mutual inhibition thus is an important feature of the decision mechanism. Value influenced the amount of accumulation in all models. In contrast, increased salience could either lead to an earlier start (onset model) or to a higher rate (speed model) of accumulation. Both models explained the data from the choice trials equally well. However, salience also affected reaction times in no-choice trials in which only one item was present, as well as error trials. Only the onset model could explain the observed reaction time distributions of error trials and no-choice trials. In contrast, the speed model could not, irrespective of whether the rate increase resulted from more frequent accumulated quanta or from larger quanta. Visual salience thus likely provides an advantage in the onset, not in the processing speed, of value-based decision making. PMID:24167161

  14. The development of an automatic recognition system for earmark and earprint comparisons.

    PubMed

    Junod, Stéphane; Pasquier, Julien; Champod, Christophe

    2012-10-10

    The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm systematic and structured data basis to help practitioners form their conclusions. Typically, there is a paucity of research guiding practitioners as to the selectivity of the features used in the comparison process between an earmark and reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks, operating without any manual extraction of key-points or manual annotations. For each donor, a model is created using multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows deriving a likelihood ratio that can be explored under known states of affairs (both in cases where it is known that the mark was left by the donor that gave the model and, conversely, in cases where it is established that the mark originates from a different source). To assess the system performance, a first dataset containing 1229 donors, compiled during the FearID research project, was used. Based on these data, for mark-to-print comparisons, the system performed with an equal error rate (EER) of 2.3% and about 88% of marks are found in the first 3 positions of a hitlist. When performing print-to-print transactions, results show an equal error rate of 0.5%. The system was then tested using real-case data obtained from police forces. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
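
    A minimal version of the proximity score described above, a normalized 2D correlation coefficient between an already-aligned mark and a model image, is sketched below. The registration step is omitted and the images are synthetic; array names are illustrative.

      import numpy as np

      def normalized_correlation(mark, model):
          """Normalized 2D correlation coefficient between two aligned, same-size grayscale images."""
          a = mark.astype(float) - mark.mean()
          b = model.astype(float) - model.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return float((a * b).sum() / denom) if denom > 0 else 0.0

      # toy usage: a mark compared against a same-source model and a different-source model
      rng = np.random.default_rng(0)
      model_img = rng.random((64, 64))
      mark_img = model_img + 0.2 * rng.random((64, 64))     # same-source mark, with noise
      other_img = rng.random((64, 64))                       # different-source model
      print(normalized_correlation(mark_img, model_img), normalized_correlation(mark_img, other_img))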

  15. Smartphone Text Input Method Performance, Usability, and Preference With Younger and Older Adults.

    PubMed

    Smith, Amanda L; Chaparro, Barbara S

    2015-09-01

    User performance, perceived usability, and preference for five smartphone text input methods were compared among younger and older novice adults. Smartphones are used for a variety of functions other than phone calls, including text messaging, e-mail, and web browsing. Research comparing performance with methods of text input on smartphones reveals a high degree of variability in reported measures, procedures, and results. This study reports on a direct comparison of five of the most common input methods among a population of younger and older adults who had no experience with any of the methods. Fifty adults (25 younger, 18-35 years; 25 older, 60-84 years) completed a text entry task using five text input methods (physical Qwerty, onscreen Qwerty, tracing, handwriting, and voice). Entry and error rates, perceived usability, and preference were recorded. Both age groups input text equally fast using voice input, but older adults were slower than younger adults using all other methods. Both age groups had low error rates when using physical Qwerty and voice, but older adults committed more errors with the other three methods. Both younger and older adults preferred voice and physical Qwerty input to the remaining methods. Handwriting consistently performed the worst and was rated lowest by both groups. Voice and physical Qwerty input methods proved to be the most effective for both younger and older adults, and handwriting input was the least effective overall. These findings have implications for the design of future smartphone text input methods and devices, particularly for older adults. © 2015, Human Factors and Ergonomics Society.

  16. Monte Carlo Simulations Comparing Fisher Exact Test and Unequal Variances t Test for Analysis of Differences Between Groups in Brief Hospital Lengths of Stay.

    PubMed

    Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U

    2017-12-01

    We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) percentage of hospital LOS that are overnight. These 2 end points are suitable when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. Unequal variances t test (Welch method) and Fisher exact test both were conservative (ie, type I error rate less than nominal level). The Wilcoxon rank sum test was included as a comparator; the type I error rates did not differ from the nominal level of 0.05 or 0.01. Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; the estimated odds ratio for obtaining P < .05 with Fisher exact test versus unequal variances t test was 1.94 (95% confidence interval, 1.31-3.01). Fisher exact test and Wilcoxon-Mann-Whitney had comparable statistical power in terms of differentiating LOS between hospitals. For studies in which LOS is a secondary end point of economic interest, there is currently considerable interest in planning the analysis around the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either Welch method or Wilcoxon rank sum test.
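
    The comparison can be mimicked in a few lines: simulate two hospitals with the same underlying probability of an overnight stay, and record how often Fisher's exact test and the unequal variances (Welch) t test reject at the 5% level. Group sizes, the event probability and the number of simulations are arbitrary assumptions, so the numbers will not match the article's resampling of real discharges.

      import numpy as np
      from scipy.stats import fisher_exact, ttest_ind

      rng = np.random.default_rng(0)
      n_a, n_b, p_overnight, n_sim, alpha = 80, 80, 0.3, 2000, 0.05
      fisher_rej = welch_rej = 0
      for _ in range(n_sim):
          a = rng.binomial(1, p_overnight, n_a)       # 1 = stay of 0 or 1 midnight, say
          b = rng.binomial(1, p_overnight, n_b)
          table = [[a.sum(), n_a - a.sum()], [b.sum(), n_b - b.sum()]]
          fisher_rej += fisher_exact(table)[1] < alpha
          welch_rej += ttest_ind(a, b, equal_var=False)[1] < alpha
      print("empirical type I error  Fisher:", fisher_rej / n_sim, " Welch:", welch_rej / n_sim)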

  17. A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov

    NASA Astrophysics Data System (ADS)

    Greenough, J. A.; Rider, W. J.

    2004-05-01

    A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory numerical (WENO5) method to a modern piecewise-linear, second-order, version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem, run times, density error norms and convergence rates are reported for each method as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, measured against either an exact solution or, when an analytic solution is not available, a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error with a slight edge for the PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids, but favor WENO5 by a factor of nearly 1.5 on the finer grids used. Holding mesh resolution constant, however, PLMDE is less costly in terms of CPU time by approximately a factor of 6 in all cases. If the CPU cost is taken as fixed, that is, run times are equal for both numerical methods, then PLMDE uniformly produces lower errors than WENO5 on the test problems considered here.

  18. 5 CFR 1604.6 - Error correction.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... a service member requesting that a TSP contribution be deducted from bonus pay. Within 30 days of... times the number of months it would take for the service member to earn basic pay equal to the dollar... less than twice the number of months it would take for the service member to earn basic pay equal to...

  19. A theory for predicting composite laminate warpage resulting from fabrication

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1975-01-01

    Linear laminate theory is used in conjunction with the moment-curvature relationship to derive equations for predicting end deflections due to warpage without solving the coupled fourth-order partial differential equations of the plate. Using these equations, it is found that a 1 deg error in the orientation angle of one ply is sufficient to produce warpage end deflection equal to two laminate thicknesses in a 10 inch by 10 inch laminate made from 8-ply Mod-I/epoxy. From a sensitivity analysis on the governing parameters, it is found that a 3 deg fiber migration or a void volume ratio of three percent in some plies is sufficient to produce laminate warpage corner deflection equal to several laminate thicknesses. Tabular and graphical data are presented which can be used to identify possible errors contributing to laminate warpage and/or to obtain an a priori assessment when unavoidable errors during fabrication are anticipated.

  20. Automated retina identification based on multiscale elastic registration.

    PubMed

    Figueiredo, Isabel N; Moura, Susana; Neves, Júlio S; Pinto, Luís; Kumar, Sunil; Oliveira, Carlos M; Ramos, João D

    2016-12-01

    In this work we propose a novel method for identifying individuals based on retinal fundus image matching. The method is based on the image registration of retina blood vessels, since it is known that the retina vasculature of an individual is a signature, i.e., a distinctive pattern of the individual. The proposed image registration consists of a multiscale affine registration followed by a multiscale elastic registration. The major advantage of this particular two-step image registration procedure is that it is able to account for both rigid and non-rigid deformations either inherent to the retina tissues or resulting from the imaging process itself. Afterwards a decision identification measure, relying on a suitable normalized function, is defined to decide whether or not a pair of images belongs to the same individual. The method is tested on a data set of 21721 real pairs generated from a total of 946 retinal fundus images of 339 different individuals, comprising patients followed in the context of different retinal diseases as well as healthy patients. The evaluation of its performance reveals that it achieves a very low false rejection rate (FRR) at zero false acceptance rate (FAR), equal to 0.084, as well as a low equal error rate (EER), equal to 0.053. Moreover, tests performed using only the multiscale affine registration, and discarding the multiscale elastic registration, clearly show the advantage of the proposed approach. The outcome of this study also indicates that the proposed method is reliable and competitive with other existing retinal identification methods, and suggests its suitability for real-life applications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Learning curve of speech recognition.

    PubMed

    Kauppinen, Tomi A; Kaipio, Johanna; Koivikko, Mika P

    2013-12-01

    Speech recognition (SR) speeds patient care processes by reducing report turnaround times. However, concerns have emerged about prolonged training and an added secretarial burden for radiologists. We assessed how much proofing radiologists who have years of experience with SR and radiologists new to SR must perform, and estimated how quickly the new users become as skilled as the experienced users. We studied SR log entries for 0.25 million reports from 154 radiologists and, after careful exclusions, defined a group of 11 experienced radiologists and 71 radiologists new to SR (24,833 and 122,093 reports, respectively). Data were analyzed for sound file and report lengths, character-based error rates, and words unknown to the SR's dictionary. Experienced radiologists corrected 6 characters per report; new users corrected 11. Some users presented a very unfavorable learning curve, with error rates not declining as expected. New users' reports were longer, and data for the experienced users indicate that their reports, initially equally lengthy, shortened over a period of several years. For most radiologists, only minor corrections of dictated reports were necessary. While new users adopted SR quickly, with a subset outperforming experienced users from the start, identification of users struggling with SR will help facilitate troubleshooting and support.

  2. Security and matching of partial fingerprint recognition systems

    NASA Astrophysics Data System (ADS)

    Jea, Tsai-Yang; Chavan, Viraj S.; Govindaraju, Venu; Schneider, John K.

    2004-08-01

    Despite advances in fingerprint identification techniques, matching incomplete or partial fingerprints still poses a difficult challenge. While the introduction of compact silicon chip-based sensors that capture only a part of the fingerprint area has made this problem important from a commercial perspective, there is also considerable interest in the topic for processing partial and latent fingerprints obtained at crime scenes. Attempts to match partial fingerprints using alignment techniques based on singular ridge structures fail when the partial print does not include such structures (e.g., core or delta). We present a multi-path fingerprint matching approach that utilizes localized secondary features derived using only the relative information of minutiae. Since the minutia-based fingerprint representation is an ANSI-NIST standard, our approach has the advantage of being directly applicable to already existing databases. We also analyze the vulnerability of partial fingerprint identification systems to brute force attacks. The described matching approach has been tested on the FVC2002 DB1 database. The experimental results show that our approach achieves an equal error rate of 1.25% and a total error rate of 1.8% (with FAR at 0.2% and FRR at 1.6%).
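
    For reference, the equal error rate reported above is the operating point where FAR and FRR coincide; a minimal way to estimate it from genuine and impostor similarity scores is sketched below. The score distributions here are synthetic assumptions, not FVC2002 results.

      import numpy as np

      def equal_error_rate(genuine, impostor):
          """Estimate the EER by sweeping a decision threshold over similarity scores
          (higher score = more likely a genuine match)."""
          thresholds = np.sort(np.concatenate([genuine, impostor]))
          far = np.array([(impostor >= t).mean() for t in thresholds])   # false accepts
          frr = np.array([(genuine < t).mean() for t in thresholds])     # false rejects
          i = np.argmin(np.abs(far - frr))
          return (far[i] + frr[i]) / 2.0, thresholds[i]

      # toy usage with synthetic, overlapping score distributions
      rng = np.random.default_rng(0)
      genuine = rng.normal(0.70, 0.10, 5000)     # same-finger comparison scores
      impostor = rng.normal(0.40, 0.10, 50000)   # different-finger comparison scores
      eer, thr = equal_error_rate(genuine, impostor)
      print(f"EER ~ {eer:.3%} at threshold {thr:.3f}")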

  3. Infrequent identity mismatches are frequently undetected

    PubMed Central

    Goldinger, Stephen D.

    2014-01-01

    The ability to quickly and accurately match faces to photographs bears critically on many domains, from controlling purchase of age-restricted goods to law enforcement and airport security. Despite its pervasiveness and importance, research has shown that face matching is surprisingly error prone. The majority of face-matching research is conducted under idealized conditions (e.g., using photographs of individuals taken on the same day) and with equal proportions of match and mismatch trials, a rate that is likely not observed in everyday face matching. In four experiments, we presented observers with photographs of faces taken an average of 1.5 years apart and tested whether face-matching performance is affected by the prevalence of identity mismatches, comparing conditions of low (10 %) and high (50 %) mismatch prevalence. Like the low-prevalence effect in visual search, we observed inflated miss rates under low-prevalence conditions. This effect persisted when participants were allowed to correct their initial responses (Experiment 2), when they had to verify every decision with a certainty judgment (Experiment 3) and when they were permitted “second looks” at face pairs (Experiment 4). These results suggest that, under realistic viewing conditions, the low-prevalence effect in face matching is a large, persistent source of errors. PMID:24500751

  4. Bit error rate analysis of the K channel using wavelength diversity

    NASA Astrophysics Data System (ADS)

    Shah, Dhaval; Kothari, Dilip Kumar; Ghosh, Anjan K.

    2017-05-01

    The presence of atmospheric turbulence in free space causes fading and degrades the performance of a free space optical (FSO) system. To mitigate the turbulence-induced fading, multiple copies of the signal can be transmitted on different wavelengths, so that each copy undergoes different fading. This is known as the wavelength diversity technique. Bit error rate (BER) performance of FSO systems with wavelength diversity under strong turbulence conditions is investigated. The K-distribution is chosen to model the strong turbulence scenario. The source information is transmitted on three carrier wavelengths of 1.55, 1.31, and 0.85 μm. The signals at the receiver side are combined using three different methods: optical combining (OC), equal gain combining (EGC), and selection combining (SC). Mathematical expressions are derived for the calculation of the BER for all three schemes (OC, EGC, and SC). Results are presented for link distances of 2 and 3 km under strong turbulence conditions for all the combining methods. The performance of all three schemes is also compared. It is observed that OC provides better performance than the other two techniques. Results of the proposed method are also compared with those of a previously published article.
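
    The sketch below gives a rough numerical feel for why combining helps: three independently faded copies of the signal are generated from a K-distributed irradiance model (sampled here via the usual exponential-gamma compound construction, which is an assumption on my part), and an outage-style probability is compared for a single branch, equal gain combining, and selection combining. It does not reproduce the closed-form BER expressions derived in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      alpha, n_branches, n_trials = 2.0, 3, 200_000   # alpha: K-distribution shape parameter

      # K-distributed irradiance as a compound model: exponential speckle modulated by a
      # gamma-distributed mean (unit average irradiance); this construction is an assumption here.
      mean_irr = rng.gamma(shape=alpha, scale=1.0 / alpha, size=(n_trials, n_branches))
      irr = rng.exponential(scale=mean_irr)

      single = irr[:, 0]
      egc = irr.mean(axis=1)           # equal gain combining (average of branch irradiances)
      sc = irr.max(axis=1)             # selection combining (strongest branch)

      threshold = 0.1                  # illustrative outage threshold on normalized irradiance
      for name, x in [("single", single), ("EGC", egc), ("SC", sc)]:
          print(f"{name:>6}: P(I < {threshold}) = {np.mean(x < threshold):.4f}")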

  5. A digital clock recovery algorithm based on chromatic dispersion and polarization mode dispersion feedback dual phase detection for coherent optical transmission systems

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Xin, Xiangjun; Zhang, Lijia; Wang, Fu; Zhang, Qi

    2018-02-01

    A new feedback symbol timing recovery technique using timing estimation joint equalization is proposed for digital receivers with two samples/symbol or higher sampling rates. Different from traditional methods, the clock recovery algorithm in this paper adopts an algorithm that distinguishes the phases of adjacent symbols, so as to accurately estimate the timing offset based on adjacent signals with the same phase. The addition of a module for eliminating phase modulation interference before timing estimation further reduces the variance, thus resulting in a smoothed timing estimate. The Mean Square Error (MSE) and Bit Error Rate (BER) of the resulting timing estimate are simulated to assess the estimation performance. The obtained clock tone performance is satisfactory for MQAM modulation formats and Roll-off Factors (ROF) close to 0. In the back-to-back system, when ROF = 0, the maximum MSE obtained with the proposed approach is 0.0125. After 100-km fiber transmission, the BER decreases to 10⁻³ with ROF = 0 and OSNR = 11 dB. As ROF increases, the MSE and BER performance improve.

  6. Breaches of health information: are electronic records different from paper records?

    PubMed

    Sade, Robert M

    2010-01-01

    Breaches of electronic medical records constitute a type of healthcare error, but should be considered separately from other types of errors because the national focus on the security of electronic data justifies special treatment of medical information breaches. Guidelines for protecting electronic medical records should be applied equally to paper medical records.

  7. Combining Cryptography with EEG Biometrics

    PubMed Central

    Damaševičius, Robertas; Maskeliūnas, Rytis; Kazanavičius, Egidijus; Woźniak, Marcin

    2018-01-01

    Cryptographic frameworks depend on key sharing for ensuring security of data. While the keys in cryptographic frameworks must be correctly reproducible and not unequivocally connected to the identity of a user, in biometric frameworks this is different. Joining cryptography techniques with biometrics can solve these issues. We present a biometric authentication method based on the discrete logarithm problem and Bose-Chaudhuri-Hocquenghem (BCH) codes, perform its security analysis, and demonstrate its security characteristics. We evaluate a biometric cryptosystem using our own dataset of electroencephalography (EEG) data collected from 42 subjects. The experimental results show that the described biometric user authentication system is effective, achieving an Equal Error Rate (EER) of 0.024.

  8. Combining Cryptography with EEG Biometrics.

    PubMed

    Damaševičius, Robertas; Maskeliūnas, Rytis; Kazanavičius, Egidijus; Woźniak, Marcin

    2018-01-01

    Cryptographic frameworks depend on key sharing for ensuring security of data. While the keys in cryptographic frameworks must be correctly reproducible and not unequivocally connected to the identity of a user, in biometric frameworks this is different. Joining cryptography techniques with biometrics can solve these issues. We present a biometric authentication method based on the discrete logarithm problem and Bose-Chaudhuri-Hocquenghem (BCH) codes, perform its security analysis, and demonstrate its security characteristics. We evaluate a biometric cryptosystem using our own dataset of electroencephalography (EEG) data collected from 42 subjects. The experimental results show that the described biometric user authentication system is effective, achieving an Equal Error Rate (EER) of 0.024.

  9. Experimental demonstration of optical stealth transmission over wavelength-division multiplexing network.

    PubMed

    Zhu, Huatao; Wang, Rong; Pu, Tao; Fang, Tao; Xiang, Peng; Zheng, Jilin; Tang, Yeteng; Chen, Dalei

    2016-08-10

    We propose and experimentally demonstrate an optical stealth transmission system over a 200 GHz-grid wavelength-division multiplexing (WDM) network. The stealth signal is processed by spectral broadening, temporal spreading, and power equalizing. The public signal is suppressed by multiband notch filtering at the stealth channel receiver. The interaction between the public and stealth channels is investigated in terms of public-signal-to-stealth-signal ratio, data rate, notch-filter bandwidth, and number of public channels. The stealth signal can be transmitted over 80 km of single-mode fiber without error. Our experimental results verify the feasibility of optical steganography over the existing WDM-based optical network.

  10. Linewidth-tolerant real-time 40-Gbit/s 16-QAM self-homodyne detection using a pilot carrier and ISI suppression based on electronic digital processing.

    PubMed

    Nakamura, Moriya; Kamio, Yukiyoshi; Miyazaki, Tetsuya

    2010-01-01

    We experimentally demonstrate linewidth-tolerant real-time 40-Gbit/s (10-Gsymbol/s) 16-quadrature amplitude modulation. We achieved bit-error rates of <10⁻⁹ using an external-cavity laser diode with a linewidth of 200 kHz and <10⁻⁷ using a distributed-feedback laser diode with a linewidth of 30 MHz, thanks to the phase-noise canceling capability provided by self-homodyne detection using a pilot carrier. Pre-equalization based on digital signal processing was employed to suppress intersymbol interference caused by the limited frequency bandwidth of electrical components.

  11. Predicted Blood Glucose from Insulin Administration Based on Values from Miscoded Glucose Meters

    PubMed Central

    Raine, Charles H.; Pardo, Scott; Parkes, Joan Lee

    2008-01-01

    Objectives The proper use of many types of self-monitored blood glucose (SMBG) meters requires calibration to match strip code. Studies have demonstrated the occurrence and impact on insulin dose of coding errors with SMBG meters. This paper reflects additional analyses performed with data from Raine et al. (JDST, 2:205–210, 2007). It attempts to relate potential insulin dose errors to possible adverse blood glucose outcomes when glucose meters are miscoded. Methods Five sets of glucose meters were used. Two sets of meters were autocoded and therefore could not be miscoded, and three sets required manual coding. Two of each set of manually coded meters were deliberately miscoded, and one from each set was properly coded. Subjects (n = 116) had finger stick blood glucose obtained at fasting, as well as at 1 and 2 hours after a fixed meal (Boost®; Novartis Medical Nutrition U.S., Basel, Switzerland). Deviations of meter blood glucose results from the reference method (YSI) were used to predict insulin dose errors and resultant blood glucose outcomes based on these deviations. Results Using insulin sensitivity data, it was determined that, given an actual blood glucose of 150–400 mg/dl, an error greater than +40 mg/dl would be required to calculate an insulin dose sufficient to produce a blood glucose of less than 70 mg/dl. Conversely, an error less than or equal to -70 mg/dl would be required to derive an insulin dose insufficient to correct an elevated blood glucose to less than 180 mg/dl. For miscoded meters, the estimated probability to produce a blood glucose reduction to less than or equal to 70 mg/dl was 10.40%. The corresponding probabilities for autocoded and correctly coded manual meters were 2.52% (p < 0.0001) and 1.46% (p < 0.0001), respectively. Furthermore, the errors from miscoded meters were large enough to produce a calculated blood glucose outcome less than or equal to 50 mg/dl in 42 of 833 instances. Autocoded meters produced zero (0) outcomes less than or equal to 50 mg/dl out of 279 instances, and correctly coded manual meters produced 1 of 416. Conclusions Improperly coded blood glucose meters present the potential for insulin dose errors and resultant clinically significant hypoglycemia or hyperglycemia. Patients should be instructed and periodically reinstructed in the proper use of blood glucose meters, particularly for meters that require coding. PMID:19885229

  12. Prevalence of refractive error in malay primary school children in suburban area of Kota Bharu, Kelantan, Malaysia.

    PubMed

    Hashim, Syaratul-Emma; Tan, Hui-Ken; Wan-Hazabbah, W H; Ibrahim, Mohtar

    2008-11-01

    Refractive error remains one of the primary causes of visual impairment in children worldwide, and the prevalence of refractive error varies widely. The objective of this study was to determine the prevalence of refractive error and study the possible associated factors inducing refractive error among primary school children of Malay ethnicity in the suburban area of Kota Bharu, Kelantan, Malaysia. A school-based cross-sectional study was performed from January to July 2006 by random selection on Standard 1 to Standard 6 students of 10 primary schools in the Kota Bharu district. Visual acuity was assessed using the logMAR ETDRS chart. Uncorrected visual acuity equal to or worse than 20/40 was used as a cut-off point for further evaluation by automated refraction and retinoscopic refraction. A total of 840 students were enumerated but only 705 were examined. Uncorrected visual impairment was seen in 54 (7.7%) children. The main cause of the uncorrected visual impairment was refractive error, which accounted for 90.7% of cases, a prevalence of 7.0% in the studied population. Myopia is the most common type of refractive error among children aged 6 to 12 years, with a prevalence of 5.4%, followed by hyperopia at 1.0% and astigmatism at 0.6%. A significant positive correlation was noted between myopia development and increasing age (P <0.005), more hours spent reading books (P <0.005), a background history of siblings with glasses (P <0.005), and parents of higher educational level (P <0.005). Malays in suburban Kelantan (5.4%) have the lowest prevalence of myopia compared with Malays in the metropolitan cities of Kuala Lumpur (9.2%) and Singapore (22.1%). The ethnicity-specific prevalence rate of myopia was thus lowest among Malays in Kota Bharu, followed by Kuala Lumpur, and highest among Singaporean Malays. Better socio-economic factors could have contributed to higher myopia rates in the cities, since the genetic background of these ethnic Malays is similar.

  13. 10-Gbps optical duobinary signal generated by bandwidth-limited reflective semiconductor optical amplifier in colorless optical network units and compensated by fiber Bragg grating-based equalizer in optical line terminal

    NASA Astrophysics Data System (ADS)

    Fu, Meixia; Zhang, Min; Wang, Danshi; Cui, Yue; Han, Huanhuan

    2016-10-01

    We propose a scheme of optical duobinary-modulated upstream transmission for reflective semiconductor optical amplifier-based colorless optical network units in a 10-Gbps wavelength-division multiplexed passive optical network (WDM-PON), where a fiber Bragg grating (FBG) is adopted as an optical equalizer for better performance. The demodulation module is extremely simple, needing only a binary intensity modulation direct detection receiver. A receiver sensitivity of -16.98 dBm at a bit error rate (BER) of 1.0×10⁻⁴ can be achieved at 120 km without the FBG, and with the FBG a BER of 2.1×10⁻⁵ at a sensitivity of -18.49 dBm can be achieved at a transmission distance of 160 km, which demonstrates the feasibility of our proposed scheme. Moreover, it could be a highly cost-effective scheme for WDM-PON in the future.

  14. Adaptive Filter Design Using Type-2 Fuzzy Cerebellar Model Articulation Controller.

    PubMed

    Lin, Chih-Min; Yang, Ming-Shu; Chao, Fei; Hu, Xiao-Min; Zhang, Jun

    2016-10-01

    This paper proposes an efficient network and applies it as an adaptive filter for signal processing problems. An adaptive filter is proposed using a novel interval type-2 fuzzy cerebellar model articulation controller (T2FCMAC). The T2FCMAC realizes an interval type-2 fuzzy logic system based on the structure of the CMAC. Owing to their better ability to handle uncertainties, type-2 fuzzy sets can solve some complicated problems more effectively than type-1 fuzzy sets. In addition, the Lyapunov function is utilized to derive the conditions on the adaptive learning rates, so that the convergence of the filtering error can be guaranteed. In order to demonstrate the performance of the proposed adaptive T2FCMAC filter, it is tested in signal processing applications, including a nonlinear channel equalization system, a time-varying channel equalization system, and an adaptive noise cancellation system. The advantages of the proposed filter over other adaptive filters are verified through simulations.

  15. Using the Mean Shift Algorithm to Make Post Hoc Improvements to the Accuracy of Eye Tracking Data Based on Probable Fixation Locations

    DTIC Science & Technology

    2010-08-01

    astigmatism and other sources, and stay constant from time to time (LC Technologies, 2000). Systematic errors can sometimes reach many degrees of visual angle...Taking the average of all disparities would mean treating each as equally important regardless of whether they are from correct or incorrect mappings. In...likely stop somewhere near the centroid because the large hM basically treats every point equally (or nearly equally if using the multivariate

  16. Using SEM to Analyze Complex Survey Data: A Comparison between Design-Based Single-Level and Model-Based Multilevel Approaches

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-man

    2012-01-01

    Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…

  17. Vectorization of optically sectioned brain microvasculature: learning aids completion of vascular graphs by connecting gaps and deleting open-ended segments.

    PubMed

    Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David

    2012-08-01

    A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800³ voxels, obtained with all optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error error rates for threshold relaxation by 5-21% and strand elimination performance by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Vectorization of optically sectioned brain microvasculature: Learning aids completion of vascular graphs by connecting gaps and deleting open-ended segments

    PubMed Central

    Kaufhold, John P.; Tsai, Philbert S.; Blinder, Pablo; Kleinfeld, David

    2012-01-01

    A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by “learned threshold relaxation”; (2) removes spurious segments by “learning to eliminate deletion candidate strands”; and (3) enforces consistency in the joint space of learned vascular graph corrections through “consistency learning.” Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with > 800³ voxels, obtained with all optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error error rates for threshold relaxation by 5 to 21 % and strand elimination performance by 18 to 57 %. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. PMID:22854035

  19. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    NASA Astrophysics Data System (ADS)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length and a corresponding decoding scheme is proposed. RA-MLC scheme combines the multilevel coded and modulation technology with the binary linear block code at the transmitter. Bits division, coding, optional interleaving, and modulation are carried out by the preset rule, then transmitted through standard single mode fiber span equal to 100 km. The receiver improves the accuracy of decoding by means of soft information passing through different layers, which enhances the performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve bit error rate of 1E-5 when optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptation without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  20. DCT-based iris recognition.

    PubMed

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst-case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10⁻⁴ on the available data sets.

  1. Purification of Logic-Qubit Entanglement.

    PubMed

    Zhou, Lan; Sheng, Yu-Bo

    2016-07-05

    Recently, logic-qubit entanglement has shown potential applications in future quantum communication and quantum networks. However, the entanglement will suffer from noise and decoherence. In this paper, we investigate the first entanglement purification protocol for logic-qubit entanglement. We show that both the bit-flip error and the phase-flip error in logic-qubit entanglement can be well purified. Moreover, the bit-flip error in physical-qubit entanglement can be completely corrected. The phase-flip error in physical-qubit entanglement corresponds to the bit-flip error in logic-qubit entanglement, and can also be purified. This entanglement purification protocol may provide some potential applications in future quantum communication and quantum networks.

  2. Initial Steps Toward Next-Generation, Waveform-Based, Three-Dimensional Models and Metrics to Improve Nuclear Explosion Monitoring in the Middle East

    DTIC Science & Technology

    2008-09-30

    propagation effects by splitting apart the longer period surface waves from the shorter period, depth-sensitive Pnl waves. Problematic, or high-error, stations and paths were further analyzed to identify systematic errors with unknown sensor responses and...frequency Pnl components and slower, longer period surface waves. All cut windows are fit simultaneously, allowing equal weighting of phases that may be

  3. Artificial intelligence modeling of cadmium(II) biosorption using rice straw

    NASA Astrophysics Data System (ADS)

    Nasr, Mahmoud; Mahmoud, Alaa El Din; Fawzy, Manal; Radwan, Ahmed

    2017-05-01

    The biosorption efficiency of Cd2+ using rice straw was investigated at room temperature (25 ± 4 °C), contact time (2 h) and agitation rate (5 Hz). Experiments studied the effect of three factors, biosorbent dose BD (0.1 and 0.5 g/L), pH (2 and 7) and initial Cd2+ concentration X (10 and 100 mg/L), each at two levels, "low" and "high". Results showed that a variation in X from high to low produced a 31% increase in Cd2+ biosorption, while changing pH and BD from low to high produced 28.60% and 23.61% increases in the removal of Cd2+, respectively. From the 2³ factorial design, the effects of BD, pH and X had p-values of 0.2248, 0.1881 and 0.1742, respectively, indicating that the influences are in the order X > pH > BD. Similarly, an adaptive neuro-fuzzy inference system indicated that X is the most influential factor, with training and checking errors of 10.87 and 17.94, respectively. This was followed by pH, with a training error of 15.80 and a checking error of 17.39, and then BD, with a training error of 16.09 and a checking error of 16.29. A feed-forward back-propagation neural network with a 3-6-1 configuration achieved correlations (R) of 0.99 (training), 0.82 (validation) and 0.97 (testing). Thus, the proposed network is capable of predicting Cd2+ biosorption with high accuracy, while the most significant variable was X.
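
    A minimal reproduction of the 3-6-1 feed-forward architecture is sketched below with scikit-learn, trained on synthetic (BD, pH, X) data generated from an arbitrary surrogate relationship; none of the numbers correspond to the study's measurements or its reported R values.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 300
      BD = rng.uniform(0.1, 0.5, n)              # biosorbent dose, g/L
      pH = rng.uniform(2.0, 7.0, n)
      X0 = rng.uniform(10.0, 100.0, n)           # initial Cd2+ concentration, mg/L
      # surrogate removal efficiency (%), illustrative only
      removal = 40 + 25 * (pH - 2) / 5 + 20 * (BD - 0.1) / 0.4 - 30 * (X0 - 10) / 90
      removal = removal + rng.normal(0, 3, n)

      features = np.column_stack([BD, pH, X0])
      X_tr, X_te, y_tr, y_te = train_test_split(features, removal, random_state=0)
      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",   # 3 inputs - 6 hidden - 1 output
                       solver="lbfgs", max_iter=5000, random_state=0))
      model.fit(X_tr, y_tr)
      print("train R^2:", model.score(X_tr, y_tr), " test R^2:", model.score(X_te, y_te))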

  4. Usability of a CKD educational website targeted to patients and their family members.

    PubMed

    Diamantidis, Clarissa J; Zuckerman, Marni; Fink, Wanda; Hu, Peter; Yang, Shiming; Fink, Jeffrey C

    2012-10-01

    Web-based technology is critical to the future of healthcare. As part of the Safe Kidney Care cohort study evaluating patient safety in CKD, this study determined how effectively a representative sample of patients with CKD or their family members could interpret and use the Safe Kidney Care website (www.safekidneycare.org), an informational website on safety in CKD. Between November of 2011 and January of 2012, persons with CKD or their family members underwent formal usability testing administered by a single interviewer with a second recording observer. Each participant was independently provided a list of 21 tasks to complete, with each task rated as either easily completed/noncritical error or critical error (user cannot complete the task without significant interviewer intervention). Twelve participants completed formal usability testing. Median completion time for all tasks was 17.5 minutes (range=10-44 minutes). In total, 10 participants made at least one critical error. There were 55 critical errors in 252 tasks (22%), with the highest proportion of critical errors occurring when participants were asked to find information on treatments that may damage kidneys, find the website on the internet, increase font size, and scroll to the bottom of the webpage. Participants were generally satisfied with the content and usability of the website.

  5. 29 CFR 1620.25 - Equalization of rates.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false Equalization of rates. 1620.25 Section 1620.25 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE EQUAL PAY ACT § 1620.25 Equalization of rates. Under the express terms of the EPA, when a prohibited sex-based wage differential has...

  6. Misclassification rates for current smokers misclassified as nonsmokers.

    PubMed

    Wells, A J; English, P B; Posner, S F; Wagenknecht, L E; Perez-Stable, E J

    1998-10-01

    This paper provides misclassification rates for current cigarette smokers who report themselves as nonsmokers. Such rates are important in determining smoker misclassification bias in the estimation of relative risks in passive smoking studies. True smoking status, either occasional or regular, was determined for individual current smokers in 3 existing studies of nonsmokers by inspecting the cotinine levels of body fluids. The new data, combined with an approximately equal amount in the 1992 Environmental Protection Agency (EPA) report on passive smoking and lung cancer, yielded misclassification rates that not only had lower standard errors but also were stratified by sex and US minority/majority status. The misclassification rates for the important category of female smokers misclassified as never smokers were 0.8%, 6.0%, 2.8%, and 15.3% for majority regular, majority occasional, US minority regular, and US minority occasional smokers, respectively. Misclassification rates for males were mostly somewhat higher. The new information supports EPA's conclusion that smoker misclassification bias is small. Also, investigators are advised to pay attention to the minority/majority status of cohorts when correcting for smoker misclassification bias.

  7. The influence of the uplink noise on the performance of satellite data transmission systems

    NASA Astrophysics Data System (ADS)

    Dewal, Vrinda P.

    The problem of transmission of binary phase shift keying (BPSK) modulated digital data through a bandlimited nonlinear satellite channel in the presence of uplink and downlink Gaussian noise and intersymbol interference is examined. The satellite transponder is represented by a zero-memory bandpass nonlinearity with AM/AM conversion. The proposed optimum linear receiver structure consists of tapped-delay lines followed by a decision device. The linear receiver is designed to minimize the mean square error, which is a function of the intersymbol interference, the uplink noise and the downlink noise. The minimum mean square error (MMSE) equalizer is derived using the Wiener-Kolmogorov theory. In this receiver, the decision about the transmitted signal is made by taking into account the received sequence of the present sample and the interfering past and future samples, which represent the intersymbol interference (ISI). Illustrative examples of the receiver structures are considered for nonlinear channels with symmetrical and asymmetrical frequency responses of the transmitter filter. The transponder nonlinearity is simulated by a polynomial using only the first- and third-order terms. A computer simulation determines the tap gain coefficients of the MMSE equalizer, which adapt to the various uplink and downlink noise levels. The performance of the MMSE equalizer is evaluated in terms of an estimate of the average probability of error.
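
    The Wiener/MMSE tap solution w = R⁻¹p for a tapped-delay-line equalizer can be illustrated directly, as sketched below for BPSK passed through a short ISI channel with a memoryless cubic distortion and additive noise. The channel, nonlinearity, tap count and decision delay are all illustrative assumptions rather than the satellite channel model analyzed above.

      import numpy as np

      rng = np.random.default_rng(0)
      n_sym, n_taps, delay = 20_000, 11, 5

      symbols = rng.choice([-1.0, 1.0], size=n_sym)                 # BPSK data
      h = np.array([0.9, 0.4, -0.2])                                # ISI channel (illustrative)
      lin = np.convolve(symbols, h)[:n_sym]
      rx = lin - 0.1 * lin**3 + 0.1 * rng.normal(size=n_sym)        # memoryless cubic nonlinearity + noise

      # build the tapped-delay-line data matrix: row k holds rx[k], rx[k-1], ..., rx[k-n_taps+1]
      Xmat = np.column_stack([np.roll(rx, i) for i in range(n_taps)])[n_taps:]
      d = symbols[n_taps - delay : n_sym - delay]                   # desired symbol, allowing a decision delay

      R = Xmat.T @ Xmat / Xmat.shape[0]                             # input autocorrelation estimate
      p = Xmat.T @ d / Xmat.shape[0]                                # cross-correlation with the desired symbol
      w = np.linalg.solve(R, p)                                     # Wiener/MMSE tap weights

      y = Xmat @ w
      print("MMSE equalizer BER:", np.mean(np.sign(y) != d))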

  8. On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1984-01-01

    Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I errors, but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)

  9. Driving out errors through tight integration between software and automation.

    PubMed

    Reifsteck, Mark; Swanson, Thomas; Dallas, Mary

    2006-01-01

    A clear case has been made for using clinical IT to improve medication safety, particularly bar-code point-of-care medication administration and computerized practitioner order entry (CPOE) with clinical decision support. The equally important role of automation has been overlooked. When the two are tightly integrated, with pharmacy information serving as a hub, the distinctions between software and automation become blurred. A true end-to-end medication management system drives out errors from the dockside to the bedside. Presbyterian Healthcare Services in Albuquerque has been building such a system since 1999, beginning by automating pharmacy operations to support bar-coded medication administration. Encouraged by those results, it then began layering on software to further support clinician workflow and improve communication, culminating with the deployment of CPOE and clinical decision support. This combination, plus a hard-wired culture of safety, has resulted in a dramatically lower mortality and harm rate that could not have been achieved with a partial solution.

  10. Adaptive filter design using recurrent cerebellar model articulation controller.

    PubMed

    Lin, Chih-Min; Chen, Li-Yang; Yeung, Daniel S

    2010-07-01

    A novel adaptive filter is proposed using a recurrent cerebellar-model-articulation-controller (CMAC). The proposed locally recurrent globally feedforward recurrent CMAC (RCMAC) has favorable properties of small size, good generalization, rapid learning, and dynamic response, thus it is more suitable for high-speed signal processing. To provide fast training, an efficient parameter learning algorithm based on the normalized gradient descent method is presented, in which the learning rates are on-line adapted. Then the Lyapunov function is utilized to derive the conditions of the adaptive learning rates, so the stability of the filtering error can be guaranteed. To demonstrate the performance of the proposed adaptive RCMAC filter, it is applied to a nonlinear channel equalization system and an adaptive noise cancelation system. The advantages of the proposed filter over other adaptive filters are verified through simulations.
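
    The RCMAC filter itself is beyond a short sketch, but the underlying idea of on-line, normalized gradient-descent adaptation applied to channel equalization can be illustrated with a much simpler normalized-LMS filter. The channel model, nonlinearity, tap count, and step size below are assumptions for illustration only and are not the authors' RCMAC design.

      import numpy as np

      rng = np.random.default_rng(0)

      # Assumed toy channel: linear ISI followed by a mild memoryless nonlinearity and noise.
      def channel(s):
          x = np.convolve(s, [0.35, 0.87, 0.35], mode="same")
          return x + 0.1 * x**3 + 0.05 * rng.standard_normal(len(x))

      n, n_taps, mu, eps = 5000, 11, 0.5, 1e-6
      s = rng.choice([-1.0, 1.0], size=n)          # transmitted binary symbols
      x = channel(s)

      w = np.zeros(n_taps)
      delay = n_taps // 2
      errors = n_dec = 0
      for k in range(n_taps, n):
          u = x[k - n_taps:k][::-1]                # tap vector, most recent sample first
          y = w @ u                                # equalizer output
          d = s[k - delay]                         # training (desired) symbol
          e = d - y
          w += mu * e * u / (eps + u @ u)          # normalized gradient-descent update
          if k > n // 2:                           # score decisions after adaptation settles
              n_dec += 1
              errors += int((1.0 if y >= 0 else -1.0) != d)

      print("symbol error rate after adaptation:", errors / n_dec)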

  11. A Frequency Agile, Self-Adaptive Serial Link on Xilinx FPGAs

    NASA Astrophysics Data System (ADS)

    Aloisio, A.; Giordano, R.; Izzo, V.; Perrella, S.

    2015-06-01

    In this paper, we focused on the GTX transceiver modules of Xilinx Kintex 7 field-programmable gate arrays (FPGAs), which provide high bandwidth, low jitter on the recovered clock, and an equalization system on the transmitter and the receiver. We present a frequency agile, auto-adaptive serial link. The link is able to take care of the reconfiguration of the GTX parameters in order to fully benefit from the available link bandwidth, by setting the highest line rate. It is designed around an FPGA-embedded microprocessor, which drives the programmable ports of the GTX in order to control the quality of the received data and to easily calculate the bit-error rate in each sampling point of the eye diagram. We present the self-adaptive link project, the description of the test system, and the main results.

  12. Factor validity and reliability of the aberrant behavior checklist-community (ABC-C) in an Indian population with intellectual disability.

    PubMed

    Lehotkay, R; Saraswathi Devi, T; Raju, M V R; Bada, P K; Nuti, S; Kempf, N; Carminati, G Galli

    2015-03-01

    In this study, realised in collaboration with the department of psychology and parapsychology of Andhra University, validation of the Aberrant Behavior Checklist-Community (ABC-C) in Telugu, the official language of Andhra Pradesh, one of India's 28 states, was carried out. To assess the factor validity and reliability of this Telugu version, 120 participants with moderate to profound intellectual disability (94 men and 26 women, mean age 25.2, SD 7.1) were rated by the staff of the Lebenshilfe Institution for Mentally Handicapped in Visakhapatnam, Andhra Pradesh, India. Rating data were analysed with a confirmatory factor analysis. The internal consistency was estimated by Cronbach's alpha. To confirm the test-retest reliability, 50 participants were rated twice with an interval of 4 weeks, and 50 were rated by pairs of raters to assess inter-rater reliability. Confirmatory factor analysis revealed that the root mean square error of approximation (RMSEA) was equal to 0.06, the comparative fit index (CFI) was equal to 0.77, and the Tucker Lewis index (TLI) was equal to 0.77, which indicated that the model with five correlated factors had a good fit. Coefficient alpha ranged from 0.85 to 0.92 across the five subscales. Spearman's rank correlation coefficients for inter-rater reliability tests ranged from 0.65 to 0.75, and the correlations for test-retest reliability ranged from 0.58 to 0.76. All reliability coefficients were statistically significant (P < 0.01). The Telugu version of the ABC-C thus evidenced factor validity and reliability comparable to the original English version and appears to be useful for assessing behaviour disorders in Indian people with intellectual disabilities. © 2014 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.

  13. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
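
    A minimal sketch of the core ELM recipe — fixed random hidden-layer weights and a regularized least-squares solve for the output weights — on synthetic data. The receptive-field sampling, distortions, and backpropagation refinements described in the paper are omitted, and the data, layer size, and ridge parameter are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def train_elm(X, y_onehot, n_hidden=500, ridge=1e-3):
          """Random hidden weights plus a regularized least-squares output layer."""
          W = rng.standard_normal((X.shape[1], n_hidden)) / np.sqrt(X.shape[1])
          b = rng.standard_normal(n_hidden)
          H = np.tanh(X @ W + b)                    # hidden-layer activations
          # Solve (H^T H + ridge I) beta = H^T Y for the output weights.
          beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y_onehot)
          return W, b, beta

      def predict_elm(X, W, b, beta):
          return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

      # Toy stand-in for an image-classification set (MNIST-sized feature vectors).
      X = rng.standard_normal((2000, 784))
      labels = (X[:, :10].sum(axis=1) > 0).astype(int)    # synthetic two-class labels
      Y = np.eye(2)[labels]

      W, b, beta = train_elm(X, Y)
      print("training accuracy:", (predict_elm(X, W, b, beta) == labels).mean())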

  14. Spatial and temporal distributions of surface mass balance between Concordia and Vostok stations, Antarctica, from combined radar and ice core data: first results and detailed error analysis

    NASA Astrophysics Data System (ADS)

    Le Meur, Emmanuel; Magand, Olivier; Arnaud, Laurent; Fily, Michel; Frezzotti, Massimo; Cavitte, Marie; Mulvaney, Robert; Urbini, Stefano

    2018-05-01

    Results from ground-penetrating radar (GPR) measurements and shallow ice cores carried out during a scientific traverse between Dome Concordia (DC) and Vostok stations are presented in order to infer both spatial and temporal characteristics of snow accumulation over the East Antarctic Plateau. Spatially continuous accumulation rates along the traverse are computed from the identification of three equally spaced radar reflections spanning about the last 600 years. Accurate dating of these internal reflection horizons (IRHs) is obtained from a depth-age relationship derived from volcanic horizons and bomb testing fallouts on a DC ice core and shows a very good consistency when tested against extra ice cores drilled along the radar profile. Accumulation rates are then inferred by accounting for density profiles down to each IRH. For the latter purpose, a careful error analysis showed that using a single and more accurate density profile along a DC core provided more reliable results than trying to include the potential spatial variability in density from extra (but less accurate) ice cores distributed along the profile. The most striking feature is an accumulation pattern that remains constant through time with persistent gradients such as a marked decrease from 26 mm w.e. yr-1 at DC to 20 mm w.e. yr-1 at the south-west end of the profile over the last 234 years on average (with a similar decrease from 25 to 19 mm w.e. yr-1 over the last 592 years). As for the time dependency, despite an overall consistency with similar measurements carried out along the main East Antarctic divides, interpreting possible trends remains difficult. Indeed, error bars in our measurements are still too large to unambiguously infer an apparent time increase in accumulation rate. For the proposed absolute values, maximum margins of error are in the range 4 mm w.e. yr-1 (last 234 years) to 2 mm w.e. yr-1 (last 592 years), a decrease with depth mainly resulting from the time-averaging when computing accumulation rates.
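
    The accumulation-rate step described above amounts to integrating a firn density profile down to a dated reflector and dividing the water-equivalent mass by the horizon's age. The short sketch below shows only that arithmetic; the density curve, horizon depth, and age are made-up values, not data from the traverse.

      import numpy as np

      # Assumed density profile (kg m^-3) on a regular 0.1 m depth grid down to 30 m.
      dz = 0.1
      depth = np.arange(0.0, 30.0, dz)
      density = 350.0 + 15.0 * np.sqrt(depth)   # crude firn densification curve (assumption)

      irh_depth = 15.0    # depth of a dated internal reflection horizon (m), assumed
      irh_age = 234.0     # age of that horizon (years), assumed

      # Water-equivalent mass above the horizon (kg m^-2, i.e. mm w.e.), then the mean rate.
      mass_we_mm = np.sum(density[depth <= irh_depth]) * dz
      rate = mass_we_mm / irh_age
      print(f"mean accumulation rate: {rate:.1f} mm w.e. yr^-1")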

  15. Experiential Teaching Increases Medication Calculation Accuracy Among Baccalaureate Nursing Students.

    PubMed

    Hurley, Teresa V

    Safe medication administration is an international goal. Calculation errors cause patient harm despite education. The research purpose was to evaluate the effectiveness of an experiential teaching strategy to reduce errors in a sample of 78 baccalaureate nursing students at a Northeastern college. A pretest-posttest design with random assignment into equal-sized groups was used. The experiential strategy was more effective than the traditional method (t = -0.312, df = 37, p = .004, 95% CI) with a reduction in calculation errors. Evaluations of error type and teaching strategies are indicated to facilitate course and program changes.

  16. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
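
    One common way to fold a sequencing error rate into a maximum likelihood computation (not necessarily the exact 2004 correction evaluated by the authors) is to replace the usual 0/1 tip vectors of the pruning algorithm with error-aware observation probabilities, as in this short Python sketch with an assumed per-base error rate.

      import numpy as np

      BASES = "ACGT"

      def tip_partial_likelihoods(observed_base, error_rate):
          """Error-aware tip vector: P(observed | true base) for each possible true base.

          With no error correction this is a 0/1 indicator vector; with an assumed
          per-base sequencing error rate e, a mismatch is explained with probability e/3.
          """
          e = error_rate
          return np.array([1.0 - e if b == observed_base else e / 3.0 for b in BASES])

      print(tip_partial_likelihoods("A", 0.0))    # [1. 0. 0. 0.]  (classical tips)
      print(tip_partial_likelihoods("A", 0.01))   # error-corrected tip probabilities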

  17. Two-sample binary phase 2 trials with low type I error and low sample size.

    PubMed

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A

    2017-04-30

    We address design of two-stage clinical trials comparing experimental and control patients. Our end point is success or failure, however measured, with null hypothesis that the chance of success in both arms is p0 and alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ⩾ m, with two-sample rules of the form E - C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd.
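
    For illustration, the type I error of a combined rule of the form 'reject when E ⩾ m and E - C > r' can be computed exactly under the null by enumerating binomial outcomes. The sketch below does this for a single look with assumed sample sizes, null rate, and thresholds; it does not reproduce the authors' two-stage designs.

      from math import comb

      def binom_pmf(k, n, p):
          return comb(n, k) * p**k * (1 - p)**(n - k)

      def type1_error(n_e, n_c, p0, m, r):
          """Exact P(reject H0) under the null p_E = p_C = p0 for the rule
          'reject when E >= m and E - C > r' (single look, illustration only)."""
          total = 0.0
          for e in range(m, n_e + 1):
              pe = binom_pmf(e, n_e, p0)
              for c in range(n_c + 1):
                  if e - c > r:
                      total += pe * binom_pmf(c, n_c, p0)
          return total

      # Assumed illustrative numbers: 2:1 randomization, null success rate of 20%.
      print(round(type1_error(n_e=40, n_c=20, p0=0.20, m=12, r=5), 4))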

  18. Simulation of the Universal-Time Diurnal Variation of the Global Electric Circuit Charging Rate

    NASA Technical Reports Server (NTRS)

    Mackerras, David; Darveniza, Mat; Orville, Richard E.; Williams, Earle R.; Goodman, Steven J.

    1999-01-01

    A global lightning model that includes diurnal and annual lightning variation, and total flash density versus latitude for each major land and ocean, has been used as the basis for simulating the global electric circuit charging rate. A particular objective has been to reconcile the difference in amplitude ratios [AR=(max-min)/mean] between global lightning diurnal variation (AR approximately equals 0.8) and the diurnal variation of typical atmospheric potential gradient curves (AR approximately equals 0.35). A constraint on the simulation is that the annual mean charging current should be about 1000 A. The global lightning model shows that negative ground flashes can contribute, at most, about 10-15% of the required current. For the purpose of the charging rate simulation, it was assumed that each ground flash contributes 5 C to the charging process. It was necessary to assume that all electrified clouds contribute to charging by means other than lightning, that the total flash rate can serve as an indirect indicator of the rate of charge transfer, and that oceanic electrified clouds contribute to charging even though they are relatively inefficient in producing lightning. It was also found necessary to add a diurnally invariant charging current component. By trial and error it was found that charging rate diurnal variation curves could be produced with amplitude ratios and general shapes similar to those of the potential gradient diurnal variation curves measured over ocean and arctic regions during voyages of the Carnegie Institute research vessels. The comparisons were made for the northern winter (Nov.-Feb.), the equinox (Mar., Apr., Sept., Oct.), the northern summer (May-Aug.), and the whole year.

  19. Reliability of drivers in urban intersections.

    PubMed

    Gstalter, Herbert; Fastenmeier, Wolfgang

    2010-01-01

    The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE-task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers' or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary between both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly equals the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.

  20. Purification of Logic-Qubit Entanglement

    PubMed Central

    Zhou, Lan; Sheng, Yu-Bo

    2016-01-01

    Recently, logic-qubit entanglement has shown its potential application in future quantum communication and quantum networks. However, this entanglement suffers from noise and decoherence. In this paper, we investigate the first entanglement purification protocol for logic-qubit entanglement. We show that both the bit-flip error and phase-flip error in logic-qubit entanglement can be well purified. Moreover, the bit-flip error in physical-qubit entanglement can be completely corrected. The phase-flip error in physical-qubit entanglement equals the bit-flip error in logic-qubit entanglement, which can also be purified. This entanglement purification protocol may provide some potential applications in future quantum communication and quantum networks. PMID:27377165

  1. Analytical investigation of adaptive control of radiated inlet noise from turbofan engines

    NASA Technical Reports Server (NTRS)

    Risi, John D.; Burdisso, Ricardo A.

    1994-01-01

    An analytical model has been developed to predict the resulting far field radiation from a turbofan engine inlet. A feedforward control algorithm was simulated to predict the controlled far field radiation from the destructive combination of fan noise and secondary control sources. Numerical results were developed for two system configurations, with the resulting controlled far field radiation patterns showing varying degrees of attenuation and spillover. With one axial station of twelve control sources and error sensors with equal relative angular positions, nearly global attenuation is achieved. Shifting the angular position of one error sensor resulted in an increase of spillover to the extreme sidelines. The complex control inputs for each configuration were investigated to identify the structure of the wave pattern created by the control sources, giving an indication of performance of the system configuration. It is deduced that the locations of the error sensors and the control source configuration are equally critical to the operation of the active noise control system.

  2. Ion Figuring of Replicated X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Cantey, Thomas M.; Gregory, Don A.

    1997-01-01

    This investigation included experiments to demonstrate ion beam figuring effects on electroless nickel with the expressed desire to figure X-ray optic mandrels. It was important to establish that ion beam figuring did not induce any adverse effects to the nickel surface. The ion beam has consistently been shown to be an excellent indicator of the quality of the subsurface. Polishing is not the only cause of failure in the ion beam final figuring process; the material composition is equally important. Only by careful consideration of both these factors can the ion beam final figuring process achieve its greatest potential. The secondary goal was to construct a model for representing the ion beam material removal rate. Representing the ion beam removal rate is only an approximation and has a number of limiting factors. The resolution of the metrology apparatus limits the modeling of the beam function as well. As the surface error corrections demand more precision in the final figuring, the model representing the beam function must be equally precise. The precision to which the beam function can be represented is not only determined by the model but also by the measurements producing that model. The method developed for determining the beam function has broad application to any material destined to be ion beam figured.

  3. High-frequency video capture and a computer program with frame-by-frame angle determination functionality as tools that support judging in artistic gymnastics.

    PubMed

    Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej

    2015-01-01

    The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of knee and hip joints using the computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points with an arithmetic mean of 0.16 points for the flexion of the knee joints. In the high-speed video analysis method, the total amounted to 8.6 error points and the mean value amounted to 0.24 error points. For the excessive flexion of hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation. The sum obtained using frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through the frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Using the real-time observation method as well as the high-speed video analysis performed without determining the exact angle for assessing movement technique were found to be insufficient tools for improving the quality of judging.

  4. Spectroscopic ellipsometer based on direct measurement of polarization ellipticity.

    PubMed

    Watkins, Lionel R

    2011-06-20

    A polarizer-sample-Wollaston prism analyzer ellipsometer is described in which the ellipsometric angles ψ and Δ are determined by direct measurement of the elliptically polarized light reflected from the sample. With the Wollaston prism initially set to transmit p- and s-polarized light, the azimuthal angle P of the polarizer is adjusted until the two beams have equal intensity. This condition yields ψ=±P and ensures that the reflected elliptically polarized light has an azimuthal angle of ±45° and maximum ellipticity. Rotating the Wollaston prism through 45° and adjusting the analyzer azimuth until the two beams again have equal intensity yields the ellipticity that allows Δ to be determined via a simple linear relationship. The errors produced by nonideal components are analyzed. We show that the polarizer dominates these errors but that for most practical purposes, the error in ψ is negligible and the error in Δ may be corrected exactly. A native oxide layer on a silicon substrate was measured at a single wavelength and multiple angles of incidence and spectroscopically at a single angle of incidence. The best fit film thicknesses obtained were in excellent agreement with those determined using a traditional null ellipsometer.

  5. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
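
    A hedged sketch of the estimation idea: Gaussian process regression applied to noisy, windowed error-rate estimates in order to smooth and extrapolate a time-dependent rate. The drift model, noise level, and kernel below are assumptions, not the paper's protocol.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(1)

      # Assumed scenario: a slowly drifting physical error rate, observed through noisy
      # per-window estimates extracted from error correction (syndrome) data.
      t = np.linspace(0.0, 10.0, 60)[:, None]                      # time (arbitrary units)
      true_rate = 0.01 + 0.004 * np.sin(0.6 * t.ravel())           # hidden time-dependent rate
      observed = true_rate + 0.002 * rng.standard_normal(len(t))   # noisy estimates

      kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-5)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
      gp.fit(t, observed)

      # Predict (and slightly extrapolate) the error rate with its uncertainty.
      t_new = np.linspace(0.0, 12.0, 5)[:, None]
      mean, std = gp.predict(t_new, return_std=True)
      for ti, m, s in zip(t_new.ravel(), mean, std):
          print(f"t = {ti:5.2f}: estimated error rate {m:.4f} +/- {s:.4f}")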

  6. Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.

    PubMed

    Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C

    2013-12-30

    We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between SD-FEC and NLEs over various nonlinear transmissions was demonstrated by optimization of the NLE parameters.

  7. Two-dimensional signal processing using a morphological filter for holographic memory

    NASA Astrophysics Data System (ADS)

    Kondo, Yo; Shigaki, Yusuke; Yamamoto, Manabu

    2012-03-01

    Today, along with the wider use of high-speed information networks and multimedia, it is increasingly necessary to have higher-density and higher-transfer-rate storage devices. Therefore, research and development into holographic memories with three-dimensional storage areas is being carried out to realize next-generation large-capacity memories. However, in holographic memories, interference between bits, which affects the detection characteristics, occurs as a result of aberrations such as the deviation of a wavefront in an optical system. In this study, we pay particular attention to the nonlinear factors that cause bit errors; filters based on a Volterra equalizer and on morphological operations are investigated as a means of signal processing.

  8. SNR characteristics of 850-nm OEIC receiver with a silicon avalanche photodetector.

    PubMed

    Youn, Jin-Sung; Lee, Myung-Jae; Park, Kang-Yeob; Rücker, Holger; Choi, Woo-Young

    2014-01-13

    We investigate signal-to-noise ratio (SNR) characteristics of an 850-nm optoelectronic integrated circuit (OEIC) receiver fabricated with standard 0.25-µm SiGe bipolar complementary metal-oxide-semiconductor (BiCMOS) technology. The OEIC receiver is composed of a Si avalanche photodetector (APD) and BiCMOS analog circuits including a transimpedance amplifier with DC-balanced buffer, a tunable equalizer, a limiting amplifier, and an output buffer with 50-Ω loads. We measure the dependence of the APD SNR characteristics on the reverse bias voltage, as well as the BiCMOS circuit noise characteristics. From these, we determine the SNR characteristics of the entire OEIC receiver, and finally, the results are verified with bit-error rate measurements.

  9. Study of SPM tolerances of electronically compensated DML based systems.

    PubMed

    Papagiannakis, I; Klonidis, D; Birbas, Alexios N; Kikidis, J; Tomkos, I

    2009-05-25

    This paper experimentally investigates the effectiveness of electronic dispersion compensation (EDC) for signals limited by self phase modulation (SPM) and various dispersion levels. The sources considered are low-cost conventional directly modulated lasers (DMLs), fabricated for operation at 2.5 Gb/s but modulated at 10 Gb/s. Performance improvement is achieved by means of electronic feed-forward and decision-feedback equalization (FFE/DFE) at the receiver end. Experimental studies consider both transient and adiabatic chirp dominated DML sources. The improvement is evaluated in terms of required optical signal-to-noise ratio (ROSNR) for bit-error-rate (BER) values of 10⁻³ versus launch power over uncompensated links of standard single mode fiber (SSMF).
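
    As a rough illustration of FFE/DFE (not the receiver electronics used in the experiment), the following sketch adapts a feed-forward filter plus a decision-feedback section with LMS over an assumed toy dispersive channel; the channel taps, filter lengths, and step size are all assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      # Assumed toy link: binary symbols through a dispersive channel with additive noise.
      s = rng.choice([-1.0, 1.0], size=20000)
      x = np.convolve(s, [0.25, 0.9, 0.45, 0.1], mode="full")[: len(s)]
      x += 0.05 * rng.standard_normal(len(s))

      n_ff, n_fb, mu, delay = 9, 4, 0.01, 2
      ff = np.zeros(n_ff)          # feed-forward taps on received samples
      fb = np.zeros(n_fb)          # feedback taps on previously decided symbols
      past = np.zeros(n_fb)
      errors = n_dec = 0

      for k in range(n_ff, len(s)):
          u = x[k - n_ff:k][::-1]                  # most recent received samples first
          y = ff @ u - fb @ past                   # FFE output minus post-cursor feedback
          d_ref = s[k - delay]                     # training symbol (decision-directed in practice)
          e = d_ref - y
          ff += mu * e * u                         # LMS updates for both sections
          fb -= mu * e * past
          past = np.roll(past, 1)
          past[0] = d_ref                          # feed the decided symbol back
          if k > len(s) // 2:                      # score decisions after adaptation settles
              n_dec += 1
              errors += int((1.0 if y >= 0 else -1.0) != d_ref)

      print("bit error rate after adaptation:", errors / n_dec)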

  10. A hybrid CATV/16-QAM-OFDM visible laser light communication system

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Yu; Li, Chung-Yi; Lu, Hai-Han; Chen, Chia-Yi; Jhang, Tai-Wei; Ruan, Sheng-Siang; Wu, Kuan-Hung

    2014-10-01

    A visible laser light communication (VLLC) system employing a vertical cavity surface emitting laser and spatial light modulator with hybrid CATV/16-QAM-OFDM modulating signals over a 5 m free-space link is proposed and demonstrated. With the assistance of a push-pull scheme, low-noise amplifier, and equalizer, good performances of composite second-order and composite triple beat are obtained, accompanied by an acceptable carrier-to-noise ratio performance for a CATV signal, and a low bit error rate value and clear constellation map are achieved for a 16-QAM-OFDM signal. Such a hybrid CATV/16-QAM-OFDM VLLC system would be attractive for providing services including CATV, Internet and telecommunication services.

  11. A Study of the Groundwater Level Spatial Variability in the Messara Valley of Crete

    NASA Astrophysics Data System (ADS)

    Varouchakis, E. A.; Hristopulos, D. T.; Karatzas, G. P.

    2009-04-01

    The island of Crete (Greece) has a dry sub-humid climate and marginal groundwater resources, which are extensively used for agricultural activities and human consumption. The Messara valley is located in the south of the Heraklion prefecture, it covers an area of 398 km2, and it is the largest and most productive valley of the island. Over-exploitation during the past thirty (30) years has led to a dramatic decrease of thirty five (35) meters in the groundwater level. Possible future climatic changes in the Mediterranean region, potential desertification, population increase, and extensive agricultural activity generate concern over the sustainability of the water resources of the area. The accurate estimation of the water table depth is important for an integrated groundwater resource management plan. This study focuses on the Mires basin of the Messara valley for reasons of hydro-geological data availability and geological homogeneity. The research goal is to model and map the spatial variability of the basin's groundwater level accurately. The data used in this study consist of seventy (70) piezometric head measurements for the hydrological year 2001-2002. These are unevenly distributed and mostly concentrated along a temporary river that crosses the basin. The range of piezometric heads varies from an extreme low value of 9.4 meters above sea level (masl) to 62 masl, for the wet period of the year (October to April). An initial goal of the study is to develop spatial models for the accurate generation of static maps of groundwater level. At a second stage, these maps should extend the models to dynamic (space-time) situations for the prediction of future water levels. Preliminary data analysis shows that the piezometric head variations are not normally distributed. Several methods including Box-Cox transformation and a modified version of it, transgaussian Kriging, and Gaussian anamorphosis have been used to obtain a spatial model for the piezometric head. A trend model was constructed that accounted for the distance of the wells from the river bed. The spatial dependence of the fluctuations was studied by fitting isotropic and anisotropic empirical variograms with classical models, the Matérn model and the Spartan variogram family (Hristopulos, 2003; Hristopoulos and Elogne, 2007). The most accurate results, mean absolute prediction error of 4.57 masl, were obtained using the modified Box-Cox transform of the original data. The exponential and the isotropic Spartan variograms provided the best fits to the experimental variogram. Using Ordinary Kriging with either variogram function gave a mean absolute estimation error of 4.57 masl based on leave-one-out cross validation. The bias error of the predictions was calculated equal to -0.38 masl and the correlation coefficient of the predictions with respect of the original data equal to 0.8. The estimates located on the borders of the study domain presented a higher prediction error that varies from 8 to 14 masl due to the limited number of neighbor data. The maximum estimation error, observed at the extreme low value calculation, was 23 masl. The method of locally weighted regression (LWR), (NIST/SEMATECH 2009) was also investigated as an alternative approach for spatial modeling. The trend calculated from a second order LWR method showed a remarkable fit to the original data marked by a mean absolute estimation error of 4.4 masl. 
The bias prediction error was calculated to be -0.16 masl and the correlation coefficient between predicted and original data equal to 0.88. Higher estimation errors were found at the same locations and vary within the same range. The extreme low value calculation error has improved to 21 masl. Plans for future research include the incorporation of spatial anisotropy in the kriging algorithm, the investigation of kernel functions other than the tricube in LWR, as well as the use of locally adapted bandwidth values. Furthermore, pumping rates for fifty eight (58) of the seventy (70) wells are available and display a correlation coefficient of -0.6 with the respective ground water levels. A Digital Elevation Model (DEM) of the area will provide additional information about the unsampled locations of the basin. The pumping rates and the DEM will be used as secondary information in a co-kriging approach, leading to more accurate estimation of the basin's water table. References: NIST/SEMATECH e-Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/, 12/01/09. D.T. Hristopulos, "Spartan Gibbs random field models for geostatistical applications," SIAM J. Scient. Comput., vol. 24, no. 6, pp. 2125-2162, 2003. D.T. Hristopulos and S. Elogne, "Analytic properties and covariance functions for a new class of generalized Gibbs random fields," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4667-4679, 2007.
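
    For readers unfamiliar with the interpolation step, a minimal ordinary kriging sketch with an exponential variogram is given below; the well coordinates, heads, and variogram parameters are invented for illustration and are unrelated to the Mires basin data.

      import numpy as np

      def exp_variogram(h, nugget=0.0, sill=1.0, rng_par=2000.0):
          """Exponential variogram model gamma(h) (assumed parameters)."""
          return nugget + sill * (1.0 - np.exp(-h / rng_par))

      def ordinary_kriging(coords, values, target, vario):
          """Ordinary kriging prediction and kriging variance at one target location."""
          n = len(values)
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
          # Kriging system with a Lagrange multiplier forcing the weights to sum to 1.
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = vario(d)
          A[n, n] = 0.0
          b = np.ones(n + 1)
          b[:n] = vario(np.linalg.norm(coords - target, axis=1))
          w = np.linalg.solve(A, b)
          return w[:n] @ values, b @ w

      # Made-up well coordinates (m) and piezometric heads (masl) for illustration only.
      coords = np.array([[0, 0], [1200, 300], [400, 900], [2500, 1800], [1800, 2600]], float)
      heads = np.array([14.2, 22.5, 18.9, 47.0, 55.3])
      pred, var = ordinary_kriging(coords, heads, np.array([1000.0, 1200.0]), exp_variogram)
      print(f"kriged head: {pred:.1f} masl, kriging variance: {var:.2f}")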

  12. 29 CFR 1620.12 - Wage “rate.”

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE EQUAL PAY ACT § 1620.12 Wage... gender than the other for the performance of equal work, the higher rate serves as a wage standard. When a violation of the Act is established, the higher rate paid for equal work is the standard to which...

  13. Is 50 Hz high enough ECG sampling frequency for accurate HRV analysis?

    PubMed

    Mahdiani, Shadi; Jeyhani, Vala; Peltokangas, Mikko; Vehkaoja, Antti

    2015-01-01

    With the worldwide growth of mobile wireless technologies, healthcare services can be provided anytime and anywhere. Usage of wearable wireless physiological monitoring systems has increased extensively during the last decade. These mobile devices can continuously measure, e.g., heart activity and wirelessly transfer the data to the patient's mobile phone. One of the significant restrictions for these devices is energy usage, which leads to a requirement for low sampling rates. This article investigates the lowest adequate sampling frequency of the ECG signal for achieving sufficiently accurate time-domain heart rate variability (HRV) parameters. For this purpose, ECG signals originally measured at a high 5 kHz sampling rate were down-sampled to simulate measurement at lower sampling rates. Down-sampling loses information and decreases temporal accuracy, which was then partially restored by interpolating the signals back to their original sampling rates. The HRV parameters obtained from the ECG signals with lower sampling rates were compared. The results show that even when the sampling rate of the ECG signal is as low as 50 Hz, the HRV parameters remain almost accurate, with a reasonable error.
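
    A small sketch of why the sampling rate matters for time-domain HRV: quantizing beat times to a coarse sampling grid perturbs successive RR-interval differences. The synthetic RR series below is an assumption, and the sketch is not the authors' down-sampling and interpolation pipeline.

      import numpy as np

      rng = np.random.default_rng(3)

      def rmssd(rr_ms):
          """Root mean square of successive RR-interval differences (a time-domain HRV index)."""
          return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

      # Assumed "true" beat times: roughly 60 bpm with beat-to-beat variability.
      rr_true = 1000.0 + 40.0 * rng.standard_normal(300)     # RR intervals in ms
      beat_times = np.cumsum(rr_true) / 1000.0                # beat times in seconds

      print(f"reference RMSSD: {rmssd(rr_true):.1f} ms")
      for fs in (1000, 250, 50):
          # Quantizing beat times to the sampling grid mimics R-peak detection at rate fs.
          quantized = np.round(beat_times * fs) / fs
          rr = np.diff(np.concatenate(([0.0], quantized))) * 1000.0
          print(f"fs = {fs:4d} Hz -> RMSSD {rmssd(rr):.1f} ms")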

  14. Noninferiority trial designs for odds ratios and risk differences.

    PubMed

    Hilton, Joan F

    2010-04-30

    This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π(N), is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π(C), π(E)}. We show that setting π(N) equal to the value of π(C) that holds under H(0) underestimates the overall sample size requirement. Given {π(C), ψ} or {π(C), δ} and the type I and II error rates, our algorithm finds clinically meaningful design values of π(N), and the corresponding minimum asymptotic sample size, N=n(E)+n(C), and optimal allocation ratio, γ=n(E)/n(C). We find that optimal allocations are increasingly imbalanced as ψ increases, with γ(ψ)<1 and γ(δ)≈1/γ(ψ), and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ>2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa. 2010 John Wiley & Sons, Ltd.

  15. Genotyping and inflated type I error rate in genome-wide association case/control studies

    PubMed Central

    Sampson, Joshua N; Zhao, Hongyu

    2009-01-01

    Background One common goal of a case/control genome wide association study (GWAS) is to find SNPs associated with a disease. Traditionally, the first step in such studies is to assign a genotype to each SNP in each subject, based on a statistic summarizing fluorescence measurements. When the distributions of the summary statistics are not well separated by genotype, the act of genotype assignment can lead to more potential problems than acknowledged by the literature. Results Specifically, we show that the proportions of each called genotype need not equal the true proportions in the population, even as the number of subjects grows infinitely large. The called genotypes for two subjects need not be independent, even when their true genotypes are independent. Consequently, p-values from tests of association can be anti-conservative, even when the distributions of the summary statistic for the cases and controls are identical. To address these problems, we propose two new tests designed to reduce the inflation in the type I error rate caused by these problems. The first algorithm, logiCALL, measures call quality by fully exploring the likelihood profile of intensity measurements, and the second algorithm avoids genotyping by using a likelihood ratio statistic. Conclusion Genotyping can introduce avoidable false positives in GWAS. PMID:19236714

  16. Usability of a CKD Educational Website Targeted to Patients and Their Family Members

    PubMed Central

    Zuckerman, Marni; Fink, Wanda; Hu, Peter; Yang, Shiming; Fink, Jeffrey C.

    2012-01-01

    Summary Background and objectives Web-based technology is critical to the future of healthcare. As part of the Safe Kidney Care cohort study evaluating patient safety in CKD, this study determined how effectively a representative sample of patients with CKD or family members could interpret and use the Safe Kidney Care website (www.safekidneycare.org), an informational website on safety in CKD. Design, setting, participants, & measurements Between November of 2011 and January of 2012, persons with CKD or their family members underwent formal usability testing administered by a single interviewer with a second recording observer. Each participant was independently provided a list of 21 tasks to complete, with each task rated as either easily completed/noncritical error or critical error (user cannot complete the task without significant interviewer intervention). Results Twelve participants completed formal usability testing. Median completion time for all tasks was 17.5 minutes (range=10–44 minutes). In total, 10 participants had greater than or equal to one critical error. There were 55 critical errors in 252 tasks (22%), with the highest proportion of critical errors occurring when participants were asked to find information on treatments that may damage kidneys, find the website on the internet, increase font size, and scroll to the bottom of the webpage. Participants were generally satisfied with the content and usability of the website. Conclusions Web-based educational materials for patients with CKD should target a wide range of computer literacy levels and anticipate variability in competency in use of the computer and internet. PMID:22798537

  17. An Expert System for the Evaluation of Cost Models

    DTIC Science & Technology

    1990-09-01

    ...contrast to the condition of equal error variance, called homoscedasticity (Reference: Applied Linear Regression Models by John Neter, page 423). ...normal (Reference: Applied Linear Regression Models by John Neter, page 125). ...Error terms correlated over time are said to be autocorrelated or serially correlated (Reference: Applied Linear Regression Models by John Neter).

  18. Performance of optimum detector structures for noisy intersymbol interference channels

    NASA Technical Reports Server (NTRS)

    Womer, J. D.; Fritchman, B. D.; Kanal, L. N.

    1971-01-01

    The errors that arise in transmitting digital information over radio or wireline systems, caused by additive noise and by successively transmitted signals interfering with one another, are described. The probability of error and the performance of optimum detector structures are examined. A comparative study of the performance of certain detector structures and approximations to them, and of the performance of a transversal equalizer, is included.

  19. HyDEn: A Hybrid Steganocryptographic Approach for Data Encryption Using Randomized Error-Correcting DNA Codes

    PubMed Central

    Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge

    2013-01-01

    This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach. PMID:23984392

  20. Demand Controlled Economizer Cycles: A Direct Digital Control Scheme for Heating, Ventilating, and Air Conditioning Systems,

    DTIC Science & Technology

    1984-05-01

    ...Control ignored any error of 1/10th degree or less. This was done by setting the error term E and the integral sum PREINT to zero. ...If the absolute value of... signs of two errors: jeq tdiff (if equal, jump); clr @preint (else zero the integral sum); tdiff: mov @diff,rl (fetch absolute value of OAT-RAT); ci rl,25 (is...). ...includes a heating coil and thermostatic control to maintain the air in this path at an elevated temperature, typically around 80 degrees Fahrenheit (80 F...

  1. Response Surface Analysis of Experiments with Random Blocks

    DTIC Science & Technology

    1988-09-01

    ...partitioned into a lack of fit sum of squares, SSLOF, and a pure error sum of squares, SSPE. The latter is obtained by pooling the pure error sums of squares ...from the blocks. Tests concerning the polynomial effects can then proceed using SSPE as the error term in the denominators of the F test statistics. ...the center point in each of the three blocks is equal to SSPE = 2.0127 with 5 degrees of freedom. Hence, the lack of fit sum of squares is SSLOF...

  2. An integrated 12.5-Gb/s optoelectronic receiver with a silicon avalanche photodetector in standard SiGe BiCMOS technology.

    PubMed

    Youn, Jin-Sung; Lee, Myung-Jae; Park, Kang-Yeob; Rücker, Holger; Choi, Woo-Young

    2012-12-17

    An optoelectronic integrated circuit (OEIC) receiver is realized with standard 0.25-μm SiGe BiCMOS technology for 850-nm optical interconnect applications. The OEIC receiver consists of a Si avalanche photodetector, a transimpedance amplifier with a DC-balanced buffer, a tunable equalizer, and a limiting amplifier. The fabricated OEIC receiver successfully detects 12.5-Gb/s 2³¹-1 pseudorandom bit sequence optical data with the bit-error rate less than 10⁻¹² at incident optical power of -7 dBm. The OEIC core has 1000 μm × 280 μm chip area, and consumes 59 mW from a 2.5-V supply. To the best of our knowledge, this OEIC receiver achieves the highest data rate with the best sensitivity as well as the best power efficiency among integrated OEIC receivers fabricated with standard Si technology.

  3. A pattern jitter free AFC scheme for mobile satellite systems

    NASA Technical Reports Server (NTRS)

    Yoshida, Shousei

    1993-01-01

    This paper describes a scheme for pattern jitter free automatic frequency control (AFC) with a wide frequency acquisition range. In this scheme, equalizing signals fed to the frequency discriminator allow pattern jitter free performance to be achieved for all roll-off factors. In order to define the acquisition range, frequency discrimination characteristics are analyzed on a newly derived frequency domain model. As a result, it is shown that a sufficiently wide acquisition range over a given system symbol rate can be achieved independent of symbol timing errors. Additionally, computer simulation demonstrates that frequency jitter performance improves in proportion to Eb/N0 because pattern-dependent jitter is suppressed in the discriminator output. These results show significant promise for application to mobile satellite systems, which feature relatively low symbol rate transmission with an approximately 0.4-0.7 roll-off factor.

  4. L'effet Doppler et le décalage vers le rouge en mécanique rationnelle: applications et verifications experimentales.

    PubMed

    Loiseaus, J

    1968-07-01

    Since the shifts z and z' toward the red of the galaxy NGC 5668 for a 21 cm beam, z measured in radioastronomy with a frequency meter and z' measured in optics with a spectrograph, are not equal, it follows that the speed of light from a galaxy, c', is not equal to the speed of light c measured on earth from a stationary source. The empirical Doppler formula cannot be explained in classical mechanics, since it is in contradiction with it. In the theory of relativity, c' = c by postulate and z' = z. If we consider the universe represented as a three-dimensional, non-Euclidean space (H) with a Euclidean connection, embedded in a Riemannian four-dimensional space (E), a certain universal time, like that of an astronomer, can be defined and its course calculated; it necessarily coincides with atomic clock time, but c' is not equal to c and z' is not equal to z: the Doppler formula is not exact. However, c' and c, as well as z' and z, are so close in all the experiments carried out on earth, even when an artificial satellite is used, that the errors made in using the Doppler formula are clearly smaller than the experimental errors.

  5. Spatial range of illusory effects in Müller-Lyer figures.

    PubMed

    Predebon, J

    2001-11-01

    The spatial range of the illusory effects in Müller-Lyer (M-L) figures was examined in three experiments. Experiments 1 and 2 assessed the pattern of bisection errors along the shaft of the standard or double-angle (experiment 1) and the single-angle (experiment 2) M-L figures: Subjects bisected the shaft and the resulting two half-segments of the shaft to produce apparently equal quarters, and then each of the quarters to produce eight equal-appearing segments. The bisection judgments of each segment were referenced to the segment's physical midpoints. The expansion or wings-out and the contraction or wings-in figures yielded similar patterns of bisection errors. For the standard M-L figures, there were significant errors in bisecting each half, and each end-quarter, but not the two central quarters of the shaft. For the single-angle M-L figures, there were significant errors in bisecting the length of the shaft, the half-segment, and the quarter, of the shaft adjacent to the vertex but not the second quarter from the vertex nor in dividing the half of the shaft at the open end of the figure into four equal intervals. Experiment 3 assessed the apparent length of the half-segment of the shaft at the open end of the single-angle figures. Length judgments were unaffected by the vertex at the opposite end of the shaft. Taken together, the results indicate that the length distortions in both the standard and single-angle M-L figures are not uniformly distributed along the shaft but rather are confined mainly to the quarters adjacent to the vertices. The present findings imply that theories of the M-L illusion which assume uniform expansion or contraction of the shafts are incomplete.

  6. Contact-free palm-vein recognition based on local invariant features.

    PubMed

    Kang, Wenxiong; Liu, Yang; Wu, Qiuxia; Yue, Xishun

    2014-01-01

    Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs), respectively, which demonstrate the effectiveness of the proposed approach.
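
    Since the reported figure of merit is the equal error rate, a generic sketch of how an EER is obtained from genuine and impostor match scores may be useful; the score distributions below are assumptions, not data from this paper. Reporting the threshold alongside the EER makes the operating point reproducible.

      import numpy as np

      rng = np.random.default_rng(4)

      # Assumed match-score distributions: higher scores mean a better match.
      genuine = rng.normal(0.75, 0.08, 2000)     # same-palm comparisons
      impostor = rng.normal(0.45, 0.10, 20000)   # different-palm comparisons

      # Sweep the decision threshold and find where FAR and FRR cross.
      thresholds = np.linspace(0.0, 1.0, 2001)
      far = np.array([(impostor >= t).mean() for t in thresholds])   # false accept rate
      frr = np.array([(genuine < t).mean() for t in thresholds])     # false reject rate

      i = np.argmin(np.abs(far - frr))
      eer = (far[i] + frr[i]) / 2.0
      print(f"EER = {100 * eer:.3f}% at threshold {thresholds[i]:.3f}")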

  7. Contact-Free Palm-Vein Recognition Based on Local Invariant Features

    PubMed Central

    Kang, Wenxiong; Liu, Yang; Wu, Qiuxia; Yue, Xishun

    2014-01-01

    Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs), respectively, which demonstrate the effectiveness of the proposed approach. PMID:24866176

  8. Data on empirically estimated corporate survival rate in Russia.

    PubMed

    Kuzmin, Evgeny A

    2018-02-01

    The article presents data on the corporate survival rate in Russia in 1991-2014. The empirical survey was based on a random sample with the average number of non-repeated observations (number of companies) for the survey each year equal to 75,958 (24,236 minimum and 126,953 maximum). The actual limiting mean error Δp was 2.24% at a 99% confidence level. The survey methodology was based on a cross joining of various formal periods in the corporate life cycles (legal and business), which, under a number of assumptions, makes it possible to speak of a conventionally active lifetime of companies. The empirical survey values were grouped by Russian regions and industries according to the classifier and consolidated into a single database for analysing corporate life cycles and survival rates and searching for deviation dependencies in the calculated parameters. Preliminary and incomplete figures were available in the paper entitled "Survival Rate and Lifecycle in Terms of Uncertainty: Review of Companies from Russia and Eastern Europe" (Kuzmin and Guseva, 2016) [3]. The further survey led to filtered, processed data with clerical errors excluded. These particular values are available in the article. The survey intended to fill a fact-based gap in various fundamental surveys that involved matters of the corporate life cycle in Russia within an insufficient statistical framework. The data are of interest for an analysis of Russian entrepreneurship, assessment of market development and incorporation risks in the current business environment. Further heuristic potential is achievable through forecasting changes in business demography and building models based on the representative data set.

  9. POWER-ENHANCED MULTIPLE DECISION FUNCTIONS CONTROLLING FAMILY-WISE ERROR AND FALSE DISCOVERY RATES.

    PubMed

    Peña, Edsel A; Habiger, Joshua D; Wu, Wensong

    2011-02-01

    Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures such as the Šidák procedure for FWER control and the Benjamini-Hochberg (BH) procedure for FDR control is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p-values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures could be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p-value based procedures whose theoretical validity is contingent on each of these p-value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. Proposed procedures are relevant in the analysis of high-dimensional "large M, small n" data sets arising in the natural, physical, medical, economic and social sciences, whose generation and creation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology.
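
    For reference, a minimal sketch of the standard Benjamini-Hochberg step-up procedure that serves as the FDR-control baseline here; the p-values are made up, and the power-weighted procedures proposed by the authors are not reproduced.

      import numpy as np

      def benjamini_hochberg(pvals, q=0.05):
          """Standard BH step-up: reject the k smallest p-values, where k is the largest
          index with p_(k) <= k * q / m."""
          p = np.asarray(pvals)
          m = len(p)
          order = np.argsort(p)
          thresholds = q * np.arange(1, m + 1) / m
          passed = np.nonzero(p[order] <= thresholds)[0]
          reject = np.zeros(m, dtype=bool)
          if passed.size:
              reject[order[: passed[-1] + 1]] = True
          return reject

      pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
      print(benjamini_hochberg(pvals, q=0.05))   # rejects the two smallest p-values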

  10. Wireless clinical alerts and patient outcomes in the surgical intensive care unit.

    PubMed

    Major, Kevin; Shabot, M Michael; Cunneen, Scott

    2002-12-01

    Errors in medicine have gained public interest since the Institute of Medicine published its 1999 report on this subject. Although errors of commission are frequently cited, errors of omission can be equally serious. A computerized surgical intensive care unit (SICU) information system, when coupled to an event-driven alerting engine, has the potential to reduce errors of omission for critical intensive care unit events. Automated alerts and patient outcomes were prospectively collected for all patients admitted to a tertiary-care SICU over a 2-year period. During the study period 3,973 patients were admitted to the SICU and received 13,608 days of care. A total of 15,066 alert pages were sent, including alerts for physiologic condition (6,163), laboratory data (4,951), blood gas (3,774), drug allergy (130), and toxic drug levels (48). Admission Simplified Acute Physiology Score and Acute Physiology and Chronic Health Evaluation II score, SICU lengths of stay, and overall mortality rates were significantly higher in patients who triggered the alerting system. Patients triggering the alert paging system were 49.4 times more likely to die in the SICU compared with patients who did not generate an alert. Even after transfer to floor care, the patients who triggered the alerting system were 5.7 times more likely to die in the hospital. An alert page identifies patients who will stay in the SICU longer and have a significantly higher chance of death compared with patients who do not trigger the alerting system.

  11. Can the impact of gender equality on health be measured? A cross-sectional study comparing measures based on register data with individual survey-based data.

    PubMed

    Sörlin, Ann; Öhman, Ann; Ng, Nawi; Lindholm, Lars

    2012-09-17

    The aim of this study was to investigate potential associations between gender equality at work and self-rated health. 2861 employees in 21 companies were invited to participate in a survey. The mean response rate was 49.2%. The questionnaire contained 65 questions, mainly on gender equality and health. Two logistic regression analyses were conducted to assess associations between (i) self-rated health and a register-based company gender equality index (OGGI), and (ii) self-rated health and self-rated gender equality at work. Even though no association was found between the OGGI and health, women who rated their company as "completely equal" or "quite equal" had higher odds of reporting "good health" compared to women who perceived their company as "not equal" (OR = 2.8, 95% CI = 1.4-5.5; OR = 2.73, 95% CI = 1.6-4.6). Although not statistically significant, we observed the same trends in men. The results were adjusted for age, highest education level, income, full or part-time employment, and type of company based on the OGGI. No association was found between gender equality in companies, measured by register-based index (OGGI), and health. However, perceived gender equality at work positively affected women's self-rated health but not men's. Further investigations are necessary to determine whether the results are fully credible given the contemporary health patterns and positions in the labour market of women and men or whether the results are driven by selection patterns.

  12. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
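
    The "copy the protograph N times, then permute the edges" step mentioned above is the standard lifting construction. The following is a minimal sketch of that idea under simplifying assumptions: the base matrix is hypothetical, has only 0/1 entries, and parallel edges and punctured variable nodes used in real protograph designs are omitted.

      import numpy as np

      rng = np.random.default_rng(0)

      def lift_protograph(base, N):
          """Copy-and-permute (lifting) construction for a protograph code.

          Each 1 in the base (protograph) matrix is replaced by a random
          N x N permutation matrix, each 0 by an N x N zero block.
          """
          rows, cols = base.shape
          H = np.zeros((rows * N, cols * N), dtype=np.uint8)
          for i in range(rows):
              for j in range(cols):
                  if base[i, j]:
                      perm = rng.permutation(N)                 # random edge permutation
                      H[i*N:(i+1)*N, j*N:(j+1)*N] = np.eye(N, dtype=np.uint8)[perm]
          return H

      # Hypothetical 2 x 4 protograph (not the code family in the article)
      base = np.array([[1, 1, 1, 0],
                       [0, 1, 1, 1]], dtype=np.uint8)
      H = lift_protograph(base, N=8)    # parity-check matrix of the lifted code
      print(H.shape)                    # (16, 32): nominal rate 1/2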

  13. Bias-field equalizer for bubble memories

    NASA Technical Reports Server (NTRS)

    Keefe, G. E.

    1977-01-01

    Magnetoresistive Permalloy sensor monitors bias field required to maintain bubble memory. Sensor provides error signal that, in turn, corrects magnitude of bias field. Error signal from sensor can be used to control magnitude of bias field in either auxiliary set of bias-field coils around permanent magnet field, or current in small coils used to remagnetize permanent magnet by infrequent, short, high-current pulse or short sequence of pulses.

  14. Using Thin-Film Thermometers as Heaters in Thermal Control Applications

    NASA Technical Reports Server (NTRS)

    Cho, Hyung J.; Penanen, Konstantin; Sukhatme, Kalyani G.; Holmes, Warren A.; Courts, Scott

    2010-01-01

    A cryogenic sensor maintains calibration at approximately 4.2 K to better than 2 mK (< 0.5 percent resistance repeatability) after being heated to approximately 40 K with approximately 0.5 W power. The sensor withstands 4 W power dissipation when immersed in liquid nitrogen with verified resistance reproducibility of, at worst, 1 percent. The sensor maintains calibration to 0.1 percent after being heated with 1-W power at approximately 77 K for a period of 48 hours. When operated with a readout scheme that is capable of mitigating the self-heating calibration errors, this and similar sensors can be used for precision (mK stability) temperature control without the need of separate heaters and associated wiring/cabling.

  15. Performance Evaluation of MIMO-UWB Systems Using Measured Propagation Data and Proposal of Timing Control Scheme in LOS Environments

    NASA Astrophysics Data System (ADS)

    Takanashi, Masaki; Nishimura, Toshihiko; Ogawa, Yasutaka; Ohgane, Takeo

    Ultrawide-band impulse radio (UWB-IR) technology and multiple-input multiple-output (MIMO) systems have attracted interest regarding their use in next-generation high-speed radio communication. We have studied the use of MIMO ultrawide-band (MIMO-UWB) systems to enable higher-speed radio communication. We used frequency-domain equalization based on the minimum mean square error criterion (MMSE-FDE) to reduce intersymbol interference (ISI) and co-channel interference (CCI) in MIMO-UWB systems. Because UWB systems are expected to be used for short-range wireless communication, MIMO-UWB systems will usually operate in line-of-sight (LOS) environments and direct waves will be received at the receiver side. Direct waves have high power and cause high correlations between antennas in such environments. Thus, it is thought that direct waves will adversely affect the performance of spatial filtering and equalization techniques used to enhance signal detection. To examine the feasibility of MIMO-UWB systems, we conducted MIMO-UWB system propagation measurements in LOS environments. From the measurements, we found that the arrival time of direct waves from different transmitting antennas depends on the MIMO configuration. Because we can obtain high power from the direct waves, direct wave reception is critical for maximizing transmission performance. In this paper, we present our measurement results, and propose a way to improve performance using a method of transmit (Tx) and receive (Rx) timing control. We evaluate the bit error rate (BER) performance for this form of timing control using measured channel data.
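
    The MMSE-FDE idea referenced above weights each frequency bin by the conjugate channel response divided by the channel power plus the noise variance. The following is a minimal single-stream sketch of that weighting, assuming unit-power symbols and a cyclic prefix; the paper's MIMO receiver, which jointly suppresses ISI and CCI across antennas, is more involved and is not reproduced here.

      import numpy as np

      def mmse_fde(received, channel_impulse_response, noise_var, block_len):
          """Single-stream MMSE frequency-domain equalization sketch."""
          H = np.fft.fft(channel_impulse_response, block_len)   # channel frequency response
          R = np.fft.fft(received, block_len)
          W = np.conj(H) / (np.abs(H) ** 2 + noise_var)         # MMSE weight per tone
          return np.fft.ifft(W * R)

      # Toy example: BPSK block through a 3-tap channel (circular convolution)
      rng = np.random.default_rng(1)
      block_len = 64
      symbols = rng.choice([-1.0, 1.0], size=block_len)
      h = np.array([0.8, 0.5, 0.3])
      noise_var = 0.01
      received = np.fft.ifft(np.fft.fft(h, block_len) * np.fft.fft(symbols)).real
      received += rng.normal(scale=np.sqrt(noise_var), size=block_len)
      est = mmse_fde(received, h, noise_var, block_len)
      print(np.mean(np.sign(est.real) == symbols))   # fraction of correct decisions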

  16. Learning to classify in large committee machines

    NASA Astrophysics Data System (ADS)

    O'kane, Dominic; Winther, Ole

    1994-10-01

    The ability of a two-layer neural network to learn a specific non-linearly-separable classification task, the proximity problem, is investigated using a statistical mechanics approach. Both the tree and fully connected architectures are investigated in the limit where the number K of hidden units is large, but still much smaller than the number N of inputs. Both have continuous weights. Within the replica symmetric ansatz, we find that for zero temperature training, the tree architecture exhibits a strong overtraining effect. For nonzero temperature the asymptotic error is lowered, but it is still higher than the corresponding value for the simple perceptron. The fully connected architecture is considered for two regimes. First, for a finite number of examples we find a symmetry among the hidden units as each performs equally well. The asymptotic generalization error is finite, and minimal for T → ∞, where it goes to the same value as for the simple perceptron. For a large number of examples we find a continuous transition to a phase with broken hidden-unit symmetry, which has an asymptotic generalization error equal to zero.

  17. A theory for predicting composite laminate warpage resulting from fabrication

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1974-01-01

    Linear laminate theory is used with the moment-curvature relationship to derive equations for predicting end deflections due to warpage without solving the coupled fourth-order partial differential equations of the plate. Composite micro- and macro-mechanics are used with laminate theory to assess the contribution of factors such as ply misorientation, fiber migration, and fiber and/or void volume ratio nonuniformity on the laminate warpage. Using these equations, it was found that a 1 deg error in the orientation angle of one ply was sufficient to produce warpage end deflection equal to two laminate thicknesses in a 10 inch by 10 inch laminate made from 8 ply Mod-I/epoxy. Using a sensitivity analysis on the governing parameters, it was found that a 3 deg fiber migration or a void volume ratio of three percent in some plies is sufficient to produce laminate warpage corner deflection equal to several laminate thicknesses. Tabular and graphical data are presented which can be used to identify possible errors contributing to laminate warpage and/or to obtain an a priori assessment when unavoidable errors during fabrication are anticipated.

  18. Nonlinear filter based decision feedback equalizer for optical communication systems.

    PubMed

    Han, Xiaoqi; Cheng, Chi-Hao

    2014-04-07

    Nonlinear impairments in optical communication system have become a major concern of optical engineers. In this paper, we demonstrate that utilizing a nonlinear filter based Decision Feedback Equalizer (DFE) with error detection capability can deliver a better performance compared with the conventional linear filter based DFE. The proposed algorithms are tested in simulation using a coherent 100 Gb/sec 16-QAM optical communication system in a legacy optical network setting.
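
    A decision feedback equalizer combines a feedforward filter on received samples with a feedback filter on past symbol decisions. The following is a minimal fixed-tap DFE sketch for real BPSK over a toy channel; it only illustrates the DFE structure, not the paper's nonlinear-filter DFE with error detection for coherent 16-QAM, and all taps and signals are made up.

      import numpy as np

      def dfe_bpsk(received, ff_taps, fb_taps):
          """Minimal decision feedback equalizer for BPSK with fixed taps."""
          n_ff, n_fb = len(ff_taps), len(fb_taps)
          padded = np.concatenate([np.zeros(n_ff - 1), received])
          decisions = np.zeros(len(received))
          past = np.zeros(n_fb)                        # most recent decision first
          for k in range(len(received)):
              ff_in = padded[k:k + n_ff][::-1]         # current and past received samples
              y = np.dot(ff_taps, ff_in) - np.dot(fb_taps, past)
              decisions[k] = 1.0 if y >= 0 else -1.0   # hard decision
              past = np.concatenate([[decisions[k]], past[:-1]])
          return decisions

      # Toy channel 1 + 0.5 z^-1: cancel the postcursor with one feedback tap
      rng = np.random.default_rng(2)
      bits = rng.choice([-1.0, 1.0], size=200)
      rx = np.convolve(bits, [1.0, 0.5])[:200] + rng.normal(scale=0.05, size=200)
      out = dfe_bpsk(rx, ff_taps=[1.0], fb_taps=[0.5])
      print(np.mean(out == bits))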

  19. Female equality and suicide in the Indian states.

    PubMed

    Mayer, Peter

    2003-06-01

    Indian suicide rates rose by 76% in the 10 years between 1984 and 1994. In this study of the 16 principal states of India, male and female suicide rates in 1994 were associated with measures of equal education for men and women. Male suicide rates were associated with equal life expectancy for men and women. Equal income for women and men was not associated with suicide rates. Unlike earlier studies, no inverse association was found between equal attainment in education and suicide sex ratios. The Indian findings thus do not conform to patterns found in more developed economies. Given increasing human development in India, it seems probable that suicide rates in that country may increase two to three times over coming decades.

  20. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
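
    The voxel-by-voxel evidence measure at the core of the Likelihood paradigm is a likelihood ratio. As a minimal sketch, assuming Gaussian voxel values with known standard deviation and a simple "effect of size delta" versus "no effect" comparison (the paper's spatio-temporal modelling and evidence benchmarks are not reproduced), the ratio can be computed per voxel as follows; all numbers are illustrative.

      import numpy as np

      def voxel_likelihood_ratios(data, delta, sigma):
          """Voxel-wise likelihood ratio of 'effect of size delta' vs 'no effect'.

          data: array of shape (n_scans, n_voxels), assumed Gaussian with
          known standard deviation sigma after pre-whitening.
          """
          n = data.shape[0]
          s = data.sum(axis=0)
          log_lr = (delta * s - n * delta**2 / 2.0) / sigma**2
          return np.exp(log_lr)

      rng = np.random.default_rng(3)
      n_scans, n_voxels = 50, 1000
      data = rng.normal(0.0, 1.0, size=(n_scans, n_voxels))
      data[:, :20] += 0.5                       # 20 truly 'active' voxels
      lr = voxel_likelihood_ratios(data, delta=0.5, sigma=1.0)
      print((lr > 32).sum(), "voxels with strong evidence")   # threshold is illustrative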

  1. Differences among Job Positions Related to Communication Errors at Construction Sites

    NASA Astrophysics Data System (ADS)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  2. A step-up test procedure to find the minimum effective dose.

    PubMed

    Wang, Weizhen; Peng, Jianan

    2015-01-01

    It is of great interest to find the minimum effective dose (MED) in dose-response studies. A sequence of decreasing null hypotheses to find the MED is formulated under the assumption of nondecreasing dose response means. A step-up multiple test procedure that controls the familywise error rate (FWER) is constructed based on the maximum likelihood estimators for the monotone normal means. When the MED is equal to one, the proposed test is uniformly more powerful than Hsu and Berger's test (1999). Also, a simulation study shows a substantial power improvement for the proposed test over four competitors. Three R-codes are provided in Supplemental Materials for this article. Go to the publisher's online edition of the Journal of Biopharmaceutical Statistics to view the files.

  3. Identifying people from gait pattern with accelerometers

    NASA Astrophysics Data System (ADS)

    Ailisto, Heikki J.; Lindholm, Mikko; Mantyjarvi, Jani; Vildjiounaite, Elena; Makela, Satu-Marja

    2005-03-01

    Protecting portable devices is becoming more important, not only because of the value of the devices themselves, but for the value of the data in them and their capability for transactions, including m-commerce and m-banking. An unobtrusive and natural method for identifying the carrier of portable devices is presented. The method uses acceleration signals produced by sensors embedded in the portable device. When the user carries the device, the acceleration signal is compared with the stored template signal. The method consists of finding individual steps, normalizing and averaging them, aligning them with the template and computing cross-correlation, which is used as a measure of similarity. Equal Error Rate of 6.4% is achieved in tentative experiments with 36 test subjects.
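
    The pipeline outlined above (segment steps, normalize and average them, align with the stored template, use cross-correlation as similarity) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: step segmentation, sensor axes, and the acceptance threshold are all hypothetical placeholders.

      import numpy as np

      def gait_similarity(probe_steps, template):
          """Similarity between accelerometer gait cycles and a stored template.

          Each detected step is resampled to the template length, the steps
          are averaged, and the peak normalized cross-correlation with the
          template is returned as the similarity score.
          """
          L = len(template)
          resampled = [np.interp(np.linspace(0, 1, L),
                                 np.linspace(0, 1, len(s)), s) for s in probe_steps]
          probe = np.mean(resampled, axis=0)
          probe = (probe - probe.mean()) / probe.std()
          tmpl = (template - template.mean()) / template.std()
          corr = np.correlate(probe, tmpl, mode="full") / L   # alignment via cross-correlation
          return corr.max()

      # Toy usage: accept the carrier if similarity exceeds a verification threshold
      rng = np.random.default_rng(4)
      template = np.sin(np.linspace(0, 2 * np.pi, 100))
      steps = [np.sin(np.linspace(0, 2 * np.pi, n)) + rng.normal(0, 0.1, n)
               for n in (95, 102, 98)]
      print(gait_similarity(steps, template) > 0.8)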

  4. Visualization and statistical comparisons of microbial communities using R packages on Phylochip data.

    PubMed

    Holmes, Susan; Alekseyenko, Alexander; Timme, Alden; Nelson, Tyrrell; Pasricha, Pankaj Jay; Spormann, Alfred

    2011-01-01

    This article explains the statistical and computational methodology used to analyze species abundances collected using the LBNL PhyloChip in a study of Irritable Bowel Syndrome (IBS) in rats. Some tools already available for the analysis of ordinary microarray data are useful in this type of statistical analysis. For instance, in correcting for multiple testing we use Family Wise Error rate control and step-down tests (available in the multtest package). Once the most significant species are chosen, we use the hypergeometric tests familiar for testing GO categories to test specific phyla and families. We provide examples of normalization, multivariate projections, batch effect detection and integration of phylogenetic covariation, as well as tree equalization and robustification methods.
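
    The hypergeometric enrichment test mentioned above asks whether a phylum is over-represented among the significant species. The study itself used R packages; the following is a Python analogue with made-up counts, shown only to make the test concrete.

      from scipy.stats import hypergeom

      def enrichment_pvalue(total_species, phylum_size, n_significant, overlap):
          """Hypergeometric enrichment test, as used for GO-category-style tests.

          Probability of seeing at least `overlap` members of a phylum among
          `n_significant` species drawn from `total_species`, of which
          `phylum_size` belong to the phylum.
          """
          return hypergeom.sf(overlap - 1, total_species, phylum_size, n_significant)

      # Hypothetical counts: 8741 probed taxa, 120 in the phylum,
      # 300 significant taxa, 12 of them in the phylum.
      print(enrichment_pvalue(8741, 120, 300, 12))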

  5. Error Correction, Control Systems and Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Smith, Earl B.

    2004-01-01

    This paper will be a discussion on dealing with errors. While error correction and communication are important when dealing with spacecraft vehicles, the issue of control system design is also important. There will be certain commands that one wants a motion device to execute. An adequate control system will be necessary to make sure that the instruments and devices will receive the necessary commands. As will be discussed later, the actual value will not always be equal to the intended or desired value. Hence, an adequate controller will be necessary so that the gap between the two values will be closed.
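
    The feedback idea described above (drive the gap between desired and actual values toward zero) can be illustrated with a trivial proportional controller. This is only a sketch of the general feedback principle, not the fuzzy-logic design discussed in the paper; gain and step count are arbitrary.

      def proportional_control(desired, actual, gain=0.5, steps=20):
          """Minimal proportional controller closing the gap between the
          desired and actual value over a number of control steps."""
          history = []
          for _ in range(steps):
              error = desired - actual        # error signal
              actual += gain * error          # actuate proportionally to the error
              history.append(actual)
          return history

      print(proportional_control(desired=10.0, actual=0.0)[-1])   # converges toward 10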

  6. Annual survival and recruitment in a Ruby-throated Hummingbird population, excluding the effect of transient individuals

    USGS Publications Warehouse

    Hilton, B.; Miller, M.W.

    2003-01-01

    We estimated annual apparent survival, recruitment, and rate of population growth of breeding Ruby-throated Hummingbirds (Archilochus colubris), while controlling for transients, by using 18 years of capture-mark-recapture data collected during 1984-2001 at Hilton Pond Center for Piedmont Natural History near York, South Carolina. Resident males had lower apparent survival (0.30 +/- 0.05 SE) than females (0.43 +/- 0.04). Estimates of apparent survival did not differ by age. Point estimates suggested that newly banded males were less likely than females to be residents, but standard errors of these estimates overlapped (males: 0.60 +/- 0.14 SE; females: 0.67 +/- 0.09). Estimated female recruitment was 0.60 +/- 0.06 SE, meaning that 60% of adult females present in any given year had entered the population during the previous year. Our estimate for rate of change indicated the population of female hummingbirds was stable during the study period (1.04 +/- 0.04 SE). We suggest an annual goal of greater than or equal to 64 adult females and greater than or equal to 64 immature females released per banding area to enable rigorous future tests for effects of covariates on population dynamics. Development of a broader cooperating network of hummingbird banders in eastern North America could allow tests for regional or metapopulation dynamics in this species.

  7. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon <1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.

  8. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, which has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress and a few practical issues, providing accurate palm vein readings has remained an unsolved issue in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of WS generated features is quite large, SRKDA is required to reduce the extracted features to enhance the discrimination. The results, based on two public databases (the PolyU Hyperspectral Palmprint database and the PolyU Multispectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER)=0.1%] for the hyperspectral database and a 99.97% identification rate and a 99.98% verification rate (EER=0.019%) for the multispectral database.
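
    An equal error rate such as the EER = 0.1% reported above is estimated by sweeping the verification threshold until the false accept rate and false reject rate coincide. The following is a minimal sketch of that estimation from genuine and impostor match scores; the score distributions used here are synthetic, not the palm vein data.

      import numpy as np

      def equal_error_rate(genuine_scores, impostor_scores):
          """Estimate the equal error rate (EER) from verification scores.

          Sweeps the decision threshold over all observed scores and returns
          the error rate at the point where the false accept rate and the
          false reject rate are closest.
          """
          thresholds = np.unique(np.concatenate([genuine_scores, impostor_scores]))
          best_gap, eer = 1.0, None
          for t in thresholds:
              far = np.mean(impostor_scores >= t)   # false accept rate
              frr = np.mean(genuine_scores < t)     # false reject rate
              if abs(far - frr) < best_gap:
                  best_gap, eer = abs(far - frr), (far + frr) / 2.0
          return eer

      rng = np.random.default_rng(5)
      genuine = rng.normal(0.8, 0.1, 2000)    # hypothetical match scores
      impostor = rng.normal(0.4, 0.1, 2000)
      print(equal_error_rate(genuine, impostor))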

  9. Integrated Data and Control Level Fault Tolerance Techniques for Signal Processing Computer Design

    DTIC Science & Technology

    1990-09-01

    TOLERANCE TECHNIQUES FOR SIGNAL PROCESSING COMPUTER DESIGN G. Robert Redinbo I. INTRODUCTION High-speed signal processing is an important application of...techniques and mathematical approaches will be expanded later to the situation where hardware errors and roundoff and quantization noise affect all...detect errors equal in number to the degree of g(X), the maximum permitted by the Singleton bound [13]. Real cyclic codes, primarily applicable to

  10. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  11. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  12. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  13. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  14. A tight Cramér-Rao bound for joint parameter estimation with a pure two-mode squeezed probe

    NASA Astrophysics Data System (ADS)

    Bradshaw, Mark; Assad, Syed M.; Lam, Ping Koy

    2017-08-01

    We calculate the Holevo Cramér-Rao bound for estimation of the displacement experienced by one mode of a two-mode squeezed vacuum state with squeezing r and find that it is equal to 4 exp(-2r). This equals the sum of the mean squared error obtained from a dual homodyne measurement, indicating that the bound is tight and that the dual homodyne measurement is optimal.

  15. Mass predictions from the Garvey-Kelson mass relations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaenecke, J.; Masson, P.J.

    Part A: The transverse Garvey-Kelson mass relation represents a homogeneous third-order partial difference equation. Procedures are described for estimating masses of nuclei with N ≥ Z from the most general solution of this difference equation subject to a χ² minimization, using the recent atomic mass adjustment of Wapstra, Audi, and Hoekstra as a boundary condition. A judicious division of the input data in subsets of neutron-rich and proton-rich nuclei had to be introduced to reduce systematic errors in long-range extrapolations. Approximately 5600 mass-excess values for nuclei with 2 ≤ Z ≤ 103, 4 ≤ N ≤ 157, and N ≥ Z (except N = Z odd for A < 40) have been calculated. The standard deviation for reproducing the known mass-excess values is σ_m ≈ 103 keV.

  16. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.

  17. An educational and audit tool to reduce prescribing error in intensive care.

    PubMed

    Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D

    2008-10-01

    To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.

  18. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    PubMed

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analysing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the preanalytic, analytic and postanalytical phases was analysed. Improvement strategies were reclaimed in the monthly intradepartmental meetings and the control of the units with high error rates was provided. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided the reduction of the error rates mainly in the pre-analytic and analytic phases.

  19. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by them. We also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
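
    The multiple regression step described above can be sketched as an ordinary least-squares fit of error counts on candidate drivers. The data, variable names, and coefficients below are entirely hypothetical and stand in for the mission data, which is not reproduced.

      import numpy as np

      # Hypothetical per-period data: files radiated, commands per file,
      # subjective workload, operational novelty, and observed error count.
      rng = np.random.default_rng(6)
      n = 40
      files    = rng.integers(5, 60, n)
      commands = rng.integers(50, 500, n)
      workload = rng.uniform(0, 1, n)
      novelty  = rng.uniform(0, 1, n)
      errors   = 0.02 * files + 0.002 * commands + 2.0 * workload + rng.normal(0, 0.5, n)

      # Multiple regression of error counts on the candidate drivers
      X = np.column_stack([np.ones(n), files, commands, workload, novelty])
      coef, residuals, rank, _ = np.linalg.lstsq(X, errors, rcond=None)
      predicted = X @ coef
      r2 = 1 - np.sum((errors - predicted) ** 2) / np.sum((errors - errors.mean()) ** 2)
      print("coefficients:", np.round(coef, 3), " R^2:", round(r2, 3))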

  20. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    Efforts have been ongoing for a decade to reach the GRACE baseline accuracy predicted earlier from design simulations. The GRACE error budget is dominated by noise from the sensors, dealiasing models and modeling errors. GRACE range-rate residuals contain these errors, so their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from these two datasets, respectively, and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals. Correlations between range frequency noise and range-rate residuals are also seen.

  1. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.

  2. On the effect of surface emissivity on temperature retrievals. [for meteorology

    NASA Technical Reports Server (NTRS)

    Kornfield, J.; Susskind, J.

    1977-01-01

    The paper is concerned with errors in temperature retrieval caused by incorrectly assuming that surface emissivity is equal to unity. An error equation that applies to present-day atmospheric temperature sounders is derived, and the bias errors resulting from various emissivity discrepancies are calculated. A model of downward flux is presented and used to determine the effective downward flux. In the 3.7-micron region of the spectrum, emissivities of 0.6 to 0.9 have been observed over land. At a surface temperature of 290 K, if the true emissivity is 0.6 and unit emissivity is assumed, the error would be approximately 11 C. In the 11-micron region, the maximum deviation of the surface emissivity from unity was 0.05.

  3. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
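
    The weighting and the subject-level bootstrap described above can be sketched as follows. This is only an illustration of the IPTW weight construction (1/ps for treated, 1/(1-ps) for controls) and of resampling subjects to get a bootstrap standard error; the paper evaluates weighted Cox models for survival outcomes, whereas a weighted mean difference on a generic outcome stands in here for brevity, and the data are simulated.

      import numpy as np

      def iptw_ate(treated, outcome, propensity):
          """Average treatment effect estimate with IPTW weights."""
          w = np.where(treated == 1, 1.0 / propensity, 1.0 / (1.0 - propensity))
          mean_treated = np.average(outcome[treated == 1], weights=w[treated == 1])
          mean_control = np.average(outcome[treated == 0], weights=w[treated == 0])
          return mean_treated - mean_control

      def bootstrap_se(treated, outcome, propensity, n_boot=500, seed=7):
          """Subject-level bootstrap standard error of the IPTW estimate."""
          rng = np.random.default_rng(seed)
          n = len(outcome)
          stats = []
          for _ in range(n_boot):
              idx = rng.integers(0, n, n)           # resample subjects with replacement
              stats.append(iptw_ate(treated[idx], outcome[idx], propensity[idx]))
          return np.std(stats, ddof=1)

      # Hypothetical data with known propensity scores
      rng = np.random.default_rng(8)
      n = 1000
      ps = rng.uniform(0.2, 0.8, n)
      treated = (rng.uniform(size=n) < ps).astype(int)
      outcome = 1.0 * treated + rng.normal(size=n)
      print(iptw_ate(treated, outcome, ps), "+/-", bootstrap_se(treated, outcome, ps))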

  4. Lod scores for gene mapping in the presence of marker map uncertainty.

    PubMed

    Stringham, H M; Boehnke, M

    2001-07-01

    Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
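
    One reading of the "weighted sum of maximum likelihoods" statistic described above is to combine, at each test position, the likelihood ratios obtained under each plausible marker order, weighted by the posterior probability of that order, and report the log10 of the sum. The following sketch assumes that interpretation; the lod curves and posteriors are made-up numbers.

      import numpy as np

      def weighted_lod(lod_curves, order_posteriors):
          """Combine multipoint lod score curves over plausible marker orders.

          lod_curves: array (n_orders, n_positions), one row per marker order;
          order_posteriors: posterior probability of each order.
          Returns log10 of the posterior-weighted sum of likelihood ratios.
          """
          lod_curves = np.asarray(lod_curves)
          w = np.asarray(order_posteriors)[:, None]
          return np.log10(np.sum(w * 10.0 ** lod_curves, axis=0))

      # Two plausible orders with posteriors 0.7 and 0.3, five test positions
      lods = [[0.2, 1.1, 2.8, 1.5, 0.3],
              [0.1, 0.4, 1.0, 2.2, 1.8]]
      print(np.round(weighted_lod(lods, [0.7, 0.3]), 2))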

  5. Evaluating true BCI communication rate through mutual information and language models.

    PubMed

    Speier, William; Arnold, Corey; Pouratian, Nader

    2013-01-01

    Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal-to-noise ratio of electroencephalographic (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical to make progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
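
    The basic building block of a mutual-information metric is the plug-in estimate of I(intended; selected) from a confusion matrix of selections. The sketch below shows only that building block with made-up counts and an implicit uniform prior; the paper's metric additionally incorporates language-model priors and a model of systematic errors, which are not reproduced here.

      import numpy as np

      def mutual_information_bits(confusion_counts):
          """Mutual information (bits/selection) between intended and selected
          characters, estimated from a confusion-count matrix."""
          joint = np.asarray(confusion_counts, dtype=float)
          joint /= joint.sum()
          px = joint.sum(axis=1, keepdims=True)      # intended-character marginal
          py = joint.sum(axis=0, keepdims=True)      # selected-character marginal
          nz = joint > 0
          return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

      # Hypothetical 3-symbol confusion counts (rows: intended, cols: selected)
      counts = [[90, 5, 5],
                [4, 88, 8],
                [6, 7, 87]]
      print(round(mutual_information_bits(counts), 3), "bits per selection")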

  6. 40 CFR 63.6611 - By what date must I conduct the initial performance tests or other initial compliance...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... stationary RICE with a site rating of greater than or equal to 250 and less than or equal to 500 brake HP... demonstrations if I own or operate a 4SLB SI stationary RICE with a site rating of greater than or equal to 250... operate a new or reconstructed 4SLB stationary RICE with a site rating of greater than or equal to 250 and...

  7. 5 CFR 575.507 - What is the maximum extended assignment incentive that may be paid for a period of service?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... greater of— (1) An amount equal to 25 percent of the annual rate of basic pay of the employee at the... periods equals 546 days, and 546 days divided by 365 days equals 1.50 years. ... rate employees who do not have a scheduled annual rate of basic pay, the annual rate in paragraph (a...

  8. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.

  9. Adaptive software-defined coded modulation for ultra-high-speed optical transport

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Zhang, Yequn

    2013-10-01

    In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the appropriate code rate matching the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them will be described in this invited paper. Instead of conventional QAM based modulation schemes, we employ the signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of number of bits per symbol × code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection, combined with LDPC coding, in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of information infrastructure, high energy consumption, and heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs the spatial modes as additional basis functions for multidimensional coded-modulation.
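
    The rate-adaptation rule described above (pick the constellation size and code rate whose product is closest to, but not above, the channel capacity at the monitored OSNR) can be sketched as a simple table lookup. The modulation and code-rate menus, the Gaussian-channel capacity estimate, and the implementation margin below are hypothetical placeholders, not the paper's design.

      import numpy as np

      def select_mode(snr_db, bits_per_symbol_options=(2, 3, 4, 6),
                      code_rates=(0.5, 0.6, 0.75, 0.8, 0.9), margin_db=1.0):
          """Pick the (bits/symbol, code rate) pair maximizing spectral
          efficiency = bits/symbol x code rate, subject to staying below a
          Gaussian-channel capacity estimate at the monitored SNR minus a
          fixed implementation margin."""
          snr_lin = 10.0 ** ((snr_db - margin_db) / 10.0)
          capacity = np.log2(1.0 + snr_lin)          # bits/symbol upper bound
          best = None
          for m in bits_per_symbol_options:
              for r in code_rates:
                  eff = m * r
                  if eff <= capacity and (best is None or eff > best[0]):
                      best = (eff, m, r)
          return best                                 # (efficiency, bits/symbol, code rate)

      for osnr_db in (6.0, 10.0, 16.0):
          print(osnr_db, "dB ->", select_mode(osnr_db))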

  10. A randomised controlled trial of cognitive aids for emergency airway equipment preparation in a Paediatric Emergency Department.

    PubMed

    Long, Elliot; Fitzpatrick, Patrick; Cincotta, Domenic R; Grindlay, Joanne; Barrett, Michael Joseph

    2016-01-27

    Safety of emergency intubation may be improved by standardising equipment preparation; the efficacy of cognitive aids is unknown. This randomised controlled trial compared no cognitive aid (control) with the use of a checklist or picture template for emergency airway equipment preparation in the Emergency Department of The Royal Children's Hospital, Melbourne. Sixty-three participants were recruited, 21 randomised to each group. Equal numbers of nursing, junior medical, and senior medical staff were included in each group. Compared to controls, the checklist and template groups had significantly lower equipment omission rates (median 30% IQR 20-40% control, median 10% IQR 5-10% checklist, median 10% IQR 5-20% template; p < 0.05). The combined omission rate and sizing error rate was lower using a checklist or template (median 35% IQR 30-45% control, median 15% IQR 10-20% checklist, median 15% IQR 10-30% template; p < 0.05). The template group had less variation in equipment location compared to checklist or controls. There was no significant difference in preparation time in controls (mean 3 min 14 s sd 56 s) compared to checklist (mean 3 min 46 s sd 1 min 15 s) or template (mean 3 min 6 s sd 49 s; p = 0.06). Template use reduces variation in airway equipment location during preparation for emergency intubation, with an equivalent reduction in equipment omission rate to the use of a checklist. The use of a template for equipment preparation and a checklist for team, patient, and monitoring preparation may provide the best combination of both cognitive aids. The use of a cognitive aid for emergency airway equipment preparation reduces errors of omission. Template utilisation reduces variation in equipment location. Australian and New Zealand Trials Registry (ACTRN12615000541505).

  11. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straight forward manner. Finally, an exact expression of error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be much easily evaluated as compared to the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  12. 40 CFR 63.6601 - What emission limitations must I meet if I own or operate a 4SLB stationary RICE with a site...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... if I own or operate a 4SLB stationary RICE with a site rating of greater than or equal to 250 brake... RICE with a site rating of greater than or equal to 250 brake HP and less than 500 brake HP located at... reconstructed 4SLB stationary RICE with a site rating of greater than or equal to 250 and less than or equal to...

  13. Fire ants perpetually rebuild sinking towers.

    PubMed

    Phonekeo, Sulisay; Mlot, Nathan; Monaenkova, Daria; Hu, David L; Tovey, Craig

    2017-07-01

    In the aftermath of a flood, fire ants, Solenopsis invicta , cluster into temporary encampments. The encampments can contain hundreds of thousands of ants and reach over 30 ants high. How do ants build such tall structures without being crushed? In this combined experimental and theoretical study, we investigate the shape and rate of construction of ant towers around a central support. The towers are bell shaped, consistent with towers of constant strength such as the Eiffel tower, where each element bears an equal load. However, unlike the Eiffel tower, the ant tower is built through a process of trial and error, whereby failed portions avalanche until the final shape emerges. High-speed and novel X-ray videography reveal that the tower constantly sinks and is rebuilt, reminiscent of large multicellular systems such as human skin. We combine the behavioural rules that produce rafts on water with measurements of adhesion and attachment strength to model the rate of growth of the tower. The model correctly predicts that the growth rate decreases as the support diameter increases. This work may inspire the design of synthetic swarms capable of building in vertical layers.

  14. Fire ants perpetually rebuild sinking towers

    NASA Astrophysics Data System (ADS)

    Phonekeo, Sulisay; Mlot, Nathan; Monaenkova, Daria; Hu, David L.; Tovey, Craig

    2017-07-01

    In the aftermath of a flood, fire ants, Solenopsis invicta, cluster into temporary encampments. The encampments can contain hundreds of thousands of ants and reach over 30 ants high. How do ants build such tall structures without being crushed? In this combined experimental and theoretical study, we investigate the shape and rate of construction of ant towers around a central support. The towers are bell shaped, consistent with towers of constant strength such as the Eiffel tower, where each element bears an equal load. However, unlike the Eiffel tower, the ant tower is built through a process of trial and error, whereby failed portions avalanche until the final shape emerges. High-speed and novel X-ray videography reveal that the tower constantly sinks and is rebuilt, reminiscent of large multicellular systems such as human skin. We combine the behavioural rules that produce rafts on water with measurements of adhesion and attachment strength to model the rate of growth of the tower. The model correctly predicts that the growth rate decreases as the support diameter increases. This work may inspire the design of synthetic swarms capable of building in vertical layers.

  15. Fire ants perpetually rebuild sinking towers

    PubMed Central

    Phonekeo, Sulisay; Mlot, Nathan; Monaenkova, Daria; Tovey, Craig

    2017-01-01

    In the aftermath of a flood, fire ants, Solenopsis invicta, cluster into temporary encampments. The encampments can contain hundreds of thousands of ants and reach over 30 ants high. How do ants build such tall structures without being crushed? In this combined experimental and theoretical study, we investigate the shape and rate of construction of ant towers around a central support. The towers are bell shaped, consistent with towers of constant strength such as the Eiffel tower, where each element bears an equal load. However, unlike the Eiffel tower, the ant tower is built through a process of trial and error, whereby failed portions avalanche until the final shape emerges. High-speed and novel X-ray videography reveal that the tower constantly sinks and is rebuilt, reminiscent of large multicellular systems such as human skin. We combine the behavioural rules that produce rafts on water with measurements of adhesion and attachment strength to model the rate of growth of the tower. The model correctly predicts that the growth rate decreases as the support diameter increases. This work may inspire the design of synthetic swarms capable of building in vertical layers. PMID:28791170

  16. Effect of bar-code technology on the safety of medication administration.

    PubMed

    Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K

    2010-05-06

    Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society

  17. Battery Cell By-Pass Circuit

    NASA Technical Reports Server (NTRS)

    Mumaw, Susan J. (Inventor); Evers, Jeffrey (Inventor); Craig, Calvin L., Jr. (Inventor); Walker, Stuart D. (Inventor)

    2001-01-01

    The invention is a circuit and method for limiting the charging current and voltage from a power supply network applied to an individual cell of a plurality of cells making up a battery being charged in series. It is particularly designed for use with batteries that can be damaged by overcharging, such as lithium-ion batteries. In detail, the method includes the following steps: 1) sensing the actual voltage level of the individual cell; 2) comparing the actual voltage level of the individual cell with a reference value and providing an error signal representative thereof; and 3) by-passing the charging current around the individual cell as necessary to keep the individual cell voltage level generally equal to a specific voltage level while continuing to charge the remaining cells. Preferably this is accomplished by by-passing the charging current around the individual cell if the actual voltage level is above the specific voltage level and allowing the charging current to reach the individual cell if the actual voltage level is equal to or less than the specific voltage level. In the step of by-passing the charging current, the by-passed current is transferred at a proper voltage level to the power supply. In the by-pass circuit, a voltage comparison circuit is used to compare the actual voltage level of the individual cell with a reference value and to provide an error signal representative thereof. A third circuit, designed to be responsive to the error signal, is provided for maintaining the individual cell voltage level generally equal to the specific voltage level. Circuitry is provided in the third circuit that by-passes charging current around the individual cell if the actual voltage level is above the specific voltage level and transfers the excess charging current to the power supply network. The circuitry also allows charging of the individual cell if the actual voltage level is equal to or less than the specific voltage level.

  18. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

    For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. Human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluations. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
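
    A minimal sketch of the metric described above (the number of errors divided by the number of opportunities for error); the counts in the example are illustrative, not data from the study.

    def human_error_rate(errors: int, opportunities: int) -> float:
        """Human error rate: errors divided by opportunities for error."""
        if opportunities == 0:
            raise ValueError("need at least one opportunity for error")
        return errors / opportunities

    # e.g. a trayline that assembled 420 trays with 5 items each (2,100 opportunities)
    # and logged 63 errors would run at a 3% daily error rate.
    print(f"{human_error_rate(errors=63, opportunities=420 * 5):.1%}")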

  19. Toward developing a standardized Arabic continuous text reading chart.

    PubMed

    Alabdulkader, Balsam; Leat, Susan Jennifer

    Near visual acuity is an essential measurement during an oculo-visual assessment. Short duration continuous text reading charts measure reading acuity and other aspects of reading performance. There is no standardized version of such a chart in Arabic. The aim of this study is to create sentences of equal readability to use in the development of a standardized Arabic continuous text reading chart. Initially, 109 pairs of Arabic sentences were created for use in constructing a chart with a layout similar to the Colenbrander chart. They were created to have the same grade level of difficulty and physical length. Fifty-three adults and sixteen children were recruited to validate the sentences. Reading speed in correct words per minute (CWPM) and standard length words per minute (SLWPM) was measured and errors were counted. Criteria based on reading speed and errors made in each sentence pair were used to exclude sentence pairs with more outlying characteristics, and to select the final group of sentence pairs. Forty-five sentence pairs were selected according to the elimination criteria. For adults, the average reading speed for the final sentences was 166 CWPM and 187 SLWPM and the average number of errors per sentence pair was 0.21. Children's average reading speed for the final group of sentences was 61 CWPM and 72 SLWPM. Their average error rate was 1.71. The reliability analysis showed that the final 45 sentence pairs are highly comparable. They will be used in constructing an Arabic short duration continuous text reading chart. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
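
    The two reading-speed measures used above can be computed as sketched below. The six-character "standard-length word" is a common convention in the reading-acuity literature and is assumed here, as are the example word, character, and timing counts.

    def reading_speed_cwpm(correct_words: int, seconds: float) -> float:
        """Correct words per minute (CWPM)."""
        return correct_words * 60.0 / seconds

    def reading_speed_slwpm(n_characters: int, seconds: float, chars_per_slw: int = 6) -> float:
        """Standard-length words per minute (SLWPM), assuming 6 characters per standard word."""
        return (n_characters / chars_per_slw) * 60.0 / seconds

    # Illustrative sentence pair: 14 words read correctly, 78 characters, in 5.2 s.
    print(round(reading_speed_cwpm(14, 5.2)))    # ~162 CWPM
    print(round(reading_speed_slwpm(78, 5.2)))   # ~150 SLWPM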

  20. Additive Runge-Kutta Schemes for Convection-Diffusion-Reaction Equations

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    2001-01-01

    Additive Runge-Kutta (ARK) methods are investigated for application to the spatially discretized one-dimensional convection-diffusion-reaction (CDR) equations. First, accuracy, stability, conservation, and dense output are considered for the general case when N different Runge-Kutta methods are grouped into a single composite method. Then, implicit-explicit, N = 2, additive Runge-Kutta ARK2 methods from third- to fifth-order are presented that allow for integration of stiff terms by an L-stable, stiffly-accurate explicit, singly diagonally implicit Runge-Kutta (ESDIRK) method while the nonstiff terms are integrated with a traditional explicit Runge-Kutta method (ERK). Coupling error terms are of equal order to those of the elemental methods. Derived ARK2 methods have vanishing stability functions for very large values of the stiff scaled eigenvalue, |z^[I]| → ∞, and retain high stability efficiency in the absence of stiffness, z^[I] → 0. Extrapolation-type stage-value predictors are provided based on dense-output formulae. Optimized methods minimize both leading order ARK2 error terms and Butcher coefficient magnitudes as well as maximize conservation properties. Numerical tests of the new schemes on a CDR problem show negligible stiffness leakage and near classical order convergence rates. However, tests on three simple singular-perturbation problems reveal generally predictable order reduction. Error control is best managed with a PID-controller. While results for the fifth-order method are disappointing, both the new third- and fourth-order methods are at least as efficient as existing ARK2 methods while offering error control and stage-value predictors.
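
    As a minimal illustration of the implicit-explicit splitting described above (and not of the third- to fifth-order ARK2 schemes themselves), the sketch below advances a toy diffusion-advection problem with a first-order IMEX Euler step, treating the stiff diffusion term implicitly and the nonstiff advection term explicitly. All parameter values are illustrative.

    import numpy as np

    def imex_euler_step(u, dt, A_stiff, f_nonstiff):
        """One step of (I - dt*A_stiff) u_{n+1} = u_n + dt * f_nonstiff(u_n)."""
        rhs = u + dt * f_nonstiff(u)
        return np.linalg.solve(np.eye(len(u)) - dt * A_stiff, rhs)

    # Toy 1-D diffusion (stiff) + advection (nonstiff) on a periodic grid.
    N, dx, dt = 64, 1.0 / 64, 1.0e-3
    lap = (np.eye(N, k=-1) - 2 * np.eye(N) + np.eye(N, k=1)) / dx**2
    lap[0, -1] = lap[-1, 0] = 1.0 / dx**2               # periodic wrap-around
    advect = lambda u: -(np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

    x = np.linspace(0.0, 1.0, N, endpoint=False)
    u = np.exp(-100.0 * (x - 0.5) ** 2)                 # initial Gaussian pulse
    for _ in range(100):
        u = imex_euler_step(u, dt, 0.01 * lap, advect)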

  1. The influence of the structure and culture of medical group practices on prescription drug errors.

    PubMed

    Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer

    2005-08-01

    This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships of the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and how the interaction of practice structural and cultural attributes influence error rates would add important insights into our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.

  2. Sample size determination in combinatorial chemistry.

    PubMed Central

    Zhao, P L; Zambias, R; Bolognese, J A; Boulton, D; Chapman, K

    1995-01-01

    Combinatorial chemistry is gaining wide appeal as a technique for generating molecular diversity. Among the many combinatorial protocols, the split/recombine method is quite popular and particularly efficient at generating large libraries of compounds. In this process, polymer beads are equally divided into a series of pools and each pool is treated with a unique fragment; then the beads are recombined, mixed to uniformity, and redivided equally into a new series of pools for the subsequent couplings. The deviation from the ideal equimolar distribution of the final products is assessed by a special overall relative error, which is shown to be related to the Pearson statistic. Although the split/recombine sampling scheme is quite different from those used in analysis of categorical data, the Pearson statistic is shown to still follow a chi2 distribution. This result allows us to derive the required number of beads such that, with 99% confidence, the overall relative error is controlled to be less than a pregiven tolerable limit L1. In this paper, we also discuss another criterion, which determines the required number of beads so that, with 99% confidence, all individual relative errors are controlled to be less than a pregiven tolerable limit L2 (0 < L2 < 1). PMID:11607586
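
    The paper derives the required bead count in closed form from the Pearson statistic and the chi-square distribution. The sketch below is only a Monte-Carlo illustration of the underlying question, using an assumed number of pools and coupling steps and a simple root-mean-square definition of the overall relative error rather than the paper's exact criterion.

    import numpy as np

    rng = np.random.default_rng(0)

    def overall_relative_error(n_beads: int, n_pools: int, n_steps: int) -> float:
        """Split beads uniformly at random at each coupling step and measure the RMS
        deviation of the final product counts from the ideal equimolar distribution."""
        assignments = rng.integers(0, n_pools, size=(n_beads, n_steps))
        codes = (assignments * n_pools ** np.arange(n_steps)).sum(axis=1)
        counts = np.bincount(codes, minlength=n_pools ** n_steps)
        expected = n_beads / counts.size
        return float(np.sqrt(np.mean((counts - expected) ** 2)) / expected)

    # e.g. 3 coupling steps with 10 pools each (1,000 possible products):
    for n_beads in (10_000, 100_000, 1_000_000):
        err = np.mean([overall_relative_error(n_beads, 10, 3) for _ in range(5)])
        print(n_beads, round(float(err), 3))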

  3. Evaluation of multiple-channel OFDM based airborne ultrasonic communications.

    PubMed

    Jiang, Wentao; Wright, William M D

    2016-09-01

    Orthogonal frequency division multiplexing (OFDM) modulation has been extensively used in both wired and wireless communication systems. The use of OFDM technology allows very high spectral efficiency data transmission without using complex equalizers to correct the effect of a frequency-selective channel. This work investigated OFDM methods in an airborne ultrasonic communication system, using commercially available capacitive ultrasonic transducers operating at 50kHz to transmit information through the air. Conventional modulation schemes such as binary phase shift keying (BPSK) and quadrature amplitude modulation (QAM) were used to modulate sub-carrier signals, and the performances were evaluated in an indoor laboratory environment. Line-of-sight (LOS) transmission range up to 11m with no measurable errors was achieved using BPSK at a data rate of 45kb/s and a spectral efficiency of 1b/s/Hz. By implementing a higher order modulation scheme (16-QAM), the system data transfer rate was increased to 180kb/s with a spectral efficiency of 4b/s/Hz at attainable transmission distances up to 6m. Diffraction effects were incorporated into a model of the ultrasonic channel that also accounted for beam spread and attenuation in air. The simulations were a good match to the measured signals and non-LOS signals could be demodulated successfully. The effects of multipath interference were also studied in this work. By adding cyclic prefix (CP) to the OFDM symbols, the bit error rate (BER) performance was significantly improved in a multipath environment. Copyright © 2016 Elsevier B.V. All rights reserved.
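
    As a rough illustration of the modulation chain evaluated above, the sketch below simulates BPSK-modulated OFDM sub-carriers over an additive white Gaussian noise channel with NumPy. The transducer response, carrier up-conversion, cyclic prefix, and the acoustic channel itself are omitted, and all parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sub, n_symbols, snr_db = 64, 200, 8

    bits = rng.integers(0, 2, size=(n_symbols, n_sub))
    tx_freq = 2 * bits - 1                          # BPSK mapping: 0 -> -1, 1 -> +1
    tx_time = np.fft.ifft(tx_freq, axis=1)          # one OFDM symbol per row

    # Additive white Gaussian noise at the chosen SNR (per-sample signal power).
    sig_pow = np.mean(np.abs(tx_time) ** 2)
    noise_pow = sig_pow / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_pow / 2) * (rng.standard_normal(tx_time.shape)
                                      + 1j * rng.standard_normal(tx_time.shape))

    rx_freq = np.fft.fft(tx_time + noise, axis=1)   # demodulate back to sub-carriers
    bits_hat = (rx_freq.real > 0).astype(int)
    print("bit error rate:", np.mean(bits_hat != bits))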

  4. Automated segmentation of geographic atrophy using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Wang, Ziyuan; Sadda, SriniVas R.

    2018-02-01

    Geographic atrophy (GA) is an end-stage manifestation of advanced age-related macular degeneration (AMD), the leading cause of blindness and visual impairment in developed nations. Techniques to rapidly and precisely detect and quantify GA would appear to be of critical importance in advancing the understanding of its pathogenesis. In this study, we develop an automated supervised classification system using deep convolutional neural networks (CNNs) for segmenting GA in fundus autofluorescence (FAF) images. More specifically, to enhance the contrast of GA relative to the background, we apply contrast limited adaptive histogram equalization. Blood vessels may cause GA segmentation errors because their intensity level is similar to that of GA. A tensor-voting technique is performed to identify the blood vessels and a vessel inpainting technique is applied to suppress the GA segmentation errors due to the blood vessels. To handle the large variation of GA lesion sizes, three deep CNNs with three different input image patch sizes are applied. Fifty randomly chosen FAF images are obtained from fifty subjects with GA. The algorithm-defined GA regions are compared with manual delineation by a certified grader. A two-fold cross-validation is applied to evaluate the algorithm performance. The mean segmentation accuracy, true positive rate (i.e. sensitivity), true negative rate (i.e. specificity), positive predictive value, false discovery rate, and overlap ratio between the algorithm- and manually-defined GA regions are 0.97 ± 0.02, 0.89 ± 0.08, 0.98 ± 0.02, 0.87 ± 0.12, 0.13 ± 0.12, and 0.79 ± 0.12, respectively, demonstrating a high level of agreement.
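
    The agreement metrics reported above can be computed from binary algorithm and grader masks as sketched below. The overlap ratio is interpreted here as intersection over union, which may differ from the paper's exact definition, and the toy masks are purely illustrative.

    import numpy as np

    def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth)
        tn = np.sum(~pred & ~truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        return {
            "accuracy": (tp + tn) / pred.size,
            "sensitivity": tp / (tp + fn),           # true positive rate
            "specificity": tn / (tn + fp),           # true negative rate
            "ppv": tp / (tp + fp),                   # positive predictive value
            "fdr": fp / (tp + fp),                   # false discovery rate
            "overlap": tp / np.sum(pred | truth),    # intersection over union
        }

    pred = np.array([[1, 1, 0], [0, 1, 0]])          # toy algorithm mask
    truth = np.array([[1, 0, 0], [0, 1, 1]])         # toy grader mask
    print(segmentation_metrics(pred, truth))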

  5. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
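
    For context on how such discrete detector arrays are read out, the sketch below implements the standard four-detector difference-over-sum position estimate for a filament beam in a round pipe. It is a generic illustration under simple assumptions, not the study's simulation, but it shows the kind of discrete azimuthal sampling that gives rise to the aliasing errors discussed above.

    import numpy as np

    def detector_signals(x, y, R, n_det=4):
        """Image-current amplitude at n_det equally spaced wall detectors for a
        filament beam at (x, y) inside a round pipe of radius R."""
        phis = 2 * np.pi * np.arange(n_det) / n_det
        r, theta = np.hypot(x, y), np.arctan2(y, x)
        return (R**2 - r**2) / (R**2 + r**2 - 2 * R * r * np.cos(phis - theta))

    def estimated_position(sig, R, k=0.5):
        """Difference-over-sum estimate; k*R is the usual linear calibration factor."""
        right, top, left, bottom = sig
        x_est = k * R * (right - left) / (right + left)
        y_est = k * R * (top - bottom) / (top + bottom)
        return x_est, y_est

    R = 10.0                                                      # pipe radius (arbitrary units)
    print(estimated_position(detector_signals(1.0, 0.5, R), R))   # close to (1.0, 0.5) near the axis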

  6. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  7. A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication.

    PubMed

    Yang, Ching-Han; Chang, Chin-Chun; Liang, Deron

    2018-03-28

    All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we proposed a novel Gaussian mixture model (GMM)-based method that can improve the traditional GMM in modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication-an equal error rate (EER) of 4.62% in the simulated environment and an EER of 7.86% in the real-traffic environment-confirm the feasibility of this approach.
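
    As an illustration of how an equal error rate is obtained, the sketch below sweeps a decision threshold over genuine and impostor match scores until the false acceptance and false rejection rates cross. The score distributions are synthetic, not data from the study.

    import numpy as np

    def equal_error_rate(genuine_scores, impostor_scores):
        """Return the EER: the operating point where false acceptance equals false rejection."""
        thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
        best_gap, eer = np.inf, None
        for t in thresholds:
            frr = np.mean(genuine_scores < t)     # genuine attempts rejected
            far = np.mean(impostor_scores >= t)   # impostor attempts accepted
            if abs(far - frr) < best_gap:
                best_gap, eer = abs(far - frr), (far + frr) / 2
        return eer

    rng = np.random.default_rng(2)
    genuine = rng.normal(2.0, 1.0, 1000)    # higher scores for the enrolled driver
    impostor = rng.normal(0.0, 1.0, 1000)
    print(f"EER ~ {equal_error_rate(genuine, impostor):.2%}")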

  8. Online 3D Ear Recognition by Combining Global and Local Features.

    PubMed

    Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David

    2016-01-01

    The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.

  9. Online 3D Ear Recognition by Combining Global and Local Features

    PubMed Central

    Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David

    2016-01-01

    The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%. PMID:27935955

  10. Finding intonational boundaries using acoustic cues related to the voice source

    NASA Astrophysics Data System (ADS)

    Choi, Jeung-Yoon; Hasegawa-Johnson, Mark; Cole, Jennifer

    2005-10-01

    Acoustic cues related to the voice source, including harmonic structure and spectral tilt, were examined for relevance to prosodic boundary detection. The measurements considered here comprise five categories: duration, pitch, harmonic structure, spectral tilt, and amplitude. Distributions of the measurements and statistical analysis show that the measurements may be used to differentiate between prosodic categories. Detection experiments on the Boston University Radio Speech Corpus show equal error detection rates around 70% for accent and boundary detection, using only the acoustic measurements described, without any lexical or syntactic information. Further investigation of the detection results shows that duration and amplitude measurements, and, to a lesser degree, pitch measurements, are useful for detecting accents, while all voice source measurements except pitch measurements are useful for boundary detection.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidlauskas, D.P.

    A new type of biometric identifier which utilizes hand outline measurements made in three dimensions is described. This device uses solid state imaging with no moving parts. The important characteristics of accuracy, speed, user tolerability, small template size, low power, portability and reliability are discussed. A complete stand-alone biometric access control station with sufficient memory for 10,000 users and weighing less than 10 pounds has been built and tested. A test was conducted involving daily use by 112 users over a seven week period during which over 6300 access attempts were made. The single try equal error rate was found to be 0.4%. There were no false rejects when three tries were allowed before access was denied. Defeat with an artifact is difficult because the hand must be copied in all three dimensions.

  12. Accuracy of Estimating Solar Radiation Pressure for GEO Debris with Tumbling Effect

    NASA Astrophysics Data System (ADS)

    Chao, Chia-Chun George

    2009-03-01

    The accuracy of estimating solar radiation pressure for GEO debris is examined and demonstrated, via numerical simulations, by fitting a batch (months) of simulated position vectors. These simulated position vectors are generated from a "truth orbit" with added white noise using high-precision numerical integration tools. After the long-arc fit of the simulated observations (position vectors), one can accurately and reliably determine how close the estimated value of solar radiation pressure is to the truth. Results of this study show that the inherent accuracy in estimating the solar radiation pressure coefficient can be as good as 1% if a long-arc fit span up to 180 days is used and the satellite is not tumbling. The corresponding position prediction accuracy can be as good as, in maximum error, 1 km along in-track, 0.3 km along radial and 0.1 km along cross-track up to 30 days. Similar accuracies can be expected when the object is tumbling as long as the rate of attitude change is different from the orbit rate. Results of this study reveal an important phenomenon that the solar radiation pressure significantly affects the orbit motion when the spin rate is equal to the orbit rate.

  13. A comparison of visual and kinesthetic-tactual displays for compensatory tracking

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.

    1983-01-01

    Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under certain conditions it can be an effective alternative or supplement to visual displays. In order to better understand how KT tracking compares with visual tracking, both critical tracking and stationary single-axis tracking tasks were conducted with and without velocity quickening. In the critical tracking task, the visual displays were superior; however, the quickened KT display was approximately equal to the unquickened visual display. In stationary tracking tasks, subjects adopted lag equalization with the quickened KT and visual displays, and mean-squared error scores were approximately equal. With the unquickened displays, subjects adopted lag-lead equalization, and the visual displays were superior. This superiority was partly due to the servomotor lag in the implementation of the KT display and partly due to modality differences.

  14. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    PubMed Central

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033

  15. Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps

    DOE PAGES

    Isotalo, Aarno; Pusa, Maria

    2016-05-01

    The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant sub-steps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
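
    The cost saving described above comes from reusing one matrix factorization across identical substeps. The sketch below illustrates that structure with a plain backward-Euler solver for a toy decay chain; it is not CRAM itself, but it forms the LU decomposition once and reuses it on every substep, which is the pattern the abstract describes for CRAM's linear solves.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def advance_with_substeps(A, n0, t, n_sub):
        """Advance dN/dt = A*N (constant A) over time t using n_sub identical substeps."""
        dt = t / n_sub
        lu = lu_factor(np.eye(A.shape[0]) - dt * A)   # factor once
        n = n0.copy()
        for _ in range(n_sub):                        # reuse the factorization on every substep
            n = lu_solve(lu, n)
        return n

    # Toy two-nuclide chain: N1 -> N2 -> (stable), decay constants 1.0 and 0.1.
    A = np.array([[-1.0, 0.0],
                  [ 1.0, -0.1]])
    print(advance_with_substeps(A, np.array([1.0, 0.0]), t=5.0, n_sub=100))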

  16. Differences in Steady-State Net Ammonium and Nitrate Influx by Cold- and Warm-Adapted Barley Varieties 1

    PubMed Central

    Bloom, Arnold J.; Chapin, F. Stuart

    1981-01-01

    A flowing nutrient culture system permitted relatively rapid determination of the steady-state net nitrogen influx by an intact barley (Hordeum vulgare L. cv Kombar and Olli) plant. Ion-selective electrodes monitored the depletion of ammonium and nitrate from a nutrient solution after a single pass through a root cuvette. Influx at concentrations as low as 4 micromolar was measured. Standard errors for a sample size of three plants were typically less than 10% of the mean. When grown under identical conditions, a variety of barley bred for cold soils had higher nitrogen influx rates at low concentrations and low temperatures than one bred for warm soils, whereas the one bred for warm soils had higher influx rates at high concentrations and high temperatures. Ammonium was more readily absorbed than nitrate by both varieties at all concentrations and temperatures tested. Ammonium and nitrate influx in both varieties were equally inhibited by low temperatures. PMID:16662052

  17. Automatic Selection of Clinical Trials Based on A Semantic Web Approach.

    PubMed

    Cuggia, Marc; Campillo-Gimenez, Boris; Bouzille, Guillaume; Besana, Paolo; Jouini, Wassim; Dufour, Jean-Charles; Zekri, Oussama; Gibaud, Isabelle; Garde, Cyril; Duvauferier, Regis

    2015-01-01

    Recruitment of patients into clinical trials is a pressing concern, as inclusion rates are particularly low. The main identified factors are the multiplicity of open clinical trials, the high number and complexity of eligibility criteria, and the additional workload that systematically searching for the clinical trials a patient could be enrolled in represents for a physician. The principal objective of the ASTEC project is to automate the prescreening phase during multidisciplinary meetings (MDM). This paper presents the evaluation of a computerized recruitment support system (CRSS) based on a semantic web approach. The evaluation of the system was based on data collected retrospectively from a 6-month period of MDMs in urology and on 4 clinical trials of prostate cancer. The classification performance of the ASTEC system had a precision of 21%, recall of 93%, and an error rate equal to 37%. Missing data were the main issue encountered. The system was designed to be both scalable to other clinical domains and usable during the MDM process.

  18. The acquisition of conditioned responding.

    PubMed

    Harris, Justin A

    2011-04-01

    This report analyzes the acquisition of conditioned responses in rats trained in a magazine approach paradigm. Following the suggestion by Gallistel, Fairhurst, and Balsam (2004), Weibull functions were fitted to the trial-by-trial response rates of individual rats. These showed that the emergence of responding was often delayed, after which the response rate would increase relatively gradually across trials. The fit of the Weibull function to the behavioral data of each rat was equaled by that of a cumulative exponential function incorporating a response threshold. Thus, the growth in conditioning strength on each trial can be modeled by the derivative of the exponential--a difference term of the form used in many models of associative learning (e.g., Rescorla & Wagner, 1972). Further analyses, comparing the acquisition of responding with a continuously reinforced stimulus (CRf) and a partially reinforced stimulus (PRf), provided further evidence in support of the difference term. In conclusion, the results are consistent with conventional models that describe learning as the growth of associative strength, incremented on each trial by an error-correction process.
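
    A Weibull-shaped acquisition curve of the kind described above can be fitted to trial-by-trial response rates as sketched below. The parameterization (asymptote, latency/scale, shape) is one common choice and may differ in detail from the paper's; the response data are synthetic.

    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_acquisition(trial, A, L, S):
        """Response rate with asymptote A, latency/scale L (in trials), and shape S."""
        return A * (1.0 - np.exp(-(trial / L) ** S))

    trials = np.arange(1, 101)
    rng = np.random.default_rng(3)
    true_rate = weibull_acquisition(trials, A=30.0, L=40.0, S=4.0)
    observed = rng.poisson(true_rate)                 # noisy responses per trial

    params, _ = curve_fit(weibull_acquisition, trials, observed,
                          p0=[observed.max(), 30.0, 2.0], maxfev=10_000)
    print("fitted A, L, S:", np.round(params, 2))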

  19. Improved Hip-Based Individual Recognition Using Wearable Motion Recording Sensor

    NASA Astrophysics Data System (ADS)

    Gafurov, Davrondzhon; Bours, Patrick

    In today's society the demand for reliable verification of a user's identity is increasing. Although biometric technologies based on fingerprint or iris can provide accurate and reliable recognition performance, they are inconvenient for periodic or frequent re-verification. In this paper we propose a hip-based user recognition method suitable for implicit and periodic re-verification of identity. In our approach we attach a wearable accelerometer sensor to the hip of the person and analyse the measured hip motion signal for identity verification. The main analysis steps consist of detecting gait cycles in the signal and matching two sets of detected gait cycles. Evaluating the approach on a hip data set consisting of 400 gait sequences (samples) from 100 subjects, we obtained an equal error rate (EER) of 7.5% and a rank-1 identification rate of 81.4%. These numbers are improvements of 37.5% and 11.2%, respectively, over a previous study using the same data set.

  20. Novel Estimation of Pilot Performance Characteristics

    NASA Technical Reports Server (NTRS)

    Bachelder, Edward N.; Aponso, Bimal

    2017-01-01

    Two mechanisms internal to the pilot that affect performance during a tracking task are: 1) Pilot equalization (i.e. lead/lag); and 2) Pilot gain (i.e. sensitivity to the error signal). For some applications McRuer's Crossover Model can be used to anticipate what equalization will be employed to control a vehicle's dynamics. McRuer also established approximate time delays associated with different types of equalization - the more cognitive processing that is required due to equalization difficulty, the larger the time delay. However, the Crossover Model does not predict what the pilot gain will be. A nonlinear pilot control technique, observed and coined by the authors as 'amplitude clipping', is shown to improve stability, performance, and reduce workload when employed with vehicle dynamics that require high lead compensation by the pilot. Combining linear and nonlinear methods a novel approach is used to measure the pilot control parameters when amplitude clipping is present, allowing precise measurement in real time of key pilot control parameters. Based on the results of an experiment which was designed to probe workload primary drivers, a method is developed that estimates pilot spare capacity from readily observable measures and is tested for generality using multi-axis flight data. This paper documents the initial steps to developing a novel, simple objective metric for assessing pilot workload and its variation over time across a wide variety of tasks. Additionally, it offers a tangible, easily implementable methodology for anticipating a pilot's operating parameters and workload, and an effective design tool. The model shows promise in being able to precisely predict the actual pilot settings and workload, and observed tolerance of pilot parameter variation over the course of operation. Finally, an approach is proposed for generating Cooper-Harper ratings based on the workload and parameter estimation methodology.

  1. Economic evaluation of pregnancy diagnosis in dairy cattle: a decision analysis approach.

    PubMed

    Oltenacu, P A; Ferguson, J D; Lednor, A J

    1990-10-01

    Cost-benefit evaluations of several pregnancy diagnosis schemes were performed. The strategy using on-farm milk progesterone test on d 19 after service, followed by treatment of nonpregnant cows with prostaglandin, was the most profitable returning $10.50 per cow above the cost of the intervention. An increase in efficiency of detection of estrus of greater than 20% among cows diagnosed nonpregnant and an error rate in pregnancy diagnosis of less than or equal to 3% were needed to ensure profitability. Pregnancy diagnosis by uterine palpation per rectum on d 35 after service, combined with the use of pressure-sensitive mounting devices on nonpregnant cows was the second most profitable strategy and returned $5.10 per cow. An increase in efficiency of detection of estrus of greater than or equal to 20% was required to ensure profitability. Embryonic mortality was also critical and an increase from a baseline value of 10% to 12%, as a result of early uterine palpation, made this scheme unprofitable ($-4.80 per cow). Pregnancy diagnosis by uterine palpation per rectum at 50 or 65 d was less profitable, with a return of $2.50 and $.10 per cow, respectively.

  2. Position-based coding and convex splitting for private communication over quantum channels

    NASA Astrophysics Data System (ADS)

    Wilde, Mark M.

    2017-10-01

    The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0,1). The present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.

  3. From "five" to 5 for 5 minutes: Arabic number transcoding as a short, specific, and sensitive screening tool for mathematics learning difficulties.

    PubMed

    Moura, Ricardo; Lopes-Silva, Júlia Beatriz; Vieira, Laura Rodrigues; Paiva, Giulia Moreira; Prado, Ana Carolina de Almeida; Wood, Guilherme; Haase, Vitor Geraldi

    2015-02-01

    Number transcoding (e.g., writing 29 when hearing "twenty-nine") is one of the most basic numerical abilities required in daily life and is paramount for mathematics achievement. The aim of this study is to investigate the psychometric properties of an Arabic number-writing task and its capacity to identify children with mathematics difficulties. We assessed 786 children (55% girls) from first to fourth grades, who were classified as children with mathematics difficulties (n = 103) or controls (n = 683). Although error rates were low, the task presented adequate internal consistency (0.91). Analyses revealed effective diagnostic accuracy in first and second school grades (specificity equal to 0.67 and 0.76, respectively, and sensitivity equal to 0.70 and 0.88, respectively). Moreover, items tapping the understanding of place-value syntax were the most sensitive to mathematics achievement. Overall, we propose that number transcoding is a useful tool for the assessment of mathematics abilities in early elementary school. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Dispensing error rate after implementation of an automated pharmacy carousel system.

    PubMed

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  5. Follow the Leader Tracking by Autonomous Underwater Vehicles (AUVs) Using Acoustic Communications and Ranging

    DTIC Science & Technology

    2003-09-01

    Record excerpt (fragments recovered from the source report): a cited reference, Deitel, H.M., Deitel, P.J., Nieto, T.R., Lin, T.M., and Sadhu, P., XML: How to Program, Prentice Hall, 2001; a statement that relying on acoustic communications will result in a total track following error equal to the sum of the errors for the two vehicles; and the figure captions Figure 36, "Test point programming", and Figure 17, "XML example". The report refers readers to (Hunter 2000), (Deitel 2001), or similar references for additional information regarding the XML standard.

  6. Local projection stabilization for linearized Brinkman-Forchheimer-Darcy equation

    NASA Astrophysics Data System (ADS)

    Skrzypacz, Piotr

    2017-09-01

    The Local Projection Stabilization (LPS) is presented for the linearized Brinkman-Forchheimer-Darcy equation with high Reynolds numbers. The considered equation can be used to model porous medium flows in chemical reactors of packed bed type. The detailed finite element analysis is presented for the case of nonconstant porosity. The enriched variant of LPS is based on the equal order interpolation for the velocity and pressure. The optimal error bounds for the velocity and pressure errors are justified numerically.

  7. Adaptive reconfigurable V-BLAST type equalizer for cognitive MIMO-OFDM radios

    NASA Astrophysics Data System (ADS)

    Ozden, Mehmet Tahir

    2015-12-01

    An adaptive channel shortening equalizer design for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) radio receivers is considered in this presentation. The proposed receiver has desirable features for cognitive and software defined radio implementations. It consists of two sections: a MIMO decision feedback equalizer (MIMO-DFE) and adaptive multiple Viterbi detection. In the MIMO-DFE section, a complete modified Gram-Schmidt orthogonalization of the multichannel input data is accomplished using sequential processing multichannel Givens lattice stages, so that a Vertical Bell Laboratories Layered Space Time (V-BLAST) type MIMO-DFE is realized at the front-end section of the channel shortening equalizer. Matrix operations, a major bottleneck for receiver operations, are accordingly avoided, and only scalar operations are used. A highly modular and regular radio receiver architecture is achieved that has a structure suitable for digital signal processing (DSP) chip and field programmable gate array (FPGA) implementations, which are important for software defined radio realizations. The MIMO-DFE section of the proposed receiver can also be reconfigured for spectrum sensing and positioning functions, which are important tasks for cognitive radio applications. In connection with the adaptive multiple Viterbi detection section, a systolic array implementation for each channel is performed so that a receiver architecture with high computational concurrency is attained. The total computational complexity is given in terms of the equalizer and desired response filter lengths, the alphabet size, and the number of antennas. The performance of the proposed receiver is presented for the two-channel case by means of mean squared error (MSE) and probability of error evaluations, which are conducted for time-invariant and time-variant channel conditions, orthogonal and nonorthogonal transmissions, and two different modulation schemes.
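
    As background for the Gram-Schmidt front end described above, the sketch below shows a generic modified Gram-Schmidt QR factorization in NumPy. It illustrates only the orthogonalization itself, not the sequential Givens-lattice stages, the scalar-only implementation, or the V-BLAST ordering of the proposed receiver.

    import numpy as np

    def modified_gram_schmidt(X):
        """Return Q (orthonormal columns) and upper-triangular R with X = Q @ R."""
        X = X.astype(complex).copy()
        m, n = X.shape
        Q = np.zeros((m, n), dtype=complex)
        R = np.zeros((n, n), dtype=complex)
        for k in range(n):
            R[k, k] = np.linalg.norm(X[:, k])
            Q[:, k] = X[:, k] / R[k, k]
            for j in range(k + 1, n):                 # update remaining columns in place
                R[k, j] = Q[:, k].conj() @ X[:, j]
                X[:, j] -= R[k, j] * Q[:, k]
        return Q, R

    rng = np.random.default_rng(4)
    H = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
    Q, R = modified_gram_schmidt(H)
    print(np.allclose(Q @ R, H), np.allclose(Q.conj().T @ Q, np.eye(3)))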

  8. Smartphone Versus Pen-and-Paper Data Collection of Infant Feeding Practices in Rural China

    PubMed Central

    Zhang, Shuyi; Wu, Qiong; van Velthoven, Michelle HMMT; Chen, Li; Car, Josip; Rudan, Igor; Li, Ye; Scherpbier, Robert W

    2012-01-01

    Background Maternal, Newborn, and Child Health (MNCH) household survey data are collected mainly with pen-and-paper. Smartphone data collection may have advantages over pen-and-paper, but little evidence exists on how they compare. Objective To compare smartphone data collection versus the use of pen-and-paper for infant feeding practices of the MNCH household survey. We compared the two data collection methods for differences in data quality (data recording, data entry, open-ended answers, and interrater reliability), time consumption, costs, interviewers’ perceptions, and problems encountered. Methods We recruited mothers of infants aged 0 to 23 months in four village clinics in Zhaozhou Township, Zhao County, Hebei Province, China. We randomly assigned mothers to a smartphone or a pen-and-paper questionnaire group. A pair of interviewers simultaneously questioned mothers on infant feeding practices, each using the same method (either smartphone or pen-and-paper). Results We enrolled 120 mothers, and all completed the study. Data recording errors were prevented in the smartphone questionnaire. In the 120 pen-and-paper questionnaires (60 mothers), we found 192 data recording errors in 55 questionnaires. There was no significant difference in recording variation between the groups for the questionnaire pairs (P = .32) or variables (P = .45). The smartphone questionnaires were automatically uploaded and no data entry errors occurred. We found that even after double data entry of the pen-and-paper questionnaires, 65.0% (78/120) of the questionnaires did not match and needed to be checked. The mean duration of an interview was 10.22 (SD 2.17) minutes for the smartphone method and 10.83 (SD 2.94) minutes for the pen-and-paper method, which was not significantly different between the methods (P = .19). The mean costs per questionnaire were higher for the smartphone questionnaire (¥143, equal to US $23 at the exchange rate on April 24, 2012) than for the pen-and-paper questionnaire (¥83, equal to US $13). The smartphone method was acceptable to interviewers, and after a pilot test we encountered only minor problems (eg, the system halted for a few seconds or it shut off), which did not result in data loss. Conclusions This is the first study showing that smartphones can be successfully used for household data collection on infant feeding in rural China. Using smartphones for data collection, compared with pen-and-paper, eliminated data recording and entry errors, had similar interrater reliability, and took an equal amount of time per interview. While the costs for the smartphone method were higher than the pen-and-paper method in our small-scale survey, the costs for both methods would be similar for a large-scale survey. Smartphone data collection should be further evaluated for other surveys and on a larger scale to deliver maximum benefits in China and elsewhere. PMID:22989894

  9. Attention failures versus misplaced diligence: separating attention lapses from speed-accuracy trade-offs.

    PubMed

    Seli, Paul; Cheyne, James Allan; Smilek, Daniel

    2012-03-01

    In two studies of a GO-NOGO task assessing sustained attention, we examined the effects of (1) altering speed-accuracy trade-offs through instructions (emphasizing both speed and accuracy or accuracy only) and (2) auditory alerts distributed throughout the task. Instructions emphasizing accuracy reduced errors and changed the distribution of GO trial RTs. Additionally, correlations between errors and increasing RTs produced a U-function; excessively fast and slow RTs accounted for much of the variance of errors. Contrary to previous reports, alerts increased errors and RT variability. The results suggest that (1) standard instructions for sustained attention tasks, emphasizing speed and accuracy equally, produce errors arising from attempts to conform to the misleading requirement for speed, which become conflated with attention-lapse produced errors and (2) auditory alerts have complex, and sometimes deleterious, effects on attention. We argue that instructions emphasizing accuracy provide a more precise assessment of attention lapses in sustained attention tasks. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Gating of neural error signals during motor learning

    PubMed Central

    Kimpo, Rhea R; Rinaldi, Jacob M; Kim, Christina K; Payne, Hannah L; Raymond, Jennifer L

    2014-01-01

    Cerebellar climbing fiber activity encodes performance errors during many motor learning tasks, but the role of these error signals in learning has been controversial. We compared two motor learning paradigms that elicited equally robust putative error signals in the same climbing fibers: learned increases and decreases in the gain of the vestibulo-ocular reflex (VOR). During VOR-increase training, climbing fiber activity on one trial predicted changes in cerebellar output on the next trial, and optogenetic activation of climbing fibers to mimic their encoding of performance errors was sufficient to implant a motor memory. In contrast, during VOR-decrease training, there was no trial-by-trial correlation between climbing fiber activity and changes in cerebellar output, and climbing fiber activation did not induce VOR-decrease learning. Our data suggest that the ability of climbing fibers to induce plasticity can be dynamically gated in vivo, even under conditions where climbing fibers are robustly activated by performance errors. DOI: http://dx.doi.org/10.7554/eLife.02076.001 PMID:24755290

  11. 5 CFR 536.308 - Loss of eligibility for or termination of pay retention.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... is entitled to a rate of basic pay under a covered pay system which is equal to or greater than the... equal or higher rate of basic pay during a temporary promotion or temporary reassignment but will be... determined under § 536.104) of a position in which the employee's rate of basic pay would be equal to or...

  12. Effect of crosstalk on QBER in QKD in urban telecommunication fiber lines

    NASA Astrophysics Data System (ADS)

    Kurochkin, Vladimir L.; Kurochkin, Yuriy V.; Miller, Alexander V.; Sokolov, Alexander S.; Kanapin, Alan A.

    2016-12-01

    Quantum key distribution (QKD) is being actively deployed in existing urban telecommunication networks, and QKD devices are commercially available products. When single photons are sent through optical fiber, adjacent fibers that carry classical traffic can influence the count rate of the single-photon detectors. This influence matters because it directly increases the quantum bit error rate (QBER) of the final key [1-3]. Our report presents the results of the first tests of the QKD device developed at the Russian Quantum Center. These tests were conducted in Moscow and are the first of such a device in Russia in urban optical fiber telecommunication networks. The device is based on a two-pass auto-compensating optical scheme, which provides stable single-photon transfer through urban optical fiber networks [4,5]. ID230 single-photon detectors by ID Quantique were used; they operate in free-running mode and, at a quantum efficiency of 10%, have a dark count rate of 10 Hz. The background signal level in the dedicated fiber was no less than 5.6×10⁻¹⁴ W, which corresponds to 4.4×10⁴ detector clicks per second. The single-mode fiber length in Moscow was 30.6 km, with a total attenuation of 11.7 dB. The sifted quantum key bit rate reached 1.9 kbit/s at a QBER of 5.1%.

  13. Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems

    NASA Astrophysics Data System (ADS)

    El-Ghandour, Osama M.; Saha, Debabrata

    1991-05-01

    A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.

  14. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
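
    The extrapolation step described above can be written as a binomial weighting of per-weight failure probabilities. In the sketch below, the block length and the failure rates for each error weight are illustrative placeholders, not simulation output.

    from math import comb

    def block_error_rate(p, n_qubits, fail_given_weight):
        """P(block error) = sum_w C(n, w) p^w (1-p)^(n-w) * P(fail | weight w)."""
        return sum(comb(n_qubits, w) * p**w * (1 - p)**(n_qubits - w) * f
                   for w, f in enumerate(fail_given_weight))

    n = 32                                     # illustrative block length
    # Assumed failure probabilities for error weights 0..6; higher weights are
    # negligible at the small physical error rates we extrapolate to.
    fail = [0.0, 0.0, 0.0, 0.02, 0.15, 0.45, 0.80]

    for p in (1e-2, 1e-3, 1e-4):
        print(p, block_error_rate(p, n, fail))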

  15. Executive Council lists and general practitioner files

    PubMed Central

    Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.

    1974-01-01

    An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79·2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0·8% (assignation of sex) to 68·6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588

  16. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system

    PubMed Central

    Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.

    2015-01-01

    Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702

  17. Improving the power of long term rodent carcinogenicity bioassays by adjusting the experimental design.

    PubMed

    Jackson, Matthew T

    2015-07-01

    Since long term rodent carcinogenicity studies are used to test a very large number of potential tumor endpoints, finding a balance between the control of Type 1 and Type 2 error is challenging. As a result, these studies can suffer from very low power to detect effects of regulatory significance. In the present paper, a new design is proposed in order to address this problem. This design is a simple modification of the existing standard designs and uses the same number of animals. Where it differs from the currently used designs is that it uses just three treatment groups rather than four, with the animals concentrated in the control and high-dose groups, rather than being equally distributed among the groups. This new design is tested, in a pair of simulation studies over a range of scenarios, against two currently used designs, and against a maximally powerful two-group design. It consistently performs at levels close to the optimal design, and except in the case of relatively modest effects and very rare tumors, is found to increase power by 10%-20% over the current designs while maintaining or reducing the Type 1 error rate. Published by Elsevier Inc.

  18. Performance Comparison between CDTD and STTD for DS-CDMA/MMSE-FDE with Frequency-Domain ICI Cancellation

    NASA Astrophysics Data System (ADS)

    Takeda, Kazuaki; Kojima, Yohei; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. However, residual inter-chip interference (ICI) remains after MMSE-FDE and degrades the BER performance. Recently, we showed that frequency-domain ICI cancellation can bring the BER performance close to the theoretical lower bound. To further improve the BER performance, transmit antenna diversity is effective. Cyclic delay transmit diversity (CDTD) increases the number of equivalent paths and hence achieves a large frequency diversity gain. Space-time transmit diversity (STTD) obtains antenna diversity gain through space-time coding and achieves a better BER performance than CDTD. The objective of this paper is to show that the BER degradation of CDTD is mainly due to the residual ICI and that introducing ICI cancellation gives almost the same BER performance as STTD. This yields an important result: CDTD can provide higher throughput than STTD. Computer simulation results confirm that CDTD achieves higher throughput than STTD when ICI cancellation is introduced.
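
    For orientation, a minimal single-antenna sketch of the per-frequency MMSE-FDE step is given below; CDTD/STTD, spreading, and the ICI cancellation stage discussed in the paper are omitted, and the channel taps, block length, and SNR are illustrative assumptions.

```python
import numpy as np

def mmse_fde(rx_block, channel_freq, snr_linear):
    """One-tap per-frequency MMSE equalization of one cyclic-prefixed block."""
    R = np.fft.fft(rx_block)
    W = np.conj(channel_freq) / (np.abs(channel_freq) ** 2 + 1.0 / snr_linear)
    return np.fft.ifft(W * R)

# Toy example: QPSK chips through a 3-tap channel (circular convolution models
# the cyclic prefix) plus AWGN, then equalized in the frequency domain.
rng = np.random.default_rng(0)
N, snr_db = 64, 15
h = np.array([0.8, 0.5, 0.3])
h = h / np.linalg.norm(h)
H = np.fft.fft(h, N)
bits = rng.integers(0, 2, (N, 2))
tx = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(10 ** (-snr_db / 10) / 2)
rx = np.fft.ifft(H * np.fft.fft(tx)) + noise
eq = mmse_fde(rx, H, 10 ** (snr_db / 10))
print("MSE after MMSE-FDE:", np.mean(np.abs(eq - tx) ** 2))
```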

  19. Two-Level Scheduling for Video Transmission over Downlink OFDMA Networks

    PubMed Central

    Tham, Mau-Luen

    2016-01-01

    This paper presents a two-level scheduling scheme for video transmission over downlink orthogonal frequency-division multiple access (OFDMA) networks. It aims to maximize the aggregate quality of the video users subject to playback delay and resource constraints, by exploiting multiuser diversity and the video characteristics. The upper level schedules the transmission of video packets among multiple users based on an overall target bit error rate (BER), the importance level of each packet, and a resource consumption efficiency factor. The lower level then provides unequal error protection (UEP), in terms of target BER, among the scheduled packets by solving a weighted sum distortion minimization problem, where each user weight reflects the total importance level of the packets scheduled for that user. Frequency-selective power is then water-filled over all the assigned subcarriers in order to leverage the potential channel coding gain. Realistic simulation results demonstrate that the proposed scheme outperforms a state-of-the-art scheduling scheme by up to 6.8 dB in terms of peak signal-to-noise ratio (PSNR). A further test evaluates the suitability of equal power allocation, which is a common assumption in the literature. PMID:26906398
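
    A minimal sketch of the water-filling step is shown below; the gains and power budget are toy values, and the paper's distortion weighting and UEP logic are not modeled.

```python
import numpy as np

def waterfill(gains, total_power):
    """Water-filling power allocation over parallel subcarriers.

    gains: per-subcarrier gain-to-noise ratios; the returned powers maximize
    sum(log2(1 + p_k * gains[k])) subject to sum(p_k) == total_power, p_k >= 0.
    """
    gains = np.asarray(gains, dtype=float)
    order = np.argsort(gains)[::-1]           # strongest subcarriers first
    g = gains[order]
    alloc = np.zeros_like(g)
    for k in range(len(g), 0, -1):
        level = (total_power + np.sum(1.0 / g[:k])) / k   # candidate water level
        p = level - 1.0 / g[:k]
        if p[-1] >= 0:                        # weakest active subcarrier still non-negative
            alloc[:k] = p
            break
    out = np.zeros_like(alloc)
    out[order] = alloc
    return out

p = waterfill([3.0, 1.0, 0.2], total_power=1.0)
print(p, p.sum())   # most power goes to the strongest subcarrier; total matches the budget
```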

  20. Goldmann tonometer error correcting prism: clinical evaluation.

    PubMed

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko T; Schwiegerling, Jim; Levine, Jason; Kew, Corin

    2017-01-01

    To clinically evaluate a modified applanating-surface Goldmann tonometer prism designed to substantially negate errors due to patient variability in biomechanics. A modified Goldmann prism with a correcting applanation tonometry surface (CATS) was mathematically optimized to minimize the intraocular pressure (IOP) measurement error due to patient variability in corneal thickness, stiffness, curvature, and tear film adhesion force. A comparative clinical study of 109 eyes measured IOP with the CATS and Goldmann prisms. The IOP measurement differences between the two prisms were correlated with corneal thickness, hysteresis, and curvature. In correcting for Goldmann central corneal thickness (CCT) error, the CATS prism reduced the error to <±2 mmHg in 97% of a standard CCT population, compared with only 54% using the Goldmann prism. Equal reductions of ~50% in errors due to corneal rigidity and curvature were also demonstrated. The results validate the CATS prism's improved accuracy and expected reduced sensitivity to Goldmann errors, without IOP bias, as predicted by mathematical modeling. The CATS replacement for the Goldmann prism does not change the Goldmann measurement technique or its interpretation.

  1. Parkinson Disease Detection from Speech Articulation Neuromechanics.

    PubMed

    Gómez-Vilda, Pedro; Mekyska, Jiri; Ferrández, José M; Palacios-Alonso, Daniel; Gómez-Rodellar, Andrés; Rodellar-Biarge, Victoria; Galaz, Zoltan; Smekal, Zdenek; Eliasova, Ilona; Kostalova, Milena; Rektorova, Irena

    2017-01-01

    Aim: The research described is intended to give a description of articulation dynamics as a correlate of the kinematic behavior of the jaw-tongue biomechanical system, encoded as a probability distribution of an absolute joint velocity. This distribution may be used in detecting and grading speech from patients affected by neurodegenerative illnesses such as Parkinson Disease. Hypothesis: The working hypothesis is that the probability density function of the absolute joint velocity includes information on the stability of phonation when applied to sustained vowels, as well as on fluency if applied to connected speech. Methods: A dataset of sustained vowels recorded from Parkinson Disease patients is contrasted with similar recordings from normative subjects. The probability distribution of the absolute kinematic velocity of the jaw-tongue system is extracted from each utterance. A Random Least Squares Feed-Forward Network (RLSFN) has been used as a binary classifier working on the pathological and normative datasets in a leave-one-out strategy. Monte Carlo simulations have been conducted to estimate the influence of the stochastic nature of the classifier. Two datasets were tested, one for each gender, including 26 normative and 53 pathological subjects in the male set, and 25 normative and 38 pathological subjects in the female set. Results: Male and female data subsets were tested in single runs, yielding equal error rates under 0.6% (Accuracy over 99.4%). Owing to the stochastic nature of each experiment, Monte Carlo runs were conducted to test the reliability of the methodology. The average detection results after 200 Monte Carlo runs of a 200-hyperplane hidden-layer RLSFN are given in terms of Sensitivity (males: 0.9946, females: 0.9942), Specificity (males: 0.9944, females: 0.9941) and Accuracy (males: 0.9945, females: 0.9942). The area under the ROC curve is 0.9947 (males) and 0.9945 (females). The equal error rate is 0.0054 (males) and 0.0057 (females). Conclusions: The proposed methodology shows that the use of highly normalized descriptors, such as the probability distribution of kinematic variables of vowel articulation stability, which has some interesting properties in terms of information theory, boosts the potential of simple yet powerful classifiers to produce quite acceptable detection results in Parkinson Disease.
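
    Since this record, like several others in this collection, reports equal error rates, a small illustration of how an EER is read off from classifier scores may help; the scores below are synthetic, not the RLSFN outputs from the study.

```python
import numpy as np

def equal_error_rate(scores_pos, scores_neg):
    """Equal error rate: operating point where the false-acceptance rate on
    negatives equals the false-rejection rate on positives."""
    thresholds = np.sort(np.concatenate([scores_pos, scores_neg]))
    far = np.array([(scores_neg >= t).mean() for t in thresholds])
    frr = np.array([(scores_pos < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0, thresholds[i]

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 200)     # synthetic classifier scores for controls
patients = rng.normal(2.5, 1.0, 200)    # higher score = more likely pathological
eer, thr = equal_error_rate(patients, healthy)
print(f"EER ~ {eer:.3f} at threshold {thr:.2f}")
```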

  2. High mitochondrial mutation rates estimated from deep-rooting Costa Rican pedigrees

    PubMed Central

    Madrigal, Lorena; Melendez-Obando, Mauricio; Villegas-Palma, Ramon; Barrantes, Ramiro; Raventos, Henrieta; Pereira, Reynaldo; Luiselli, Donata; Pettener, Davide; Barbujani, Guido

    2012-01-01

    Estimates of mutation rates for the noncoding hypervariable Region I (HVR-I) of mitochondrial DNA (mtDNA) vary widely, depending on whether they are inferred from phylogenies (assuming that molecular evolution is clock-like) or directly from pedigrees. All pedigree-based studies so far were conducted on populations of European origin. In this paper we analyzed 19 deep-rooting pedigrees in a population of mixed origin in Costa Rica. We calculated two estimates of the HVR-I mutation rate, one considering all apparent mutations, and one disregarding changes at sites known to be mutational hot spots and eliminating genealogy branches which might be suspected to include errors, or unrecognized adoptions along the female lines. At the end of this procedure, we still observed a mutation rate equal to 1.24 × 10−6 per site per year, i.e., at least three times as high as estimates derived from phylogenies. Our results confirm that mutation rates observed in pedigrees are much higher than estimated assuming a neutral model of long-term HVR-I evolution. We argue that, until the cause of these discrepancies is fully understood, both lower estimates (i.e., those derived from phylogenetic comparisons) and higher, direct estimates such as those obtained in this study, should be considered when modeling evolutionary and demographic processes. PMID:22460349

  3. Soil pH Errors Propagation from Measurements to Spatial Predictions - Cost Benefit Analysis and Risk Assessment Implications for Practitioners and Modelers

    NASA Astrophysics Data System (ADS)

    Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.

    2017-12-01

    The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of how errors and uncertainties affect cost-benefit analyses and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs Kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming an order of data acquisition based on the transaction distance, i.e., from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 of lime, which translates into a cost of 111 per hectare that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on prediction uncertainties (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into an investment of 555-1,111 per hectare that needs to be assessed against the risk. The modeling community can also benefit from such analyses; however, the error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and their impact on management decisions.

  4. On Several Fundamental Problems of Optimization, Estimation, and Scheduling in Wireless Communications

    NASA Astrophysics Data System (ADS)

    Gao, Qian

    For both conventional radio frequency and the comparatively recent optical wireless communication systems, extensive effort from academia has been made to improve network spectrum efficiency and/or reduce the error rate. To achieve these goals, many fundamental challenges such as power-efficient constellation design, nonlinear distortion mitigation, channel training design, and network scheduling need to be properly addressed. In this dissertation, novel schemes are proposed to deal with specific problems falling into these categories of challenges. Rigorous proofs and analyses are provided for each piece of work, with fair comparisons against the corresponding peer works to clearly demonstrate the advantages. The first part of this dissertation considers a multi-carrier optical wireless system employing intensity modulation (IM) and direct detection (DD). A block-wise constellation design is presented, which treats the DC bias, conventionally used solely for biasing purposes, as an information basis. Our scheme, termed MSM-JDCM, takes advantage of the compactness of sphere packing in a higher-dimensional space, and power-efficient constellations are obtained by solving an advanced convex optimization problem. Besides the significant power gains, the MSM-JDCM has many other merits, such as being capable of mitigating nonlinear distortion by including a peak-to-average power ratio (PAPR) constraint, minimizing inter-symbol interference (ISI) caused by frequency-selective fading with a novel precoder designed and embedded, and further reducing the bit error rate (BER) by combining with an optimized labeling scheme. The second part addresses several optimization problems in a multi-color visible light communication system, including power-efficient constellation design, joint pre-equalizer and constellation design, and modeling of different structured channels with crosstalk. Our novel constellation design scheme, termed CSK-Advanced, is compared with a conventional decoupled system with the same spectrum efficiency to demonstrate the power efficiency. Crucial lighting requirements are included as optimization constraints. To control nonlinear distortion, the optical peak-to-average power ratio (PAPR) of the LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve lower BER than counterparts applying zero-forcing (ZF) or linear minimum mean squared error (LMMSE) based post-equalizers. In addition, a binary switching algorithm (BSA) is applied to improve BER performance. The third part looks into a problem of two-phase channel estimation in a relayed wireless network. The channel estimates in every phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 affects the estimate of the source-to-relay (StR) channel in phase 2, making it erroneous. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) for both phase estimates in terms of the lengths of the source and relay training slots, based on which an iterative searching algorithm is proposed that optimally allocates training slots to the two phases such that the estimation errors are balanced. The analysis shows how the ABMSE of the StD channel estimation varies with the lengths of the relay and source training slots, the relay amplification gain, and the channel prior information, respectively. The last part deals with a transmission scheduling problem in an uplink multiple-input multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as the multiple access scheme and pseudo-random codes are employed for different users. We consider a heavy traffic scenario, in which each user always has packets to transmit in the scheduled time slots. If the relay is scheduled for transmission together with users, it operates in a full-duplex mode, where the packets previously collected from users are transmitted to the destination while new packets are being collected from users. A novel expression for the throughput is first derived and then used to develop a scheduling algorithm that maximizes the throughput. Our full-duplex scheduling is compared with half-duplex scheduling, random access, and time division multiple access (TDMA), and simulation results illustrate its superiority. Throughput gains due to the employment of both MIMO and CDMA are observed.

  5. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    PubMed

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  6. 40-Gb/s PAM4 with low-complexity equalizers for next-generation PON systems

    NASA Astrophysics Data System (ADS)

    Tang, Xizi; Zhou, Ji; Guo, Mengqi; Qi, Jia; Hu, Fan; Qiao, Yaojun; Lu, Yueming

    2018-01-01

    In this paper, we demonstrate 40-Gb/s four-level pulse amplitude modulation (PAM4) transmission with 10 GHz devices and low-complexity equalizers for next-generation passive optical network (PON) systems. A simple feed-forward equalizer (FFE) and decision feedback equalizer (DFE) enable 20 km fiber transmission, while a higher-complexity Volterra algorithm in combination with FFE and DFE can extend the transmission distance to 40 km. A simplified Volterra algorithm is proposed to reduce computational complexity. Simulation results show that the simplified Volterra algorithm reduces computational complexity by up to ∼75% at a relatively low cost of only 0.4 dB in power budget. At a forward error correction (FEC) threshold of 10⁻³, we achieve 31.2 dB and 30.8 dB power budgets over 40 km fiber transmission using the traditional FFE-DFE-Volterra and our simplified FFE-DFE-Volterra, respectively.

  7. Special cascade LMS equalization scheme suitable for 60-GHz RoF transmission system.

    PubMed

    Liu, Siming; Shen, Guansheng; Kou, Yanbin; Tian, Huiping

    2016-05-16

    We design a specific cascade least mean square (LMS) equalizer and, to the best of our knowledge, it is the first time this kind of equalizer has been employed for a 60-GHz millimeter-wave (mm-wave) radio-over-fiber (RoF) system. The proposed cascade LMS equalizer consists of two sub-equalizers designated for compensation of the optical and wireless channels, respectively. We control the linear and nonlinear impairments originating from the optical link and the wireless link separately. The cascade equalization scheme keeps the nonlinear distortions of the RoF system at a low level. We theoretically and experimentally investigate the parameters of the two sub-equalizers to reach their best performance. The experimental results show that the cascade equalization scheme has a faster convergence speed: it needs a training sequence of length 10000 to reach its stable state, which is only half as long as the traditional LMS equalizer needs. With the proposed equalizer, the 60-GHz RoF system can successfully transmit a 5-Gbps BPSK signal over a 10-km fiber and 1.2-m wireless link under the forward error correction (FEC) limit of 10⁻³. An improvement of 4 dBm and 1 dBm in power sensitivity at a BER of 10⁻³ over the traditional LMS equalizer is observed when the signals are transmitted through back-to-back (BTB) and 10-km fiber plus 1.2-m wireless links, respectively.
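
    As a reference point for the adaptation rule itself, a single-stage LMS FIR equalizer trained on a known sequence is sketched below; the cascade structure and the 60-GHz RoF link are not reproduced, and the channel taps, step size, and tap count are illustrative assumptions.

```python
import numpy as np

def lms_equalizer(rx, training, n_taps=11, mu=0.01):
    """Train a linear FIR equalizer with the LMS update rule on a known training sequence."""
    w = np.zeros(n_taps)
    for n in range(len(training) - n_taps):
        x = rx[n:n + n_taps][::-1]      # current and following received samples, newest last
        y = np.dot(w, x)
        e = training[n] - y             # error against the known transmitted symbol
        w += mu * e * x                 # LMS tap update
    return w

# Toy BPSK example: symbols through a mild ISI channel plus noise, then equalizer training.
rng = np.random.default_rng(2)
sym = (2 * rng.integers(0, 2, 5000) - 1).astype(float)
rx = np.convolve(sym, [1.0, 0.4, 0.2])[:len(sym)] + 0.05 * rng.normal(size=len(sym))
w = lms_equalizer(rx, sym)
print("trained taps:", np.round(w, 3))
```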

  8. Migration of Asteroidal Dust

    NASA Technical Reports Server (NTRS)

    Ipatov, S. I.; Mather, J. C.

    2003-01-01

    Using the Bulirsch-Stoer method of integration, we investigated the migration of dust particles under the gravitational influence of all planets, radiation pressure, Poynting-Robertson drag and solar wind drag for β (the ratio of the radiation pressure force to the gravitational force) equal to 0.01, 0.05, 0.1, 0.25, and 0.4. For silicate particles such values of β correspond to diameters of about 40, 9, 4, 2, and 1 microns, respectively [1]. The relative error per integration step was taken to be less than 10⁻⁸. Initial orbits of the particles were close to the orbits of the first numbered main-belt asteroids.

  9. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
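
    The core numerical step behind such dwell-time algorithms is a deconvolution with a non-negativity constraint; the toy one-dimensional sketch below illustrates only that constrained least-squares step. The tool influence function, target removal map, and grids are made up, and the paper's equal-extra-removal model and machine-dynamics constraints are not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

# 1-D toy illustration of dwell-time deconvolution with a non-negativity constraint:
# removal = influence_matrix @ dwell, solved in the least-squares sense with dwell >= 0.
x = np.linspace(-1.0, 1.0, 81)
influence = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2))  # removal per unit dwell
target = 0.5 + 0.3 * np.cos(np.pi * x)                                  # desired removal depth

dwell, residual = nnls(influence, target)
print("minimum dwell time:", dwell.min(), "fit residual:", residual)
```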

  10. Flow Rates Measurement and Uncertainty Analysis in Multiple-Zone Water-Injection Wells from Fluid Temperature Profiles

    PubMed Central

    Reges, José E. O.; Salazar, A. O.; Maitelli, Carla W. S. P.; Carvalho, Lucas G.; Britto, Ursula J. B.

    2016-01-01

    This work is a contribution to the development of flow sensors in the oil and gas industry. It presents a methodology to measure the flow rates into multiple-zone water-injection wells from fluid temperature profiles and estimate the measurement uncertainty. First, a method to iteratively calculate the zonal flow rates using the Ramey (exponential) model was described. Next, this model was linearized to perform an uncertainty analysis. Then, a computer program to calculate the injected flow rates from experimental temperature profiles was developed. In the experimental part, a fluid temperature profile from a dual-zone water-injection well located in the Northeast Brazilian region was collected. Thus, calculated and measured flow rates were compared. The results proved that linearization error is negligible for practical purposes and the relative uncertainty increases as the flow rate decreases. The calculated values from both the Ramey and linear models were very close to the measured flow rates, presenting a difference of only 4.58 m³/d and 2.38 m³/d, respectively. Finally, the measurement uncertainties from the Ramey and linear models were equal to 1.22% and 1.40% (for injection zone 1); 10.47% and 9.88% (for injection zone 2). Therefore, the methodology was successfully validated and all objectives of this work were achieved. PMID:27420068

  11. Decision-Making under Risk of Loss in Children

    PubMed Central

    Steelandt, Sophie; Broihanne, Marie-Hélène; Romain, Amélie; Thierry, Bernard; Dufour, Valérie

    2013-01-01

    In human adults, judgment errors are known to often lead to irrational decision-making in risky contexts. While these errors can affect the accuracy of profit evaluation, they may have once enhanced survival in dangerous contexts following a “better be safe than sorry” rule of thumb. Such a rule can be critical for children, and it could develop early on. Here, we investigated the rationality of choices and the possible occurrence of judgment errors in children aged 3 to 9 years when exposed to a risky trade. Children were allocated with a piece of cookie that they could either keep or risk in exchange of the content of one cup among 6, visible in front of them. In the cups, cookies could be of larger, equal or smaller sizes than the initial allocation. Chances of losing or winning were manipulated by presenting different combinations of cookie sizes in the cups (for example 3 large, 2 equal and 1 small cookie). We investigated the rationality of children's response using the theoretical models of Expected Utility Theory (EUT) and Cumulative Prospect Theory. Children aged 3 to 4 years old were unable to discriminate the profitability of exchanging in the different combinations. From 5 years, children were better at maximizing their benefit in each combination, their decisions were negatively induced by the probability of losing, and they exhibited a framing effect, a judgment error found in adults. Confronting data to the EUT indicated that children aged over 5 were risk-seekers but also revealed inconsistencies in their choices. According to a complementary model, the Cumulative Prospect Theory (CPT), they exhibited loss aversion, a pattern also found in adults. These findings confirm that adult-like judgment errors occur in children, which suggests that they possess a survival value. PMID:23349682

  12. Decision-making under risk of loss in children.

    PubMed

    Steelandt, Sophie; Broihanne, Marie-Hélène; Romain, Amélie; Thierry, Bernard; Dufour, Valérie

    2013-01-01

    In human adults, judgment errors are known to often lead to irrational decision-making in risky contexts. While these errors can affect the accuracy of profit evaluation, they may have once enhanced survival in dangerous contexts following a "better be safe than sorry" rule of thumb. Such a rule can be critical for children, and it could develop early on. Here, we investigated the rationality of choices and the possible occurrence of judgment errors in children aged 3 to 9 years when exposed to a risky trade. Children were allocated with a piece of cookie that they could either keep or risk in exchange of the content of one cup among 6, visible in front of them. In the cups, cookies could be of larger, equal or smaller sizes than the initial allocation. Chances of losing or winning were manipulated by presenting different combinations of cookie sizes in the cups (for example 3 large, 2 equal and 1 small cookie). We investigated the rationality of children's response using the theoretical models of Expected Utility Theory (EUT) and Cumulative Prospect Theory. Children aged 3 to 4 years old were unable to discriminate the profitability of exchanging in the different combinations. From 5 years, children were better at maximizing their benefit in each combination, their decisions were negatively induced by the probability of losing, and they exhibited a framing effect, a judgment error found in adults. Confronting data to the EUT indicated that children aged over 5 were risk-seekers but also revealed inconsistencies in their choices. According to a complementary model, the Cumulative Prospect Theory (CPT), they exhibited loss aversion, a pattern also found in adults. These findings confirm that adult-like judgment errors occur in children, which suggests that they possess a survival value.

  13. Effect of Oxygen-Supply Rates on Growth of Escherichia coli

    PubMed Central

    McDaniel, L. E.; Bailey, E. G.; Zimmerli, A.

    1965-01-01

    The effect of oxygen-supply rates on bacterial growth was studied in commercially available unbaffled and baffled flasks with the use of Escherichia coli in a synthetic medium as a test system. The amount of growth obtained depended on the oxygen-supply rate. Based on oxygen-absorption rates (OAR) measured by the rate of sulfite oxidation, equal OAR values in different types of flasks did not give equal amounts of growth. However, growth was essentially equal at the equal sulfite-oxidation rates when these were determined in the presence of killed whole cultures. Specific growth rates were reduced only at oxygen-supply rates much lower than those at which the total amount of growth was reduced. For the physical set-up used in this work and with the biological system employed, Bellco 598 flasks and flasks fitted with Biotech stainless-steel baffles gave satisfactory results at workable broth volumes; unbaffled and Bellco 600 flasks did not. PMID:14264837

  14. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
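
    A minimal sketch of two-level (nested) external cross-validation is shown below, using scikit-learn in place of the PAMR/R workflow referenced in the paper; the data are simulated and non-informative, so the outer estimate should sit near the 50% baseline error rate rather than the optimistically biased inner score.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Nested ("two-level external") cross-validation: the inner loop tunes the classifier,
# the outer loop estimates the error rate of the entire tuning procedure, so the
# reported error is not the optimistically selected inner score.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))        # non-informative, high-dimensional toy data
y = rng.integers(0, 2, 80)

inner = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=5)
outer_scores = cross_val_score(inner, X, y, cv=5)
print("unbiased error rate estimate:", 1.0 - outer_scores.mean())   # should sit near 0.5
```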

  15. Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.

    ERIC Educational Resources Information Center

    Anderson, Richard C.; And Others

    A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year to year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially rate of oral reading errors, and gains in reading…

  16. Estimating genotype error rates from high-coverage next-generation sequence data.

    PubMed

    Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil

    2014-11-01

    Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods. © 2014 Wall et al.; Published by Cold Spring Harbor Laboratory Press.

  17. Quantum-state comparison and discrimination

    NASA Astrophysics Data System (ADS)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2018-05-01

    We investigate the performance of discrimination strategy in the comparison task of known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is that one should infer the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to happen. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates the minimum-error and unambiguous comparison. We find that the discrimination strategy is not optimal except for the minimum-error case.
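
    As a point of reference for the discrimination strategy discussed here, the snippet below evaluates only the standard minimum-error (Helstrom) probability for discriminating two pure states with given priors; the comparison task itself and the error-margin interpolation from the paper are not modeled, and the example states are arbitrary.

```python
import numpy as np

def helstrom_error(psi1, psi2, p1=0.5):
    """Minimum-error probability for discriminating two pure states with priors p1 and 1-p1."""
    overlap = abs(np.vdot(psi1, psi2)) ** 2
    p2 = 1.0 - p1
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p1 * p2 * overlap))

psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])   # an arbitrary non-orthogonal pair
print("minimum-error probability:", helstrom_error(psi1, psi2))
```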

  18. An Analysis of U.S. Civil Rotorcraft Accidents by Cost and Injury (1990-1996)

    NASA Technical Reports Server (NTRS)

    Iseler, Laura; DeMaio, Joe; Rutkowski, Michael (Technical Monitor)

    2002-01-01

    A study of rotorcraft accidents was conducted to identify safety issues and research areas that might lead to a reduction in rotorcraft accidents and fatalities. The primary source of data was summaries of National Transportation Safety Board (NTSB) accident reports. From 1990 to 1996, the NTSB documented 1396 civil rotorcraft accidents in the United States in which 491 people were killed. The rotorcraft data were compared to airline and general aviation data to determine the relative safety of rotorcraft compared to other segments of the aviation industry. In-depth analysis of the rotorcraft data addressed demographics, mission, and operational factors. Rotorcraft were found to have an accident rate about ten times that of commercial airliners and about the same as that of general aviation. The likelihood that an accident would be fatal was about equal for all three classes of operation. The most dramatic division in rotorcraft accidents is between flights flown by private pilots and those flown by professional pilots. Private pilots, flying low-cost aircraft in benign environments, have accidents that are due, in large part, to their own errors. Professional pilots, in contrast, are more likely to have accidents that are a result of exacting missions or use of specialized equipment. For both groups, judgement errors are more likely to lead to a fatal accident than are other types of causes. Several approaches to improving the rotorcraft accident rate are recommended. These mostly address improving the training of new pilots and raising the safety awareness of private pilots.

  19. Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.

    PubMed

    Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun

    2017-10-01

    To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple comparison correction. The dual-input two-compartment model assuming venous flow equals arterial flow plus portal venous flow and no bile duct output better described the liver tissue enhancement with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was found to be significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function by proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  20. Performance of electrolyte measurements assessed by a trueness verification program.

    PubMed

    Ge, Menglei; Zhao, Haijian; Yan, Ying; Zhang, Tianjiao; Zeng, Jie; Zhou, Weiyan; Wang, Yufei; Meng, Qinghui; Zhang, Chuanbao

    2016-08-01

    In this study, we analyzed frozen sera with known commutabilities for standardization of serum electrolyte measurements in China. Fresh frozen sera were sent to 187 clinical laboratories in China for measurement of four electrolytes (sodium, potassium, calcium, and magnesium). Target values were assigned by two reference laboratories. Precision (CV), trueness (bias), and accuracy [total error (TEa)] were used to evaluate measurement performance, and the tolerance limit derived from the biological variation was used as the evaluation criterion. About half of the laboratories used a homogeneous system (same manufacturer for instrument, reagent and calibrator) for calcium and magnesium measurement, and more than 80% of laboratories used a homogeneous system for sodium and potassium measurement. More laboratories met the tolerance limit for imprecision (coefficient of variation [CVa]) than the tolerance limits for trueness (biasa) and TEa. For sodium, calcium, and magnesium, the minimal performance criterion derived from biological variation was used, and the pass rates for total error were approximately equal to those for bias (<50%). For potassium, the pass rates for CV and TE were more than 90%. Compared with the non-homogeneous systems, the homogeneous systems were superior for all three quality specifications. The use of commutable proficiency testing/external quality assessment (PT/EQA) samples with values assigned by reference methods can monitor performance and provide reliable data for improving the performance of laboratory electrolyte measurement. The homogeneous systems were superior to the non-homogeneous systems, whereas accuracy of assigned values of calibrators and assay stability remained challenges.
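
    For readers unfamiliar with the quality specifications involved, the snippet below shows one common way a total-error check against a biological-variation tolerance limit can be computed; the formula TE = |bias| + 1.645 × CV is a widely used convention rather than necessarily the study's exact method, and all numbers are invented.

```python
# Illustrative total-error check against a biological-variation tolerance limit.
# TE = |bias| + 1.645 * CV is a widely used convention (not necessarily the
# study's exact formula), and all numbers below are invented.
def meets_tea(bias_pct, cv_pct, tea_limit_pct, z=1.645):
    """Return True if the allowable total error (TEa) criterion is met."""
    total_error = abs(bias_pct) + z * cv_pct
    return total_error <= tea_limit_pct

# e.g. a laboratory measuring potassium with 1.0% bias and 1.5% CV against a 5.6% TEa limit
print(meets_tea(bias_pct=1.0, cv_pct=1.5, tea_limit_pct=5.6))
```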

  1. 50 Gb/s NRZ and 4-PAM data transmission over OM5 fiber in the SWDM wavelength range

    NASA Astrophysics Data System (ADS)

    Agustin, M.; Ledentsov, N.; Kropp, J.-R.; Shchukin, V. A.; Kalosha, V. P.; Chi, K. L.; Khan, Z.; Shi, J. W.; Ledentsov, N. N.

    2018-02-01

    The development of advanced OM5 wideband multimode fiber (WBMMF), which provides high modal bandwidth in the 840-950 nm spectral range, motivates research in vertical-cavity surface-emitting lasers (VCSELs) at wavelengths beyond those previously accepted for short-reach communications. Thus, short wavelength division multiplexing (SWDM) solutions can be implemented as a strategy to satisfy the increasing data rate demand in datacenter environments. As an alternative to 850 nm parallel links, four wavelengths with 30 nm separation between 850 nm and 940 nm can be multiplexed on a single OM5 MMF, so the number of fibers deployed is reduced by a factor of four. In this paper, high-speed transmission is studied for VCSELs in the 850-950 nm range. The devices had a modulation bandwidth of 26-28 GHz. 50 Gb/s non-return-to-zero (NRZ) operation is demonstrated at each wavelength without preemphasis or equalization, with a bit error rate (BER) below the 7% forward error correction (FEC) threshold. Furthermore, the use of single-mode VCSELs (SM-VCSELs) to mitigate the effects of chromatic dispersion and extend the maximum transmission distance over OM5 is explored. An analysis of loss as a function of wavelength in OM5 fiber is also performed; a significant decrease is observed, from 2.2 dB/km to less than 1.7 dB/km at a VCSEL wavelength of 910 nm.

  2. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  3. Computer calculated dose in paediatric prescribing.

    PubMed

    Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C

    2005-01-01

    Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined as to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. The error rate in the children's emergency department was 15.7%, for outpatients was 21.5% and for discharge medication was 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6% compared with the traditional prescription error rate of 28.2%. Logistical regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were seniority and paediatric training of the person prescribing and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.

  4. Angular rate optimal design for the rotary strapdown inertial navigation system.

    PubMed

    Yu, Fei; Sun, Qian

    2014-04-22

    Owing to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotation scheme, has been studied by numerous researchers. It is well known that, as one of the key design parameters, the rotation angular rate strongly influences the effectiveness of the error modulation. In order to design the optimal rotation angular rate of the RSINS, the relationship between the rotation angular rate and the velocity error of the RSINS is analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis shows that the velocity error of the RSINS depends not only on the sensor error, but also on the rotation angular rate. To minimize the velocity error, the rotation angular rate of the RSINS should match the sensor error. An optimal design method for the rotation rate of the RSINS is also proposed. Simulation and experimental results verify the validity and superiority of this optimal design method.

  5. Comparison of Meropenem MICs and Susceptibilities for Carbapenemase-Producing Klebsiella pneumoniae Isolates by Various Testing Methods▿

    PubMed Central

    Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.

    2010-01-01

    We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603

  6. The effectiveness of the error reporting promoting program on the nursing error incidence rate in Korean operating rooms.

    PubMed

    Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung

    2007-03-01

    The purpose of this study was to develop and evaluate an error reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group, non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U Test was used to analyze the differences between pre- and post-test incidence rates of nursing errors in the two groups. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% at pre-test to 15.7% at post-test. By domain, the incidence rate decreased significantly in three domains ("compliance with aseptic technique", "document management", and "environmental management") in the experimental group, while it merely decreased in the control group, which used the ordinary error-reporting method. An error-reporting system makes it possible to share errors and to learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, risk management efforts across the whole health care system should be applied together with this program.

  7. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection

    PubMed Central

    Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-01-01

    Background The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474
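
    A toy re-creation of the error-rate and trend analysis described above is sketched below; the data are simulated (not the Liberia survey records), the 11 detectable errors per survey come from the abstract, and the regression mirrors only the general logistic-trend idea.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in for the error-rate trend analysis (not the Liberia survey data):
# each survey allows up to 11 detectable errors, and the per-error odds decline with
# days of application use; a binomial GLM then recovers the trend.
rng = np.random.default_rng(3)
days = rng.integers(0, 46, 2000)
p_err = 1.0 / (1.0 + np.exp(3.7 + 0.03 * days))          # assumed true error probability
errors = rng.binomial(11, p_err)

X = sm.add_constant(days.astype(float))
fit = sm.GLM(np.column_stack([errors, 11 - errors]), X,
             family=sm.families.Binomial()).fit()
print("aggregate error rate:", errors.sum() / (11 * len(days)))
print("odds ratio per day of use:", np.exp(fit.params[1]))
```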

  8. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.

    PubMed

    Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-08-18

    The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.

  9. Chemistry of groundwater discharge inferred from longitudinal river sampling

    NASA Astrophysics Data System (ADS)

    Batlle-Aguilar, J.; Harrington, G. A.; Leblanc, M.; Welch, C.; Cook, P. G.

    2014-02-01

    We present an approach for identifying groundwater discharge chemistry and quantifying spatially distributed groundwater discharge into rivers based on longitudinal synoptic sampling and flow gauging of a river. The method is demonstrated using a 450 km reach of a tropical river in Australia. Results obtained from sampling for environmental tracers, major ions, and selected trace element chemistry were used to calibrate a steady state one-dimensional advective transport model of tracer distribution along the river. The model closely reproduced river discharge and environmental tracer and chemistry composition along the study length. It provided a detailed longitudinal profile of groundwater inflow chemistry and discharge rates, revealing that regional fractured mudstones in the central part of the catchment contributed up to 40% of all groundwater discharge. Detailed analysis of model calibration errors and modeled/measured groundwater ion ratios elucidated that groundwater discharging in the top of the catchment is a mixture of local groundwater and bank storage return flow, making the method potentially useful to differentiate between local and regional sourced groundwater discharge. As the error in tracer concentration induced by a flow event applies equally to any conservative tracer, we show that major ion ratios can still be resolved with minimal error when river samples are collected during transient flow conditions. The ability of the method to infer groundwater inflow chemistry from longitudinal river sampling is particularly attractive in remote areas where access to groundwater is limited or not possible, and for identification of actual fluxes of salts and/or specific contaminant sources.
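    The mass-balance reasoning behind this approach can be illustrated for a single river reach: with gauged flow and a conservative-tracer concentration at both ends of the reach, the flow gain gives the net groundwater inflow and the tracer budget gives its concentration. This is a simplified steady-state sketch with invented numbers, not the authors' calibrated advective transport model.

    ```python
    # Steady-state conservative-tracer mass balance over one river reach (illustrative values).
    Q_up, Q_down = 12.0, 15.0    # river discharge upstream / downstream of the reach (m^3/s)
    C_up, C_down = 250.0, 310.0  # tracer (e.g., chloride) concentration (mg/L)

    q_gw = Q_down - Q_up         # net groundwater inflow, assuming no other gains or losses (m^3/s)
    # Tracer budget: Q_down * C_down = Q_up * C_up + q_gw * C_gw  ->  solve for C_gw
    C_gw = (Q_down * C_down - Q_up * C_up) / q_gw
    print(f"groundwater inflow: {q_gw:.1f} m^3/s at about {C_gw:.0f} mg/L")
    ```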

  10. Custom map projections for regional groundwater models

    USGS Publications Warehouse

    Kuniansky, Eve L.

    2017-01-01

    For regional groundwater flow models (areas greater than 100,000 km2), improper choice of map projection parameters can result in model error for boundary conditions dependent on area (recharge or evapotranspiration simulated by application of a rate using cell area from model discretization) and length (rivers simulated with a head-dependent flux boundary). Smaller model areas can use local map coordinates, such as State Plane (United States) or Universal Transverse Mercator (correct zone), without introducing large errors. Map projections vary in order to preserve one or more of the following properties: area, shape, distance (length), or direction. Numerous map projections have been developed for different purposes, as all four properties cannot be preserved simultaneously. Preservation of area and length is most critical for groundwater models. The Albers equal-area conic projection with custom standard parallels, selected by dividing the north-to-south extent by 6 and placing the standard parallels one-sixth of that length above the southern limit and one-sixth below the northern limit, preserves both area and length for continental areas in mid latitudes oriented east-west. Custom map projection parameters can also minimize area and length error in non-ideal projections. Additionally, one must use consistent vertical and horizontal datums for all geographic data. The generalized polygon for the Floridan aquifer system study area (306,247.59 km2) is used to provide quantitative examples of the effect of map projections on length and area with different projections and parameter choices. Use of an improper map projection is one model construction problem that is easily avoided.
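    The one-sixth rule for the custom standard parallels translates directly into code: take the north-south latitude extent of the model domain and place the parallels one-sixth of that extent inside the southern and northern limits. The latitude limits below are illustrative, not the Floridan aquifer values.

    ```python
    def albers_standard_parallels(lat_south: float, lat_north: float) -> tuple[float, float]:
        """Custom standard parallels by the 1/6 rule: one-sixth of the
        north-south extent inside the southern and northern limits."""
        span = lat_north - lat_south
        return lat_south + span / 6.0, lat_north - span / 6.0

    # Illustrative model domain spanning 24N to 36N
    print(albers_standard_parallels(24.0, 36.0))   # (26.0, 34.0)
    ```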

  11. Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.

    PubMed

    Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen

    2016-07-27

    Classical dictionary learning methods for video coding suffer from high computational complexity and degraded coding efficiency because they disregard the underlying data distribution. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee on the approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the dictionary atoms by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods such as K-SVD do. Since the selected volumes are supposed to be i.i.d. samples from the underlying distribution, decomposition coefficients attained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL achieves better approximation for sparse representation than K-SVD and maintains both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in convergence speed and computational complexity, and its upper bound for prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC or HEVC as well as existing super-resolution-based methods in rate-distortion performance and visual quality.

  12. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
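    The Bayesian compositing step can be sketched as a weighted average over the cloud-radiative model database: each simulated profile is weighted by how well its simulated brightness temperatures match the observed radiances, and the retrieval is the weighted mean of the database rain rates. The Gaussian misfit model and all arrays below are illustrative assumptions, not the operational algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_db, n_chan = 5000, 9
    tb_db = rng.normal(230.0, 25.0, size=(n_db, n_chan))    # simulated brightness temperatures (K)
    rain_db = rng.gamma(shape=1.2, scale=2.0, size=n_db)    # rain rate of each database profile (mm/h)
    tb_obs = tb_db[123] + rng.normal(0.0, 2.0, size=n_chan) # one synthetic observation vector

    sigma_tb = 2.5                                          # assumed observation-plus-modeling error (K)
    misfit = np.sum((tb_db - tb_obs) ** 2, axis=1) / (2.0 * sigma_tb ** 2)
    weights = np.exp(-(misfit - misfit.min()))              # Gaussian weights, shifted for numerical stability
    weights /= weights.sum()

    rain_estimate = np.sum(weights * rain_db)               # composited (posterior-mean) rain rate
    print(f"retrieved rain rate: {rain_estimate:.2f} mm/h")
    ```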

  13. Amorphous Silicon p-i-n Structure Acting as Light and Temperature Sensor

    PubMed Central

    de Cesare, Giampiero; Nascetti, Augusto; Caputo, Domenico

    2015-01-01

    In this work, we propose a multi-parametric sensor able to measure both temperature and radiation intensity, suitable to increase the level of integration and miniaturization in Lab-on-Chip applications. The device is based on amorphous silicon p-doped/intrinsic/n-doped thin film junction. The device is first characterized as radiation and temperature sensor independently. We found a maximum value of responsivity equal to 350 mA/W at 510 nm and temperature sensitivity equal to 3.2 mV/K. We then investigated the effects of the temperature variation on light intensity measurement and of the light intensity variation on the accuracy of the temperature measurement. We found that the temperature variation induces an error lower than 0.55 pW/K in the light intensity measurement at 550 nm when the diode is biased in short circuit condition, while an error below 1 K/µW results in the temperature measurement when a forward bias current higher than 25 µA/cm2 is applied. PMID:26016913
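    The two quoted sensitivities act as simple conversion factors for the sensor readout: photocurrent over responsivity gives incident optical power, and the forward-voltage shift over the thermal sensitivity gives the temperature change. A brief sketch using the figures from the abstract; the example readings are invented.

    ```python
    RESPONSIVITY = 0.350        # A/W at 510 nm, from the abstract
    TEMP_SENSITIVITY = 3.2e-3   # V/K, from the abstract

    def optical_power_from_current(photocurrent_a: float) -> float:
        """Incident optical power (W) from the measured photocurrent (A)."""
        return photocurrent_a / RESPONSIVITY

    def temperature_shift_from_voltage(delta_v: float) -> float:
        """Temperature change (K) from the measured forward-voltage shift (V)."""
        return delta_v / TEMP_SENSITIVITY

    print(optical_power_from_current(70e-9))       # 70 nA of photocurrent -> 2e-7 W (200 nW)
    print(temperature_shift_from_voltage(6.4e-3))  # 6.4 mV shift -> 2.0 K
    ```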

  14. Compliance Monitoring of Juvenile Subyearling Chinook Salmon Survival and Passage at The Dalles Dam, Summer 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Gary E.; Carlson, Thomas J.; Skalski, John R.

    2010-12-21

    The purpose of this compliance study was to estimate dam passage survival of subyearling Chinook salmon smolts at The Dalles Dam during summer 2010. Under the 2008 Federal Columbia River Power System (FCRPS) Biological Opinion (BiOp), dam passage survival should be greater than or equal to 0.93 and estimated with a standard error (SE) less than or equal to 0.015. The study also estimated smolt passage survival from the forebay 2 km upstream of the dam to the tailrace 2 km below the dam, as well as the forebay residence time, tailrace egress time, and spill passage efficiency, as required in the Columbia Basin Fish Accords. The forebay-to-tailrace survival estimate satisfies the “BRZ-to-BRZ” survival estimate called for in the Fish Accords. The estimate of dam survival for subyearling Chinook salmon at The Dalles in 2010 was 0.9404 with an associated standard error of 0.0091.

  15. Flexible wavelength de-multiplexer for elastic optical networking.

    PubMed

    Zhou, Rui; Gutierrez Pascual, M Deseada; Anandarajah, Prince M; Shao, Tong; Smyth, Frank; Barry, Liam P

    2016-05-15

    We report an injection locked flexible wavelength de-multiplexer (de-mux) that shows 24-h frequency stability of 1 kHz for optical comb-based elastic optical networking applications. We demonstrate 50 GHz, 87.5 GHz equal spacing and 6.25G-25G-50 GHz, 75G-50G-100 GHz unequal spacing for the de-multiplexer outputs. We also implement an unequally spaced (75G-50G-100 GHz), mixed symbol rate (12.5 GBaud and 40 GBaud) and modulation format (polarization division multiplexed quadrature phase shift keying and on-off keying) wavelength division multiplexed transmission system using the de-multiplexer outputs. The results show 0.6 dB receiver sensitivity penalty, at 7% hard decision forward error correction coding limit, of the 100 km transmitted de-mux outputs when compared to comb source seeding laser back-to-back.

  16. High-density near-field optical disc recording using phase change media and polycarbonate substrate

    NASA Astrophysics Data System (ADS)

    Shinoda, Masataka; Saito, Kimihiro; Ishimoto, Tsutomu; Kondo, Takao; Nakaoki, Ariyoshi; Furuki, Motohiro; Takeda, Minoru; Akiyama, Yuji; Shimouma, Takashi; Yamamoto, Masanobu

    2004-09-01

    We developed a high-density near-field optical recording disc system with a solid immersion lens and two laser sources. In order to realize near-field optical recording, we used a phase change recording medium and a molded polycarbonate substrate. The near-field optical pick-up consists of a solid immersion lens with a numerical aperture of 1.84. A clear eye pattern at 90.2 GB capacity (160 nm track pitch and 62 nm per bit) was observed. The jitter using a limit equalizer was 10.0% without cross-talk. The bit error rate using an adaptive PRML with 8 taps was 3.7e-6 without cross-talk. We confirmed that the near-field optical disc system is a promising technology for a next-generation high-density optical disc system.

  17. SOPRA: Scaffolding algorithm for paired reads via statistical optimization.

    PubMed

    Dayarian, Adel; Michael, Todd P; Sengupta, Anirvan M

    2010-06-24

    High throughput sequencing (HTS) platforms produce gigabases of short read (<100 bp) data per run. While these short reads are adequate for resequencing applications, de novo assembly of moderate size genomes from such reads remains a significant challenge. These limitations could be partially overcome by utilizing mate pair technology, which provides pairs of short reads separated by a known distance along the genome. We have developed SOPRA, a tool designed to exploit the mate pair/paired-end information for assembly of short reads. The main focus of the algorithm is selecting a sufficiently large subset of simultaneously satisfiable mate pair constraints to achieve a balance between the size and the quality of the output scaffolds. Scaffold assembly is presented as an optimization problem for variables associated with vertices and with edges of the contig connectivity graph. Vertices of this graph are individual contigs with edges drawn between contigs connected by mate pairs. Similar graph problems have been invoked in the context of shotgun sequencing and scaffold building for previous generation of sequencing projects. However, given the error-prone nature of HTS data and the fundamental limitations from the shortness of the reads, the ad hoc greedy algorithms used in the earlier studies are likely to lead to poor quality results in the current context. SOPRA circumvents this problem by treating all the constraints on equal footing for solving the optimization problem, the solution itself indicating the problematic constraints (chimeric/repetitive contigs, etc.) to be removed. The process of solving and removing of constraints is iterated till one reaches a core set of consistent constraints. For SOLiD sequencer data, SOPRA uses a dynamic programming approach to robustly translate the color-space assembly to base-space. For assessing the quality of an assembly, we report the no-match/mismatch error rate as well as the rates of various rearrangement errors. Applying SOPRA to real data from bacterial genomes, we were able to assemble contigs into scaffolds of significant length (N50 up to 200 Kb) with very few errors introduced in the process. In general, the methodology presented here will allow better scaffold assemblies of any type of mate pair sequencing data.

  18. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  19. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    PubMed

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
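    The detection and reporting rates quoted above are proportions per 1000 with normal-approximation confidence intervals; the sketch below reproduces that arithmetic. The numerators are back-calculated from the quoted rates and denominator (n = 539), so they are approximate.

    ```python
    import math

    def rate_per_1000(events: int, total: int, z: float = 1.96):
        """Rate per 1000 opportunities with a normal-approximation 95% CI."""
        p = events / total
        se = math.sqrt(p * (1 - p) / total)
        return 1000 * p, 1000 * (p - z * se), 1000 * (p + z * se)

    # Clinically important prescribing errors: detected by staff vs. reported as incidents
    print(rate_per_1000(118, 539))  # ~ (218.9, 184.0, 253.8) per 1000
    print(rate_per_1000(7, 539))    # ~ (13.0, 3.4, 22.5) per 1000
    ```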

  20. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
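    A sketch of the observer-level bookkeeping implied above: each observer's false positive rate is the share of their recorded detections that were wrong, and among-observer variation can be checked against a pretest score with a simple correlation. The counts, scores, and their relationship below are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_observers = 31
    detections = rng.integers(1500, 2500, size=n_observers)       # recorded detections per observer (hypothetical)
    false_pos = rng.binomial(detections, 0.081)                    # false positive detections per observer

    fp_rate = false_pos / detections                               # per-observer false positive proportion
    pretest = 100 * (1 - fp_rate) + rng.normal(0, 3, n_observers)  # hypothetical online-test score

    print(f"mean false positive rate: {fp_rate.mean():.1%}")
    print(f"correlation(pretest score, FP rate): {np.corrcoef(pretest, fp_rate)[0, 1]:.2f}")
    ```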

  1. Coagulation Function of Stored Whole Blood is Preserved for 14 Days in Austere Conditions: A ROTEM Feasibility Study During a Norwegian Antipiracy Mission and Comparison to Equal Ratio Reconstituted Blood

    DTIC Science & Technology

    2015-06-24

    mechanical piston movements measured by the ROTEM device. Error messages were recorded in 4 (1.5%) of 267 tests. CWB yielded reproducible ROTEM results... piston movement analysis, error message frequency, and result variability and (2) compare the clotting properties of cold-stored WB obtained from a walking...signed the selection form, which tracked TTD screening and blood grouping results. That same form doubled as a transfusion form and was used to

  2. Lossless Brownian Information Engine

    NASA Astrophysics Data System (ADS)

    Paneru, Govind; Lee, Dong Yun; Tlusty, Tsvi; Pak, Hyuk Kyu

    2018-01-01

    We report on a lossless information engine that converts nearly all available information from an error-free feedback protocol into mechanical work. Combining high-precision detection at a resolution of 1 nm with ultrafast feedback control, the engine is tuned to extract the maximum work from information on the position of a Brownian particle. We show that the work produced by the engine achieves a bound set by a generalized second law of thermodynamics, demonstrating for the first time the sharpness of this bound. We validate a generalized Jarzynski equality for error-free feedback-controlled information engines.
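    For context, the generalized second law referred to here is usually written in the Sagawa-Ueda form below, with W the work done on the particle, ΔF the free-energy change over the protocol, I the information gained by the measurement, and β = 1/k_BT; the bound on extracted work follows from Jensen's inequality. This is the standard textbook statement, quoted here as background rather than taken from the paper.

    ```latex
    \left\langle e^{-\beta (W - \Delta F) - I} \right\rangle = 1
    \quad\Longrightarrow\quad
    \langle W_{\mathrm{ext}} \rangle = -\langle W \rangle \le k_B T \langle I \rangle - \Delta F
    ```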

  3. Lossless Brownian Information Engine.

    PubMed

    Paneru, Govind; Lee, Dong Yun; Tlusty, Tsvi; Pak, Hyuk Kyu

    2018-01-12

    We report on a lossless information engine that converts nearly all available information from an error-free feedback protocol into mechanical work. Combining high-precision detection at a resolution of 1 nm with ultrafast feedback control, the engine is tuned to extract the maximum work from information on the position of a Brownian particle. We show that the work produced by the engine achieves a bound set by a generalized second law of thermodynamics, demonstrating for the first time the sharpness of this bound. We validate a generalized Jarzynski equality for error-free feedback-controlled information engines.

  4. Can the impact of gender equality on health be measured? a cross-sectional study comparing measures based on register data with individual survey-based data

    PubMed Central

    2012-01-01

    Background The aim of this study was to investigate potential associations between gender equality at work and self-rated health. Methods 2861 employees in 21 companies were invited to participate in a survey. The mean response rate was 49.2%. The questionnaire contained 65 questions, mainly on gender equality and health. Two logistic regression analyses were conducted to assess associations between (i) self-rated health and a register-based company gender equality index (OGGI), and (ii) self-rated health and self-rated gender equality at work. Results Even though no association was found between the OGGI and health, women who rated their company as “completely equal” or “quite equal” had higher odds of reporting “good health” compared to women who perceived their company as “not equal” (OR = 2.8, 95% confidence interval = 1.4 – 5.5 and OR = 2.73, 95% CI = 1.6-4.6). Although not statistically significant, we observed the same trends in men. The results were adjusted for age, highest education level, income, full or part-time employment, and type of company based on the OGGI. Conclusions No association was found between gender equality in companies, measured by register-based index (OGGI), and health. However, perceived gender equality at work positively affected women’s self-rated health but not men’s. Further investigations are necessary to determine whether the results are fully credible given the contemporary health patterns and positions in the labour market of women and men or whether the results are driven by selection patterns. PMID:22985388

  5. PREVALENCE OF REFRACTIVE ERRORS IN MADRASSA STUDENTS OF HARIPUR DISTRICT.

    PubMed

    Atta, Zoia; Arif, Abdus Salam; Ahmed, Iftikhar; Farooq, Umer

    2015-01-01

    Visual impairment due to refractive errors is one of the most common problems among school-age children and is the second leading cause of treatable blindness. The Right to Sight, a global initiative launched by a coalition of non-government organizations and the World Health Organization (WHO), aims to eliminate avoidable visual impairment and blindness at a global level. In order to achieve this goal it is important to know the prevalence of different refractive errors in a community. Children and teenagers are the groups most susceptible to refractive errors, so this population needs to be screened for the different types of refractive error. The objective of this study was to find the frequency of different types of refractive errors in madrassa students aged 5-20 years in Haripur. This cross-sectional study included 300 students aged 5-20 years in madrassas of Haripur. The students were screened for refractive errors and the types of errors were noted; after screening, glasses were prescribed to the students. Myopia (52.6%) was the most frequent refractive error, followed by hyperopia (28.4%) and astigmatism (19%). This study showed that myopia is an important problem in the madrassa population. Females and males are almost equally affected. Spectacle correction of refractive errors is the cheapest and easiest solution to this problem.

  6. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
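    The headline comparison (0.03% vs. 0.07% reported error rates for IMRT vs. 3D/conventional RT) is a 2x2 contingency-table test. A sketch with scipy is below; the abstract gives only the overall totals, so the per-technique fraction counts are invented to roughly reproduce the reported rates.

    ```python
    from scipy.stats import fisher_exact

    # Illustrative 2x2 table: [errors, error-free fractions] for each technique.
    imrt = [19, 63_000 - 19]        # ~0.03% error rate (fraction count assumed)
    conv = [136, 178_000 - 136]     # ~0.07% error rate (fraction count assumed)

    odds_ratio, p_value = fisher_exact([imrt, conv])
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
    ```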

  7. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error ratemore » measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. The six enzymes included in the study, Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase, we find the lowest error rates with Pfu , Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.« less

  8. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a mapping process based on GIS with information about the current weather status at certain coordinates of each region and the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error value for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error rate, the more optimized the accuracy is.
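    Two ingredients named above, the BMA combination of member forecasts and the mean-square-error score, can be sketched as follows. The member forecasts, weights, and observations are synthetic, and the weight-fitting step of a real BMA implementation (typically done with EM) is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    obs = rng.normal(24.0, 2.0, size=30)                 # observed minimum temperature, 30 days (synthetic)
    members = obs + rng.normal(0.0, 1.0, size=(3, 30))   # three member forecasts with noise

    weights = np.array([0.5, 0.3, 0.2])                  # BMA weights (assumed already fitted)
    bma_mean = weights @ members                         # weighted-average (BMA mean) forecast

    mse = np.mean((bma_mean - obs) ** 2)                 # forecasting error as mean square error
    print(f"MSE of the BMA mean forecast: {mse:.3f}")
    ```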

  9. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
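    The inflation being described is easy to reproduce in miniature: simulate a rare variant and a skewed (gamma) trait under the null of no association, test with simple linear regression, and count how often the p-value falls below the nominal level. Sample size, minor allele frequency, and replicate count below are arbitrary choices.

    ```python
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(4)
    n, maf, reps, alpha = 1000, 0.01, 2000, 0.05

    hits = 0
    for _ in range(reps):
        genotype = rng.binomial(2, maf, size=n)           # rare SNV, additive 0/1/2 coding
        trait = rng.gamma(shape=1.0, scale=1.0, size=n)   # skewed trait, independent of genotype (null)
        if genotype.any():                                # skip the rare monomorphic draw
            hits += linregress(genotype, trait).pvalue < alpha

    print(f"empirical type I error at alpha={alpha}: {hits / reps:.3f}")
    ```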

  10. Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Fisher, Brad L.; Wolff, David B.

    2007-01-01

    This paper describes the cubic spline based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to the TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured from the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one minute rain rates are averaged to 4-7 minute or longer time scales, the errors dramatically reduce. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that the events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at 7-minute scale. The radar reflectivity-rainrate (Ze-R) distributions drawn from large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
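    The core of the spline-based processing described here is to fit a cubic spline to the cumulative rainfall implied by the tip times and differentiate it to obtain a rain-rate time series. A hedged sketch with scipy follows; the tip times are invented and 0.254 mm per tip is a commonly used bucket size, not necessarily the one in the study.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    tip_times = np.array([0.0, 3.2, 5.1, 6.4, 7.2, 8.5, 10.9, 14.8])  # tip times (minutes), illustrative
    mm_per_tip = 0.254
    cum_rain = mm_per_tip * np.arange(1, tip_times.size + 1)          # cumulative rainfall at each tip (mm)

    spline = CubicSpline(tip_times, cum_rain)        # cumulative rain vs. time
    rate = spline.derivative()                       # its derivative is the instantaneous rain rate (mm/min)

    minutes = np.arange(0.0, 15.0, 1.0)
    one_min_rates = 60.0 * np.clip(rate(minutes), 0.0, None)          # one-minute rain rates, mm/h
    print(np.round(one_min_rates, 2))
    ```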

  11. Dependence of the compensation error on the error of a sensor and corrector in an adaptive optics phase-conjugating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiyko, V V; Kislov, V I; Ofitserov, E N

    2015-08-31

    In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because it is reasonable, when adjusting the AOS parameters, to proceed from the equality of the sensor and corrector errors, in the case where a Hartmann sensor is used as the wavefront sensor the required number of actuators in the absence of the noise component in the sensor error turns out to be 1.5-2.5 times less than the number of counts, and that difference grows with increasing measurement noise. (adaptive optics)

  12. Bio-Optical Data Assimilation With Observational Error Covariance Derived From an Ensemble of Satellite Images

    NASA Astrophysics Data System (ADS)

    Shulman, Igor; Gould, Richard W.; Frolov, Sergey; McCarthy, Sean; Penta, Brad; Anderson, Stephanie; Sakalaukus, Peter

    2018-03-01

    An ensemble-based approach to specify observational error covariance in the data assimilation of satellite bio-optical properties is proposed. The observational error covariance is derived from statistical properties of the generated ensemble of satellite MODIS-Aqua chlorophyll (Chl) images. The proposed observational error covariance is used in the Optimal Interpolation scheme for the assimilation of MODIS-Aqua Chl observations. The forecast error covariance is specified in the subspace of the multivariate (bio-optical, physical) empirical orthogonal functions (EOFs) estimated from a month-long model run. The assimilation of surface MODIS-Aqua Chl improved surface and subsurface model Chl predictions. Comparisons with surface and subsurface water samples demonstrate that data assimilation run with the proposed observational error covariance has higher RMSE than the data assimilation run with "optimistic" assumption about observational errors (10% of the ensemble mean), but has smaller or comparable RMSE than data assimilation run with an assumption that observational errors equal to 35% of the ensemble mean (the target error for satellite data product for chlorophyll). Also, with the assimilation of the MODIS-Aqua Chl data, the RMSE between observed and model-predicted fractions of diatoms to the total phytoplankton is reduced by a factor of two in comparison to the nonassimilative run.
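    The update step itself is a standard Optimal Interpolation (OI) analysis; the novelty described above lies in building the observational error covariance R from an ensemble of satellite images and projecting the forecast covariance onto multivariate EOFs. A generic OI sketch in that spirit is below, with all matrices synthetic and R estimated from an ensemble of simulated "images".

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_state, n_obs, n_ens = 50, 10, 30

    H = np.zeros((n_obs, n_state))                       # observation operator: sample 10 surface points
    H[np.arange(n_obs), np.arange(0, n_state, 5)] = 1.0

    obs_ens = rng.normal(1.0, 0.2, size=(n_ens, n_obs))  # synthetic ensemble of Chl "images"
    R = np.cov(obs_ens, rowvar=False)                    # observational error covariance from the ensemble

    Pf = 0.05 * np.eye(n_state)                          # forecast error covariance (stand-in for the EOF version)
    x_f = np.full(n_state, 1.0)                          # forecast (background) Chl state
    y = rng.normal(1.1, 0.1, size=n_obs)                 # assimilated surface Chl observations

    # OI analysis: x_a = x_f + K (y - H x_f),  K = Pf H^T (H Pf H^T + R)^-1
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    x_a = x_f + K @ (y - H @ x_f)
    print(np.round(x_a[:10], 3))
    ```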

  13. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.

  14. Prostate Localization on Daily Cone-Beam Computed Tomography Images: Accuracy Assessment of Similarity Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinkoo, E-mail: jkim3@hfhs.or; Hammoud, Rabih; Pradhan, Deepak

    2010-07-15

    Purpose: To evaluate different similarity metrics (SM) using natural calcifications and observation-based measures to determine the most accurate prostate and seminal vesicle localization on daily cone-beam CT (CBCT) images. Methods and Materials: CBCT images of 29 patients were retrospectively analyzed; 14 patients with prostate calcifications (calcification data set) and 15 patients without calcifications (no-calcification data set). Three groups of test registrations were performed. Test 1: 70 CT/CBCT pairs from the calcification data set were registered using 17 SMs (6,580 registrations) and compared using the calcification mismatch error as an endpoint. Test 2: Using the four best SMs from Test 1, 75 CT/CBCT pairs in the no-calcification data set were registered (300 registrations). Accuracy of contour overlays was ranked visually. Test 3: For the best SM from Tests 1 and 2, accuracy was estimated using 356 CT/CBCT registrations. Additionally, target expansion margins were investigated for generating registration regions of interest. Results: Test 1: Incremental sign correlation (ISC), gradient correlation (GC), gradient difference (GD), and normalized cross correlation (NCC) showed the smallest errors (μ ± σ: 1.6 ± 0.9 to 2.9 ± 2.1 mm). Test 2: Two of the three reviewers ranked GC higher. Test 3: Using GC, 96% of registrations showed <3-mm error when calcifications were filtered. Errors were left/right: 0.1 ± 0.5 mm, anterior/posterior: 0.8 ± 1.0 mm, and superior/inferior: 0.5 ± 1.1 mm. The existence of calcifications increased the success rate to 97%. Expansion margins of 4-10 mm were equally successful. Conclusion: Gradient-based SMs were most accurate. Estimated error was found to be <3 mm (1.1 mm SD) in 96% of the registrations. Results suggest that the contour expansion margin should be no less than 4 mm.
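    Of the similarity metrics compared, normalized cross correlation (NCC) is the simplest to write down: subtract each image's mean, divide by its standard deviation, and average the product. A minimal sketch for two equally shaped sub-volumes follows; the gradient-based metrics that performed best in the study are not shown.

    ```python
    import numpy as np

    def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
        """NCC between two equally shaped images or volumes, in [-1, 1]."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))

    rng = np.random.default_rng(6)
    ct = rng.random((64, 64, 32))                     # synthetic planning-CT sub-volume
    cbct = ct + rng.normal(0.0, 0.1, size=ct.shape)   # synthetic CBCT of the same anatomy plus noise
    print(f"NCC = {normalized_cross_correlation(ct, cbct):.3f}")
    ```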

  15. Optimal preview control for a linear continuous-time stochastic control system in finite-time horizon

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi

    2017-01-01

    This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system in finite-time horizon, using the method of augmented error system. First, an assistant system is introduced for state shifting. Then, in order to overcome the difficulty of the state equation of the stochastic control system being unable to be differentiated because of Brownian motion, the integrator is introduced. Thus, the augmented error system which contains the integrator vector, control input, reference signal, error vector and state of the system is reconstructed. This leads to the tracking problem of the optimal preview control of the linear stochastic control system being transformed into the optimal output tracking problem of the augmented error system. With the method of dynamic programming in the theory of stochastic control, the optimal controller with previewable signals of the augmented error system being equal to the controller of the original system is obtained. Finally, numerical simulations show the effectiveness of the controller.

  16. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation, P_b approximately equal to (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  17. Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    1990-01-01

    A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
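    The variant described in the last sentence can be sketched directly: apply ordered dithering with a Bayer threshold matrix to each phosphor image, but invert the threshold matrix for one channel so that its quantization error is negatively correlated with the others. The matrix size, image, and choice of inverted channel are illustrative; equal-luminance quantization steps are not modeled here.

    ```python
    import numpy as np

    # Standard 4x4 Bayer matrix, thresholds scaled into (0, 1)
    BAYER4 = (np.array([[ 0,  8,  2, 10],
                        [12,  4, 14,  6],
                        [ 3, 11,  1,  9],
                        [15,  7, 13,  5]]) + 0.5) / 16.0

    def ordered_dither(channel: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
        """Binary ordered dither of a [0, 1] channel against a tiled threshold matrix."""
        h, w = channel.shape
        ty, tx = thresholds.shape
        tiled = np.tile(thresholds, (h // ty + 1, w // tx + 1))[:h, :w]
        return (channel > tiled).astype(float)

    rng = np.random.default_rng(7)
    rgb = rng.random((64, 64, 3))                              # synthetic continuous-tone image in [0, 1]

    out = np.empty_like(rgb)
    out[..., 0] = ordered_dither(rgb[..., 0], BAYER4)          # red: normal thresholds
    out[..., 1] = ordered_dither(rgb[..., 1], 1.0 - BAYER4)    # green: inverted thresholds
    out[..., 2] = ordered_dither(rgb[..., 2], BAYER4)          # blue: normal thresholds

    err = out - rgb
    print("R-G quantization error correlation:",
          np.corrcoef(err[..., 0].ravel(), err[..., 1].ravel())[0, 1])
    ```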

  18. Differential transfer processes in incremental visuomotor adaptation.

    PubMed

    Seidler, Rachel D

    2005-01-01

    Visuomotor adaptive processes were examined by testing transfer of adaptation between similar conditions. Participants made manual aiming movements with a joystick to hit targets on a computer screen, with real-time feedback display of their movement. They adapted to three different rotations of the display in a sequential fashion, with a return to baseline display conditions between rotations. Adaptation was better when participants had prior adaptive experiences. When performance was assessed using direction error (calculated at the time of peak velocity) and initial endpoint error (error before any overt corrective actions), transfer was greater when the final rotation reflected an addition of previously experienced rotations (adaptation order 30 degrees rotation, 15 degrees, 45 degrees) than when it was a subtraction of previously experienced conditions (adaptation order 45 degrees rotation, 15 degrees, 30 degrees). Transfer was equal regardless of adaptation order when performance was assessed with final endpoint error (error following any discrete, corrective actions). These results imply the existence of multiple independent processes in visuomotor adaptation.

  19. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

    and Technology Organisation DSTO—TN—0761 ABSTRACT This report investigates the estimation of bit error rates in digital communications, motivated by...recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase

  20. A bayesian approach to classification criteria for spectacled eiders

    USGS Publications Warehouse

    Taylor, B.L.; Wade, P.R.; Stehn, R.A.; Cochrane, J.F.

    1996-01-01

    To facilitate decisions to classify species according to risk of extinction, we used Bayesian methods to analyze trend data for the Spectacled Eider, an arctic sea duck. Trend data from three independent surveys of the Yukon-Kuskokwim Delta were analyzed individually and in combination to yield posterior distributions for population growth rates. We used classification criteria developed by the recovery team for Spectacled Eiders that seek to equalize errors of under- or overprotecting the species. We conducted both a Bayesian decision analysis and a frequentist (classical statistical inference) decision analysis. Bayesian decision analyses are computationally easier, yield basically the same results, and yield results that are easier to explain to nonscientists. With the exception of the aerial survey analysis of the 10 most recent years, both Bayesian and frequentist methods indicated that an endangered classification is warranted. The discrepancy between surveys warrants further research. Although the trend data are abundance indices, we used a preliminary estimate of absolute abundance to demonstrate how to calculate extinction distributions using the joint probability distributions for population growth rate and variance in growth rate generated by the Bayesian analysis. Recent apparent increases in abundance highlight the need for models that apply to declining and then recovering species.

  1. Video-Based Fingerprint Verification

    PubMed Central

    Qin, Wei; Yin, Yilong; Liu, Lili

    2013-01-01

    Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283
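    Since the figure of merit here is the equal error rate, a short sketch of how EER is typically computed from match scores may help: sweep a threshold over the pooled genuine and impostor scores, compute the false acceptance and false rejection rates at each threshold, and read off the point where they cross. The score distributions below are synthetic, not the paper's video-based scores.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    genuine = rng.normal(0.75, 0.10, size=500)     # same-finger match scores (synthetic)
    impostor = rng.normal(0.40, 0.12, size=5000)   # different-finger match scores (synthetic)

    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptance rate
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejection rate

    i = np.argmin(np.abs(far - frr))                              # crossing point of FAR and FRR
    eer = (far[i] + frr[i]) / 2.0
    print(f"EER = {eer:.2%} at threshold {thresholds[i]:.3f}")
    ```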

  2. ϱ0 production in deep inelastic μ-p interactions

    NASA Astrophysics Data System (ADS)

    Aubert, J. J.; Bassompierre, G.; Becks, K. H.; Benchouk, C.; Best, C.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Broll, C.; Brown, S.; Carr, J.; Clifft, R. W.; Cobb, J. H.; Coignet, G.; Combley, F.; Court, G. R.; D'Agostini, G.; Dau, W. D.; Davies, J. K.; Déclais, Y.; Dosselli, U.; Drees, J.; Edwards, A.; Edwards, M.; Favier, J.; Ferrero, M. I.; Flauger, W.; Forsbach, H.; Gabathuler, E.; Gamet, R.; Gayler, J.; Gerhardt, V.; Gössling, C.; Haas, J.; Hamacher, K.; Hayman, P.; Henckes, M.; Korbel, V.; Landgraf, U.; Leenen, M.; Maire, M.; Minssieux, H.; Mohr, W.; Montgomery, H. E.; Moser, K.; Mount, R. P.; Nagy, E.; Nassalski, J.; Norton, P. R.; McNicholas, J.; Osborne, A. M.; Payre, P.; Peroni, C.; Pessard, H.; Pietrzyk, U.; Rith, K.; Schneegans, M.; Schneider, A.; Sloan, T.; Stier, H. E.; Stockhausen, W.; Thénard, J. M.; Thompson, J. C.; Urban, L.; Villers, M.; Wahlen, H.; Whalley, M.; Williams, D.; Williams, W. S. C.; Williamson, J.; Wimpenny, S. J.

    1983-12-01

    Inclusive ϱ0 meson production has been measured in 120 GeV and 280 GeV muon-proton interactions. Distributions of z and pT2 are presented. Primary ϱ0 production is found to be equal to that of π0 production within errors.

  3. Absorption of Solar Radiation by the Cloudy Atmosphere: Further Interpretations of Collocated Aircraft Measurements

    NASA Technical Reports Server (NTRS)

    Cess, R. D.; Zhang, Minghua; Valero, Francisco P. J.; Pope, Shelly K.; Bucholtz, Anthony; Bush, Brett; Zender, Charles S.

    1998-01-01

    We have extended the interpretations made in two prior studies of the aircraft shortwave radiation measurements that were obtained as part of the Atmospheric Radiation Measurements (ARM) Enhanced Shortwave Experiments (ARESE). These extended interpretations use the 500 nm (10 nm bandwidth) measurements to minimize sampling errors in the broadband measurements. It is indicated that the clouds present during this experiment absorb more shortwave radiation than predicted for clear skies and thus by theoretical models, that at least some (less than or equal to 20%) of this enhanced cloud absorption occurs at wavelengths less than 680 nm, and that the observed cloud absorption does not appear to be an artifact of either sampling errors or instrument calibration errors.

  4. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.

    2011-05-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interactions of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  5. Effects of motion base and g-seat cueing on simulator pilot performance

    NASA Technical Reports Server (NTRS)

    Ashworth, B. R.; Mckissick, B. T.; Parrish, R. V.

    1984-01-01

    In order to measure and analyze the effects of a motion plus g-seat cueing system, a manned-flight-simulation experiment was conducted utilizing a pursuit tracking task and an F-16 simulation model in the NASA Langley visual/motion simulator. This experiment provided the information necessary to determine whether motion and g-seat cues have an additive effect on the performance of this task. With respect to the lateral tracking error and roll-control stick force, the answer is affirmative. It is shown that presenting the two cues simultaneously caused significant reductions in lateral tracking error and that using the g-seat and motion base separately provided essentially equal reductions in the pilot's lateral tracking error.

  6. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery☆

    PubMed Central

    Arba-Mosquera, Samuel; Aslanides, Ioannis M.

    2012-01-01

    Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model, which directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature movements, as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay, has been developed. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates basically duplicate pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than observed decentrations in the clinical settings. There is no single parameter that ‘alone’ minimizes the positioning error. It is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.

  7. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).

  8. Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System

    PubMed Central

    Yu, Fei; Sun, Qian

    2014-01-01

    Because it maintains high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key parameters, the rotating angular rate strongly influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. One optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS. PMID:24759115

  9. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    PubMed

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  10. Shape space figure-8 solution of three body problem with two equal masses

    NASA Astrophysics Data System (ADS)

    Yu, Guowei

    2017-06-01

    In a preprint by Montgomery (https://people.ucsc.edu/~rmont/Nbdy.html), the author attempted to prove the existence of a shape space figure-8 solution of the Newtonian three body problem with two equal masses (it looks like a figure 8 in the shape space, which is different from the famous figure-8 solution with three equal masses (Chenciner and Montgomery 2000 Ann. Math. 152 881-901)). Unfortunately there is an error in the proof and the problem is still open. Considering the α-homogeneous Newton-type potential, 1/r^α, and using the action minimization method, we prove the existence of this solution for α ∈ (1, 2); for α = 1 (the Newtonian potential), an extra condition is required, which unfortunately seems hard to verify at this moment.

  11. A priority dispatch system for emergency medical services.

    PubMed

    Slovis, C M; Carruth, T B; Seitz, W J; Thomas, C M; Elsea, W R

    1985-11-01

    A decision tree priority dispatch system for emergency medical services (EMS) was developed and implemented in Atlanta and Fulton County, Georgia. The dispatch system shortened the average response time from 14.2 minutes to 10.4 minutes for the 30% of patients deemed most urgent (P less than or equal to .05); resulted in a significant increase in the use of advanced life support units for this group (P less than or equal to .02); decreased the number of calls that required a backup ambulance service; and significantly increased conformity to national EMS response time standards for critically ill and injured patients (P less than or equal to .0009). Due to dispatch error, 0.3% of calls were dispatched as least severe but subsequently were found to be most urgent.

  12. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System.

    PubMed

    Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang

    2018-05-04

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technology method called Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In fact, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including acceleration-deceleration process, and instability of the angular rate on the navigation accuracy of RSSINS is deduced and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.

  13. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  14. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  15. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  16. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  17. Impact of an antiretroviral stewardship strategy on medication error rates.

    PubMed

    Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E

    2018-05-02

    The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate ( p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error ( p < 0.001) and those with 2 or more errors ( p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention ( p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  18. Meiotic Divisions: No Place for Gender Equality.

    PubMed

    El Yakoubi, Warif; Wassmann, Katja

    2017-01-01

    In multicellular organisms the fusion of two gametes with a haploid set of chromosomes leads to the formation of the zygote, the first cell of the embryo. Accurate execution of the meiotic cell division to generate a female and a male gamete is required for the generation of healthy offspring harboring the correct number of chromosomes. Unfortunately, meiosis is error prone. This has severe consequences for fertility and under certain circumstances, health of the offspring. In humans, female meiosis is extremely error prone. In this chapter we will compare male and female meiosis in humans to illustrate why and at which frequency errors occur, and describe how this affects pregnancy outcome and health of the individual. We will first introduce key notions of cell division in meiosis and how they differ from mitosis, followed by a detailed description of the events that are prone to errors during the meiotic divisions.

  19. The error-related negativity as a state and trait measure: motivation, personality, and ERPs in response to errors.

    PubMed

    Pailing, Patricia E; Segalowitz, Sidney J

    2004-01-01

    This study examines changes in the error-related negativity (ERN/Ne) related to motivational incentives and personality traits. ERPs were gathered while adults completed a four-choice letter task during four motivational conditions. Monetary incentives for finger and hand accuracy were altered across motivation conditions to either be equal or favor one type of accuracy over the other in a 3:1 ratio. Larger ERN/Ne amplitudes were predicted with increased incentives, with personality moderating this effect. Results were as expected: Individuals higher on conscientiousness displayed smaller motivation-related changes in the ERN/Ne. Similarly, those low on neuroticism had smaller effects, with the effect of Conscientiousness absent after accounting for Neuroticism. These results emphasize an emotional/evaluative function for the ERN/Ne, and suggest that the ability to selectively invest in error monitoring is moderated by underlying personality.

  20. Error-Trellis Construction for Convolutional Codes Using Shifted Error/Syndrome-Subsequences

    NASA Astrophysics Data System (ADS)

    Tajima, Masato; Okino, Koji; Miyagoshi, Takashi

    In this paper, we extend the conventional error-trellis construction for convolutional codes to the case where a given check matrix H(D) has a factor D^l in some column (row). In the first case, there is a possibility that the size of the state space can be reduced using shifted error-subsequences, whereas in the second case, the size of the state space can be reduced using shifted syndrome-subsequences. The construction presented in this paper is based on the adjoint-obvious realization of the corresponding syndrome former H^T(D). In the case where all the columns and rows of H(D) are delay free, the proposed construction reduces to the conventional one of Schalkwijk et al. We also show that the proposed construction can equally realize the state-space reduction shown by Ariel et al. Moreover, we clarify the difference between their construction and ours using examples.

  1. Single-particle trajectories reveal two-state diffusion-kinetics of hOGG1 proteins on DNA.

    PubMed

    Vestergaard, Christian L; Blainey, Paul C; Flyvbjerg, Henrik

    2018-03-16

    We reanalyze trajectories of hOGG1 repair proteins diffusing on DNA. A previous analysis of these trajectories with the popular mean-squared-displacement approach revealed only simple diffusion. Here, a new optimal estimator of diffusion coefficients reveals two-state kinetics of the protein. A simple, solvable model, in which the protein randomly switches between a loosely bound, highly mobile state and a tightly bound, less mobile state is the simplest possible dynamic model consistent with the data. It yields accurate estimates of hOGG1's (i) diffusivity in each state, uncorrupted by experimental errors arising from shot noise, motion blur and thermal fluctuations of the DNA; (ii) rates of switching between states and (iii) rate of detachment from the DNA. The protein spends roughly equal time in each state. It detaches only from the loosely bound state, with a rate that depends on pH and the salt concentration in solution, while its rates for switching between states are insensitive to both. The diffusivity in the loosely bound state depends primarily on pH and is three to ten times higher than in the tightly bound state. We propose and discuss some new experiments that take full advantage of the new tools of analysis presented here.
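
    As a toy illustration of the two-state picture described above (with made-up rates and diffusivities, not the fitted hOGG1 values), the sketch below simulates a particle that switches between a mobile and a less mobile state while diffusing in one dimension.

        # Toy simulation of two-state switching diffusion on a line.
        # Rates and diffusivities are illustrative, not the fitted hOGG1 values.
        import numpy as np

        rng = np.random.default_rng(1)
        dt = 1e-3                       # time step (s)
        D = {0: 0.5, 1: 0.05}           # diffusivity in loose (0) and tight (1) state (um^2/s)
        k_switch = {0: 20.0, 1: 10.0}   # switching rate out of each state (1/s)

        state, x, track = 0, 0.0, []
        for _ in range(5000):
            if rng.random() < k_switch[state] * dt:              # Markov switching between states
                state = 1 - state
            x += rng.normal(0.0, np.sqrt(2.0 * D[state] * dt))   # Brownian step in the current state
            track.append(x)

        print(f"final position after {len(track) * dt:.1f} s: {track[-1]:.3f} um")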

  2. The reaction H + C4H2 - Absolute rate constant measurement and implication for atmospheric modeling of Titan

    NASA Technical Reports Server (NTRS)

    Nava, D. F.; Mitchell, M. B.; Stief, L. J.

    1986-01-01

    The absolute rate constant for the reaction H + C4H2 has been measured over the temperature (T) interval 210-423 K, using the technique of flash photolysis-resonance fluorescence. At each of the five temperatures employed, the results were independent of variations in C4H2 concentration, total pressure of Ar or N2, and flash intensity (i.e., the initial H concentration). The rate constant, k, was found to be equal to 1.39 x 10^-10 exp(-1184/T) cm^3/s, with an error of one standard deviation. The Arrhenius parameters at the high pressure limit determined here for the H + C4H2 reaction are consistent with those for the corresponding reactions of H with C2H2 and C3H4. Implications of the kinetic carbon chemistry results, particularly those at low temperature, are considered for models of the atmospheric carbon chemistry of Titan. The rate of this reaction, relative to that of the analogous, but slower, reaction of H + C2H2, appears to make H + C4H2 a very feasible reaction pathway for effective conversion of H atoms to molecular hydrogen in the stratosphere of Titan.
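
    For reference, the Arrhenius expression quoted above can be evaluated directly; the short sketch below does so at the end points of the measured 210-423 K range and at room temperature (values rounded, without propagating the quoted uncertainty).

        # Evaluate the reported Arrhenius fit k(T) = 1.39e-10 * exp(-1184 / T) cm^3/s
        # at the end points of the 210-423 K range and at room temperature.
        import math

        def k(T):
            """Rate constant in cm^3/s from the reported Arrhenius fit."""
            return 1.39e-10 * math.exp(-1184.0 / T)

        for T in (210.0, 298.0, 423.0):
            print(f"T = {T:5.1f} K  ->  k = {k(T):.3e} cm^3/s")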

  3. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  4. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  5. In vitro evaluation of Augmentin by broth microdilution and disk diffusion susceptibility testing: regression analysis, tentative interpretive criteria, and quality control limits.

    PubMed Central

    Fuchs, P C; Barry, A L; Thornsberry, C; Gavan, T L; Jones, R N

    1983-01-01

    Augmentin (Beecham Laboratories, Bristol, Tenn.), a combination drug consisting of two parts amoxicillin to one part clavulanic acid, a potent beta-lactamase inhibitor, was evaluated in vitro in comparison with ampicillin or amoxicillin or both for its inhibitory and bactericidal activities against selected clinical isolates. Regression analysis was performed and tentative disk diffusion susceptibility breakpoints were determined. A multicenter performance study of the disk diffusion test was conducted with three quality control organisms to determine tentative quality control limits. All methicillin-susceptible staphylococci and Haemophilus influenzae isolates were susceptible to Augmentin, although the minimal inhibitory concentrations for beta-lactamase-producing strains of both groups were, on the average, fourfold higher than those for enzyme-negative strains. Among the Enterobacteriaceae, Augmentin exhibited significantly greater activity than did ampicillin against Klebsiella pneumoniae, Citrobacter diversus, Proteus vulgaris, and about one-third of the Escherichia coli strains tested. Bactericidal activity usually occurred at the minimal inhibitory concentration. There was a slight inoculum concentration effect on the Augmentin minimal inhibitory concentrations. On the basis of regression and error rate-bounded analyses, the suggested interpretive disk diffusion susceptibility breakpoints for Augmentin are: susceptible, greater than or equal to 18 mm; resistant, less than or equal to 13 mm (gram-negative bacilli); and susceptible, greater than or equal to 20 mm (staphylococci and H. influenzae). The use of a beta-lactamase-producing organism, such as E. coli Beecham 1532, is recommended for quality assurance of Augmentin susceptibility testing. PMID:6625554

  6. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects has a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
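
    The abstract does not reproduce the full Noise Reduction Rating formula, so the sketch below illustrates the general approach on a simplified, hypothetical rating of the form R = mean attenuation - 2 x SD, comparing an analytic propagation-of-errors estimate of the rating uncertainty with a Monte Carlo estimate. All numbers are assumed for illustration only.

        # Compare analytic propagation of errors with Monte Carlo for a
        # hypothetical rating R = mean(attenuation) - 2 * SD(attenuation).
        # Numbers are illustrative; this is not the regulatory NRR formula.
        import numpy as np

        rng = np.random.default_rng(2)
        mu, sigma, n = 25.0, 6.0, 20   # assumed mean/SD of attenuation (dB) and panel size

        # Analytic: Var(mean) = s^2/n, Var(SD) ~ s^2/(2(n-1)); independent for normal data
        se_analytic = sigma * np.sqrt(1.0 / n + 2.0 / (n - 1))

        # Monte Carlo: resimulate many panels and recompute the hypothetical rating each time
        ratings = []
        for _ in range(20000):
            att = rng.normal(mu, sigma, n)
            ratings.append(att.mean() - 2.0 * att.std(ddof=1))

        print(f"analytic SE ~ {se_analytic:.2f} dB, Monte Carlo SE ~ {np.std(ratings):.2f} dB")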

  7. Acute antidepressant effects of right unilateral ultra-brief ECT: a double-blind randomised controlled trial.

    PubMed

    Mayur, Prashanth; Byth, Karen; Harris, Anthony

    2013-07-01

    Shortening the pulse width to 0.3 ms holds neurophysiological and clinical promise of making ECT safer by limiting cognitive side effects. However, the antidepressant effects of right ultra-brief unilateral ECT are under contention. In an acute ECT course, the antidepressant equivalence of ultra-brief right unilateral ECT to high-dose brief pulse right unilateral ECT was investigated. Severely depressed patients were randomised to 1 ms brief pulse (n=18) or 0.3 ms ultra-brief pulse (n=17) right unilateral ECT, both at high dose (6 times threshold stimulus dose) given thrice weekly. Depression severity was measured using the Montgomery Asberg Depression Rating Scale at baseline, after 8 treatments and after the acute course of ECT. Depression severity declined equally in both groups: F(1.27, 41.97)=0.31, p=0.63. Median time to remission in days (95% CI) was 26 (18.6-33.4) for brief pulse ECT and 28 (17.9-38.0) for ultra-brief pulse ECT. The small sample size in this study increases the likelihood of type 2 error. In severe depression, high-dose ultra-brief right unilateral ECT appears to show an acute antidepressant response matching that of an equally high-dose brief pulse right unilateral ECT. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Errors in laboratory medicine: practical lessons to improve patient safety.

    PubMed

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.

  9. Sensitivity of planetary cruise navigation to earth orientation calibration errors

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Folkner, W. M.

    1995-01-01

    A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.

  10. The statistical validity of nursing home survey findings.

    PubMed

    Woolley, Douglas C

    2011-11-01

    The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
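
    The sample-size argument above can be illustrated with a simple binomial calculation. The sketch below (using round numbers consistent with the abstract, not the paper's exact analysis) estimates how often a surveyor who issues a citation whenever the observed med-pass error rate exceeds 5% would do so, for a few true error rates and observation counts.

        # Binomial sketch: probability that the observed med-pass error rate
        # exceeds 5% for a given true rate and number of observations.
        # Illustrative only; not the paper's exact analysis.
        import math
        from scipy.stats import binom

        def p_cited(true_rate, n_obs, threshold=0.05):
            """P(observed error count exceeds threshold * n_obs) under a binomial model."""
            cutoff = math.floor(threshold * n_obs)   # largest count that does NOT trigger a citation
            return 1.0 - binom.cdf(cutoff, n_obs, true_rate)

        for n_obs in (50, 200, 2000):
            for true_rate in (0.05, 0.10):
                print(f"n = {n_obs:4d}, true rate = {true_rate:.2f}: "
                      f"P(citation) = {p_cited(true_rate, n_obs):.2f}")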

  11. How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?

    PubMed Central

    Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina

    2015-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615

  12. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    PubMed

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.

  13. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.

    PubMed

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Preanalytical errors, occurring anywhere in the process from the initial test request to the admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for sample rejection and the rejection rates in certain test groups in our laboratory. This preliminary study examined the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Rejection rates for hemolysis, clotted specimens and insufficient sample volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood-drawing errors, especially insufficient specimen volume in the coagulation test group.

  14. Holographic Alignment Breadboard

    DTIC Science & Technology

    1982-05-01

    collimating lens adjustments (Figure 4). Focusing error can be eliminated by adjusting the collimating lens group along its optical axis. A lateral adjustment...approximately equal through a suitable choice of the... [Figure: HOAB optical system schematic showing the laser, beam expander, and collimating lens; remainder of the record is unrecoverable OCR residue.]

  15. Microcomputers and Stimulus Control: From the Laboratory to the Classroom.

    ERIC Educational Resources Information Center

    LeBlanc, Judith M.; And Others

    1985-01-01

    The need for developing a technology of teaching that equals current sophistication of microcomputer technology is addressed. The importance of principles of learning and behavior analysis is emphasized. Potential roles of stimulus control and precise error analysis in educational program development and in prescription of specific learning…

  16. A Comparison Study of Item Exposure Control Strategies in MCAT

    ERIC Educational Resources Information Center

    Mao, Xiuzhen; Ozdemir, Burhanettin; Wang, Yating; Xiu, Tao

    2016-01-01

    Four item selection indexes with and without exposure control are evaluated and compared in multidimensional computerized adaptive testing (CAT). The four item selection indices are D-optimality, Posterior expectation Kullback-Leibler information (KLP), the minimized error variance of the linear combination score with equal weight (V1), and the…

  17. Two Philosophical Errors Concerning School Choice.

    ERIC Educational Resources Information Center

    Brighouse, Harry

    1997-01-01

    Argues, in contrast to David Hargreaves, that libertarianism implies a mild presumption against school choice, and that notions of common good are significant to educational decision making only when deciding between sets of institutions that perform equally well at delivering their obligations. Links these issues to questions about school choice.…

  18. Optimization of Parameters for Manufacture Nanopowder Bioceramics at Machine Pulverisette 6 by Taguchi and ANOVA Method

    NASA Astrophysics Data System (ADS)

    Van Hoten, Hendri; Gunawarman; Mulyadi, Ismet Hari; Kurniawan Mainil, Afdhal; Putra, Bismantoloa dan

    2018-02-01

    This research concerns the manufacture of nanopowder bioceramics from local materials by ball milling for biomedical applications. Source materials for the manufacture of medicines are plants, animal tissues, microbial structures and engineered biomaterials. Raw medicinal materials take the form of a powder before being mixed. In this case, the research aim is to find sources of biomedical materials that, as nanoscale powders, can be used as raw material for medicine. One bioceramic material that can be used as a raw material for medicine is chicken eggshell. This research develops methods for manufacturing nanopowder material from chicken eggshells by ball milling, using the Taguchi method and ANOVA. Eggshells were milled with milling rates of 150, 200 and 250 rpm, milling times of 1, 2 and 3 hours, and grinding-ball to eggshell-powder weight ratios (BPR) of 1:6, 1:8 and 1:10. Before milling, the eggshells were crushed and calcined at a temperature of 900°C. After milling, the fine eggshell powder was characterized using SEM to determine its particle size. The result of this research is the optimum parameter set from the Taguchi design analysis: a milling rate of 250 rpm, a milling time of 3 hours and a BPR of 1:6, giving an average eggshell powder size of 1.305 μm. Milling speed, milling time and ball-to-powder weight ratio contribute 60.82%, 30.76% and 6.64%, respectively, with an error of 1.78%.
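
    The percent contributions quoted above follow the standard Taguchi/ANOVA decomposition: each factor's sum of squares divided by the total sum of squares. The sketch below shows that arithmetic on hypothetical sums of squares chosen to reproduce the reported proportions; it does not use the study's data.

        # Percent-contribution arithmetic used in Taguchi/ANOVA studies:
        # contribution_i = SS_i / SS_total. Sums of squares below are hypothetical,
        # chosen only to mimic the proportions reported in the abstract.
        ss = {
            "milling speed (rpm)": 60.82,
            "milling time (h)": 30.76,
            "ball-to-powder ratio": 6.64,
            "error": 1.78,
        }
        ss_total = sum(ss.values())
        for factor, value in ss.items():
            print(f"{factor:22s}: {100.0 * value / ss_total:6.2f} %")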

  19. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.

  20. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  1. 7 CFR 275.23 - Determination of State agency program performance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r1″ + r2″. (ii) If FNS determines...

  2. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    PubMed

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
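
    For context on the F_ST values referred to above, a per-SNP divergence estimate can be computed from two populations' allele frequencies with a simple G_ST-style estimator, sketched below with made-up frequencies (the paper's simulations may use a different estimator).

        # Per-SNP divergence between two populations using a simple
        # F_ST (G_ST-style) estimator from allele frequencies.
        # Frequencies are made up; the paper's estimator may differ.
        def fst_two_pops(p1, p2):
            """F_ST = (H_T - H_S) / H_T for a biallelic SNP in two populations."""
            p_bar = 0.5 * (p1 + p2)
            h_t = 2.0 * p_bar * (1.0 - p_bar)                            # pooled expected heterozygosity
            h_s = 0.5 * (2.0 * p1 * (1.0 - p1) + 2.0 * p2 * (1.0 - p2))  # mean within-population
            return 0.0 if h_t == 0.0 else (h_t - h_s) / h_t

        for p1, p2 in [(0.5, 0.5), (0.6, 0.4), (0.9, 0.1)]:
            print(f"p1 = {p1:.1f}, p2 = {p2:.1f}  ->  F_ST = {fst_two_pops(p1, p2):.3f}")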

  3. Elevation Change of the Southern Greenland Ice Sheet from Satellite Radar Altimeter Data

    NASA Technical Reports Server (NTRS)

    Haines, Bruce J.

    1999-01-01

    Long-term changes in the thickness of the polar ice sheets are important indicators of climate change. Understanding the contributions to the global water mass balance from the accumulation or ablation of grounded ice in Greenland and Antarctica is considered crucial for determining the source of the about 2 mm/yr sea-level rise in the last century. Though the Antarctic ice sheet is much larger than its northern counterpart, the Greenland ice sheet is more likely to undergo dramatic changes in response to a warming trend. This can be attributed to the warmer Greenland climate, as well as a potential for amplification of a global warming trend in the polar regions of the Northern Hemisphere. In collaboration with Drs. Curt Davis and Craig Kluever of the University of Missouri, we are using data from satellite radar altimeters to measure changes in the elevation of the Southern Greenland ice sheet from 1978 to the present. Difficulties with systematic altimeter measurement errors, particularly in intersatellite comparisons, beset earlier studies of the Greenland ice sheet thickness. We use altimeter data collected contemporaneously over the global ocean to establish a reference for correcting ice-sheet data. In addition, the waveform data from the ice-sheet radar returns are reprocessed to better determine the range from the satellite to the ice surface. At JPL, we are focusing our efforts principally on the reduction of orbit errors and range biases in the measurement systems on the various altimeter missions. Our approach emphasizes global characterization and reduction of the long-period orbit errors and range biases using altimeter data from NASA's Ocean Pathfinder program. Along-track sea-height residuals are sequentially filtered and backwards smoothed, and the radial orbit errors are modeled as sinusoids with a wavelength equal to one revolution of the satellite. The amplitudes of the sinusoids are treated as exponentially-correlated noise processes with a time-constant of six days. Measurement errors (e.g., altimeter range bias) are simultaneously recovered as constant parameters. The corrections derived from the global ocean analysis are then applied over the Greenland ice sheet. The orbit error and measurement bias corrections for different missions are developed in a single framework to enable robust linkage of ice-sheet measurements from 1978 to the present. In 1998, we completed our re-evaluation of the 1978 Seasat and 1985-1989 Geosat Exact Repeat Mission data. The estimates of ice thickness over Southern Greenland (south of 72N and above 2000 m) from 1978 to 1988 show large regional variations (+/-18 cm/yr), but yield an overall rate of +1.5 +/- 0.5 cm/yr (one standard error). Accounting for systematic errors, the estimate may not be significantly different from the null growth rate. The average elevation change from 1978 to 1988 is too small to assess whether the Greenland ice sheet is undergoing a long-term change.
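
    A minimal sketch of the once-per-revolution orbit-error model described above: the radial error is represented as a sinusoid with a period of one orbital revolution plus a constant range bias, and the coefficients are recovered from noisy along-track height residuals. The plain least-squares fit and all parameter values below are assumptions for illustration; the study itself uses a sequential filter and backwards smoother with exponentially correlated amplitudes.

        # Fit a once-per-revolution sinusoid plus a constant range bias to
        # simulated along-track sea-height residuals (least-squares sketch only;
        # the actual analysis uses a sequential filter/smoother).
        import numpy as np

        rng = np.random.default_rng(3)
        T_rev = 6037.0                         # assumed orbital period (s)
        t = np.arange(0.0, 3.0 * T_rev, 10.0)  # along-track sample times (s)
        omega = 2.0 * np.pi / T_rev

        true = 0.12 * np.sin(omega * t) - 0.07 * np.cos(omega * t) + 0.05  # error signal (m)
        resid = true + rng.normal(0.0, 0.10, t.size)                       # plus 10 cm noise

        # Least-squares fit of once-per-rev sine/cosine amplitudes and a constant bias
        A = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
        print("recovered sin, cos amplitudes and bias (m):", np.round(coef, 3))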

  4. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and weighted subregion matching method to improve the performance of iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described, which are orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, magnitude probability distribution function (MPDF) based strategy to reduce dimensionality of feature element, and compounded strategy combined OPDF and MPDF to further select optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted sub-region matching fusion. Particle swarm optimization is utilized to accelerate achieve different sub-region's weights and then weighted different subregions' matching scores to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  5. Estimation of Cyclic Shift with Delayed Correlation and Matched Filtering in Time Domain Cyclic-SLM for PAPR Reduction

    PubMed Central

    2016-01-01

    Time domain cyclic-selective mapping (TDC-SLM) reduces the peak-to-average power ratio (PAPR) in OFDM systems, but the amounts of the cyclic shifts are required to recover the transmitted signal at the receiver. One of the critical issues of the SLM scheme is sending the side information (SI), which reduces the throughput in wireless OFDM systems. The proposed scheme implements delayed correlation and matched filtering (DC-MF) to estimate the amounts of the cyclic shifts in the receiver. In the proposed scheme, the DC-MF is placed after the frequency domain equalization (FDE) to improve the accuracy of cyclic shift estimation. The accuracy rate of the proposed scheme reaches 100% at Eb/N0 = 5 dB and the bit error rate (BER) improves by 0.2 dB as compared with the conventional TDC-SLM. The BER performance of the proposed scheme is also better than that of the conventional TDC-SLM even when a nonlinear high power amplifier is assumed. PMID:27752539
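
    The cyclic-shift estimation step can be illustrated with a circular cross-correlation, which is the core operation behind a delayed-correlation/matched-filter detector. The sketch below is only an assumption-level stand-in for the paper's DC-MF stage: it takes the FDE output and a known reference waveform (both hypothetical) and returns the lag of the correlation peak.

```python
import numpy as np

def estimate_cyclic_shift(equalized, reference):
    """Return the cyclic shift (in samples) that best aligns the equalized OFDM
    symbol with the known reference, via the peak of their circular
    cross-correlation computed with FFTs."""
    corr = np.fft.ifft(np.fft.fft(equalized) * np.conj(np.fft.fft(reference)))
    return int(np.argmax(np.abs(corr)))

# Toy check: cyclically shift a random reference by 7 samples and recover it.
rng = np.random.default_rng(0)
ref = rng.standard_normal(64) + 1j * rng.standard_normal(64)
rx = np.roll(ref, 7) + 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
print(estimate_cyclic_shift(rx, ref))  # -> 7
```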

  6. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of the feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently obtain the different subregions' weights, and the weighted subregion matching scores are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.
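
    The weighted subregion fusion itself reduces to a weighted sum of per-subregion matching scores. The snippet below is a minimal sketch of that score-level fusion; the weights are simply assumed to be given, whereas in the paper they would be produced by the particle swarm optimizer.

```python
import numpy as np

def fused_score(subregion_scores, weights):
    """Score-level fusion: normalize the subregion weights and return the
    weighted sum of the per-subregion matching scores."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(subregion_scores, dtype=float)))

# Example with four subregions, the more reliable ones weighted more heavily.
print(fused_score([0.62, 0.55, 0.80, 0.40], weights=[0.3, 0.2, 0.4, 0.1]))
```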

  7. Nonlinearity-aware 200  Gbit/s DMT transmission for C-band short-reach optical interconnects with a single packaged electro-absorption modulated laser.

    PubMed

    Zhang, Lu; Hong, Xuezhi; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Schatz, Richard; Guo, Changjian; Zhang, Junwei; Nordwall, Fredrik; Engenhardt, Klaus M; Westergren, Urban; Popov, Sergei; Jacobsen, Gunnar; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia

    2018-01-15

    We experimentally demonstrate the transmission of a 200 Gbit/s discrete multitone (DMT) signal at the soft forward error correction limit in an intensity-modulation direct-detection system with a single C-band packaged distributed feedback laser and traveling-wave electro-absorption modulator (DFB-TWEAM), a digital-to-analog converter, and a photodiode. The bit-power loaded DMT signal is transmitted over 1.6 km of standard single-mode fiber with a net rate of 166.7 Gbit/s, achieving an effective electrical spectral efficiency of 4.93 bit/s/Hz. Meanwhile, net rates of 174.2 Gbit/s and 179.5 Gbit/s are also demonstrated over 0.8 km of SSMF and in an optical back-to-back case, respectively. The features of the packaged DFB-TWEAM are presented. The nonlinearity-aware digital signal processing algorithm for channel equalization is mathematically described; it improves the signal-to-noise ratio by up to 3.5 dB.

  8. Multipath interference test method for distributed amplifiers

    NASA Astrophysics Data System (ADS)

    Okada, Takahiro; Aida, Kazuo

    2005-12-01

    A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency-shift-keying (FSK) test signal. The lightwave source is composed of a DFB-LD that is directly modulated by a pulse stream passed through an equalizer, and it emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The baseband power spectrum peak that appears at the FSK frequency deviation can be converted to an MPI value using a calibration chart. The test method improves the minimum detectable MPI to as low as -70 dB, compared with the -50 dB of the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations for evaluating the error caused by the FSK repetition rate and the fiber length under test, and experiments on single-mode fibers and a distributed Raman amplifier.

  9. Improving the quality of cognitive screening assessments: ACEmobile, an iPad-based version of the Addenbrooke's Cognitive Examination-III.

    PubMed

    Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F

    2018-01-01

    Ensuring reliable administration and reporting of cognitive screening tests is fundamental to establishing good clinical practice and research. This study captured the rate and type of errors in clinical practice, using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics (n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or had arithmetical errors. In study 2, error rates seen in the ACE-III were reduced by 85%-93% using ACEmobile. Error rates are ubiquitous in routine clinical use of cognitive screening tests and the ACE-III. ACEmobile provides a framework for reducing administration, scoring, and arithmetical errors during cognitive screening.

  10. Documentation of study medication dispensing in a prospective large randomized clinical trial: experiences from the ARISTOTLE Trial.

    PubMed

    Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B

    2013-09-01

    In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.

  11. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast strongly depending on the site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating between systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.

  12. Prescribing errors during hospital inpatient care: factors influencing identification by pharmacists.

    PubMed

    Tully, Mary P; Buchan, Iain E

    2009-12-01

    To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median of 9 pharmacists collecting data per day, range 4-17) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at a patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors, but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.

  13. Gender equality and women's absolute status: a test of the feminist models of rape.

    PubMed

    Martin, Kimberly; Vieraitis, Lynne M; Britto, Sarah

    2006-04-01

    Feminist theory predicts both a positive and negative relationship between gender equality and rape rates. Although liberal and radical feminist theory predicts that gender equality should ameliorate rape victimization, radical feminist theorists have argued that gender equality may increase rape in the form of male backlash. Alternatively, Marxist criminologists focus on women's absolute socioeconomic status rather than gender equality as a predictor of rape rates, whereas socialist feminists combine both radical and Marxist perspectives. This study uses factor analysis to overcome multicollinearity limitations of past studies while exploring the relationship between women's absolute and relative socioeconomic status on rape rates in major U.S. cities using 2000 census data. The findings indicate support for both the Marxist and radical feminist explanations of rape but no support for the ameliorative hypothesis. These findings support a more inclusive socialist feminist theory that takes both Marxist and radical feminist hypotheses into account.

  14. Short-term and long-term effects of GDP on traffic deaths in 18 OECD countries, 1960-2011.

    PubMed

    Dadgar, Iman; Norström, Thor

    2017-02-01

    Research suggests that increases in gross domestic product (GDP) lead to increases in traffic deaths plausibly due to the increased road traffic induced by an expanding economy. However, there also seems to exist a long-term effect of economic growth that is manifested in improved traffic safety and reduced rates of traffic deaths. Previous studies focus on either the short-term, procyclical effect, or the long-term, protective effect. The aim of the present study is to estimate the short-term and long-term effects jointly in order to assess the net impact of GDP on traffic mortality. We extracted traffic death rates for the period 1960-2011 from the WHO Mortality Database for 18 OECD countries. Data on GDP/capita were obtained from the Maddison Project. We performed error correction modelling to estimate the short-term and long-term effects of GDP on the traffic death rates. The estimates from the error correction modelling for the entire study period suggested that a one-unit increase (US$1000) in GDP/capita yields an instantaneous short-term increase in the traffic death rate by 0.58 (p<0.001), and a long-term decrease equal to -1.59 (p<0.001). However, period-specific analyses revealed a structural break implying that the procyclical effect outweighs the protective effect in the period prior to 1976, whereas the reverse is true for the period 1976-2011. An increase in GDP leads to an immediate increase in traffic deaths. However, after the mid-1970s this short-term effect is more than outweighed by a markedly stronger protective long-term effect, whereas the reverse is true for the period before the mid-1970s. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
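
    Error correction modelling of this kind is commonly set up as a two-step (Engle-Granger style) regression: a long-run relation between the levels and a short-run regression of the differences that includes the lagged error-correction term. The sketch below is only a generic single-equation version with hypothetical yearly series y (traffic death rate) and x (GDP per capita); the paper's actual specification, with country effects and period breaks, is richer.

```python
import numpy as np
import statsmodels.api as sm

def error_correction_model(y, x):
    """Single-equation error correction model
        dy_t = b0 + b1*dx_t + g*(y_{t-1} - a - beta*x_{t-1}) + e_t,
    where b1 is the short-term effect of x and the error-correction term
    carries the long-run relation.  y and x are 1-D numpy arrays."""
    X_lr = sm.add_constant(x)
    longrun = sm.OLS(y, X_lr).fit()            # long-run (cointegrating) regression
    ect = y - longrun.predict(X_lr)            # error-correction term
    dy, dx = np.diff(y), np.diff(x)
    X_sr = sm.add_constant(np.column_stack([dx, ect[:-1]]))
    return sm.OLS(dy, X_sr).fit()              # params: [const, short-term, adjustment g]
```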

  15. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
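
    For reference, the classical Benjamini and Hochberg (1995) linear step-up procedure that serves as the comparator in this abstract can be written in a few lines; the sketch below assumes a plain array of p-values and a nominal FDR level q.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg linear step-up procedure: reject H_(1),...,H_(k) where
    k is the largest i with p_(i) <= q*i/m.  Returns a boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * (np.arange(1, m + 1) / m)
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        reject[order[:k + 1]] = True
    return reject

# Rejects the two smallest p-values in this toy example.
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))
```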

  16. Fiscal decentralisation and infant mortality rate: the Colombian case.

    PubMed

    Soto, Victoria Eugenia; Farfan, Maria Isabel; Lorant, Vincent

    2012-05-01

    There is a paucity of research analysing the influence of fiscal decentralisation on health outcomes. Colombia is an interesting case study, as health expenditure there has been decentralising since 1993, leading to an improvement in health care insurance. However, it is unclear whether fiscal decentralisation has improved population health. We assess the effect of fiscal decentralisation of health expenditure on infant mortality rates in Colombia. Infant mortality rates for 1080 municipalities over a 10-year period (1998-2007) were related to fiscal decentralisation by using an unbalanced fixed-effect regression model with robust errors. Fiscal decentralisation was measured as the locally controlled health expenditure as a proportion of total health expenditure. We also evaluated the effect of transfers from central government and municipal institutional capacity. In addition, we compared the effect of fiscal decentralisation at different levels of municipal poverty. Fiscal decentralisation decreased infant mortality rates (the elasticity was equal to -0.06). However, this effect was stronger in non-poor municipalities (-0.12) than poor ones (-0.081). We conclude that decentralising the fiscal allocation of responsibilities to municipalities decreased infant mortality rates. However, this improved health outcome effect depended greatly on the socio-economic conditions of the localities. The policy instrument used by the Health Minister to evaluate municipal institutional capacity in the health sector needs to be revised. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    PubMed

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  18. Error identification, disclosure, and reporting: practice patterns of three emergency medicine provider types.

    PubMed

    Hobgood, Cherri; Xie, Jipan; Weiner, Bryan; Hooker, James

    2004-02-01

    To gather preliminary data on how the three major types of emergency medicine (EM) providers, physicians, nurses (RNs), and out-of-hospital personnel (EMTs), differ in error identification, disclosure, and reporting. A convenience sample of emergency department (ED) providers completed a brief survey designed to evaluate error frequency, disclosure, and reporting practices as well as error-based discussion and educational activities. One hundred sixteen subjects participated: 41 EMTs (35%), 33 RNs (28%), and 42 physicians (36%). Forty-five percent of EMTs, 56% of RNs, and 21% of physicians identified no clinical errors during the preceding year. When errors were identified, physicians learned of them via dialogue with RNs (58%), patients (13%), pharmacy (35%), and attending physicians (35%). For known errors, all providers were equally unlikely to inform the team caring for the patient. Disclosure to patients was limited and varied by provider type (19% EMTs, 23% RNs, and 74% physicians). Disclosure education was rare, with

  19. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    PubMed Central

    Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang

    2018-01-01

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technology method called Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In fact, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including acceleration-deceleration process, and instability of the angular rate on the navigation accuracy of RSSINS is deduced and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions. PMID:29734707

  20. Accuracy of cited “facts” in medical research articles: A review of study methodology and recalculation of quotation error rate

    PubMed Central

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or “facts,” are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree to which the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly major errors, 64.8% (56.1% to 73.5% at a 95% confidence interval), that is, cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are oversimplifications, overgeneralizations, or trivial inaccuracies, account for 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval). PMID:28910404

  1. Accuracy of cited "facts" in medical research articles: A review of study methodology and recalculation of quotation error rate.

    PubMed

    Mogull, Scott A

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree to which the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly major errors, 64.8% (56.1% to 73.5% at a 95% confidence interval), that is, cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are oversimplifications, overgeneralizations, or trivial inaccuracies, account for 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).

  2. The Relationship between Occurrence Timing of Dispensing Errors and Subsequent Danger to Patients under the Situation According to the Classification of Drugs by Efficacy.

    PubMed

    Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro

    2016-01-01

    There are many reports regarding various medical institutions' attempts at the prevention of dispensing errors. However, the relationship between the occurrence timing of dispensing errors and the subsequent danger to patients has not been studied with drugs classified by efficacy. Therefore, we analyzed the relationship between position and time regarding the occurrence of dispensing errors. Furthermore, we investigated the relationship between their occurrence timing and the danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group) and into three classes in terms of the occurrence timing of dispensing errors (initial phase errors, middle phase errors, final phase errors). Then, the rates of damage shifting from "dispensing errors" to "damage to patients" were compared as an index of danger between the two groups and among the three classes. Consequently, the rate of damage in the "efficacy similarity (-)" group was significantly higher than that in the "efficacy similarity (+)" group. Furthermore, the rate of damage was highest for "initial phase errors" and lowest for "final phase errors" among the three classes. From the results of this study, it became clear that the earlier dispensing errors occur, the more severe the damage to patients becomes.

  3. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  4. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  5. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency

    ERIC Educational Resources Information Center

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 second and 974 third graders. Results found a significant relationship between error rate, oral reading fluency, and reading comprehension performance, and…

  6. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  7. Predicting tropical cyclone intensity using satellite measured equivalent blackbody temperatures of cloud tops. [regression analysis

    NASA Technical Reports Server (NTRS)

    Gentry, R. C.; Rodgers, E.; Steranka, J.; Shenk, W. E.

    1978-01-01

    A regression technique was developed to forecast 24-hour changes in the maximum winds of weak (maximum winds less than or equal to 65 kt) and strong (maximum winds greater than 65 kt) tropical cyclones by utilizing satellite-measured equivalent blackbody temperatures around the storm, alone and together with the changes in maximum winds during the preceding 24 hours and the current maximum winds. Independent testing of these regression equations shows that the mean errors made by the equations are lower than the errors in forecasts made by persistence techniques.

  8. Secure Hashing of Dynamic Hand Signatures Using Wavelet-Fourier Compression with BioPhasor Mixing and [InlineEquation not available: see fulltext.] Discretization

    NASA Astrophysics Data System (ADS)

    Wai Kuan, Yip; Teoh, Andrew B. J.; Ngo, David C. L.

    2006-12-01

    We introduce a novel method for secure computation of biometric hash on dynamic hand signatures using BioPhasor mixing and [InlineEquation not available: see fulltext.] discretization. The use of BioPhasor as the mixing process provides a one-way transformation that precludes exact recovery of the biometric vector from compromised hashes and stolen tokens. In addition, our user-specific [InlineEquation not available: see fulltext.] discretization acts both as an error correction step and as a real-to-binary space converter. We also propose a new method of extracting a compressed representation of dynamic hand signatures using the discrete wavelet transform (DWT) and the discrete Fourier transform (DFT). Without the conventional use of dynamic time warping, the proposed method avoids storage of the user's hand signature template. This is an important consideration for protecting the privacy of the biometric owner. Our results show that the proposed method can produce stable and distinguishable bit strings with equal error rates (EERs) of [InlineEquation not available: see fulltext.] and [InlineEquation not available: see fulltext.] for random and skilled forgeries in the stolen token (worst-case) scenario, and [InlineEquation not available: see fulltext.] for both forgeries in the genuine token (optimal) scenario.

  9. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…

  10. Decrease in medical command errors with use of a "standing orders" protocol system.

    PubMed

    Holliman, C J; Wuerz, R C; Meador, S A

    1994-05-01

    The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine whether the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols as judged by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. A total of 2,001 ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate was decreased from the 2.6% rate in the previous study (P < .0001 by chi 2 analysis). The on-scene time interval did not increase with the "standing orders" system.(ABSTRACT TRUNCATED AT 250 WORDS)

  11. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  12. An Iterated Global Mascon Solution with Focus on Land Ice Mass Evolution

    NASA Technical Reports Server (NTRS)

    Luthcke, S. B.; Sabaka, T.; Rowlands, D. D.; Lemoine, F. G.; Loomis, B. D.; Boy, J. P.

    2012-01-01

    Land ice mass evolution is determined from a new GRACE global mascon solution. The solution is estimated directly from the reduction of the inter-satellite K-band range rate observations taking into account the full noise covariance, and formally iterating the solution. The new solution increases signal recovery while reducing the GRACE KBRR observation residuals. The mascons are estimated with 10-day and 1-arc-degree equal area sampling, applying anisotropic constraints for enhanced temporal and spatial resolution of the recovered land ice signal. The details of the solution are presented including error and resolution analysis. An Ensemble Empirical Mode Decomposition (EEMD) adaptive filter is applied to the mascon solution time series to compute timing of balance seasons and annual mass balances. The details and causes of the spatial and temporal variability of the land ice regions studied are discussed.

  13. Multi-modulus algorithm based on global artificial fish swarm intelligent optimization of DNA encoding sequences.

    PubMed

    Guo, Y C; Wang, H; Wu, H P; Zhang, M Q

    2015-12-21

    To address the drawbacks of the constant modulus algorithm (CMA) in equalizing multi-modulus signals, namely its large mean square error (MSE) and slow convergence speed, a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA encoding sequences (GAFS-DNA-MMA) was proposed. To improve the convergence rate and reduce the MSE, the proposed algorithm adopted an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, was used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of the MMA were obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA.
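
    The underlying MMA tap update, onto which the GAFS/DNA initialization is grafted, is a stochastic-gradient recursion driven by separate dispersion constants for the real and imaginary parts of the equalizer output. The sketch below shows only that baseline recursion with a conventional centre-spike initialization and assumed parameter values, not the paper's GAFS-DNA initialization.

```python
import numpy as np

def mma_equalize(x, n_taps=11, mu=1e-3, r2_real=1.0, r2_imag=1.0):
    """Stochastic-gradient multi-modulus algorithm (MMA).  The real and imaginary
    parts of the equalizer output are pushed toward separate dispersion constants
    r2_real / r2_imag; the weights start from a conventional centre spike."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0
    y = np.zeros(len(x), dtype=complex)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]          # regressor, newest sample first
        y[n] = np.vdot(w, u)                        # w^H u
        e = (y[n].real * (y[n].real**2 - r2_real)
             + 1j * y[n].imag * (y[n].imag**2 - r2_imag))
        w -= mu * np.conj(e) * u                    # gradient step on the MMA cost
    return y, w
```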

  14. Personal authentication using hand vein triangulation and knuckle shape.

    PubMed

    Kumar, Ajay; Prathyusha, K Venkata

    2009-09-01

    This paper presents a new approach to authenticate individuals using triangulation of hand vein images and simultaneous extraction of knuckle shape information. The proposed method is fully automated and employs palm dorsal hand vein images acquired from low-cost, near-infrared, contactless imaging. The knuckle tips are used as key points for the image normalization and extraction of the region of interest. The matching scores are generated in two parallel stages: (i) a hierarchical matching score from the four topologies of triangulation in the binarized vein structures and (ii) a score from the geometrical features consisting of knuckle point perimeter distances in the acquired images. The weighted score-level combination of these two matching scores is used to authenticate the individuals. The experimental results achieved with the proposed system using contactless palm dorsal hand vein images are promising (equal error rate of 1.14%) and suggest a more user-friendly alternative for user identification.

  15. A comparative study of restricted randomization procedures for multiarm trials with equal or unequal treatment allocation ratios.

    PubMed

    Ryeznik, Yevgen; Sverdlov, Oleksandr

    2018-06-04

    Randomization designs for multiarm clinical trials are increasingly used in practice, especially in phase II dose-ranging studies. Many new methods have been proposed in the literature; however, there is a lack of systematic, head-to-head comparison of the competing designs. In this paper, we systematically investigate statistical properties of various restricted randomization procedures for multiarm trials with fixed and possibly unequal allocation ratios. The design operating characteristics include measures of allocation balance, randomness of treatment assignments, variations in the allocation ratio, and statistical characteristics such as type I error rate and power. The results from the current paper should help clinical investigators select an appropriate randomization procedure for their clinical trial. We also provide a web-based R shiny application that can be used to reproduce all results in this paper and run simulations under additional user-defined experimental scenarios. Copyright © 2018 John Wiley & Sons, Ltd.

  16. Transmission over UWB channels with OFDM system using LDPC coding

    NASA Astrophysics Data System (ADS)

    Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech

    2009-06-01

    A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the transmission system with OFDM modulation was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It is placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, differing in the type of parity-check matrix: randomly generated, and deterministically constructed matrices optimized for a practical decoder architecture implemented in an FPGA device.

  17. Biometric recognition via fixation density maps

    NASA Astrophysics Data System (ADS)

    Rigas, Ioannis; Komogortsev, Oleg V.

    2014-05-01

    This work introduces and evaluates a novel eye movement-driven biometric approach that employs eye fixation density maps for person identification. The proposed feature offers a dynamic representation of the biometric identity, storing rich information regarding the behavioral and physical eye movement characteristics of the individuals. The innate ability of fixation density maps to capture the spatial layout of the eye movements, in conjunction with their probabilistic nature, makes them a particularly suitable option as an eye movement biometric trait in cases when free-viewing stimuli are presented. In order to demonstrate the effectiveness of the proposed approach, the method is evaluated on three different datasets containing a wide gamut of stimulus types, such as static images, video and text segments. The obtained results indicate a minimum EER (Equal Error Rate) of 18.3%, indicating the potential of fixation density maps as an enhancing biometric cue in identification scenarios in dynamic visual environments.
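
    A fixation density map is essentially a smoothed, normalized 2D histogram of fixation locations. The sketch below shows one plausible construction and a simple histogram-intersection comparison; the grid size, smoothing width, and similarity metric are assumptions rather than the paper's exact choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fix_x, fix_y, shape=(48, 64), sigma=1.5):
    """Accumulate fixation positions (already scaled to grid coordinates) on a
    coarse grid, smooth with a Gaussian, and normalize so the map sums to 1."""
    h, w = shape
    grid, _, _ = np.histogram2d(fix_y, fix_x, bins=[h, w], range=[[0, h], [0, w]])
    dmap = gaussian_filter(grid, sigma)
    return dmap / dmap.sum()

def similarity(map_a, map_b):
    """Histogram-intersection similarity between two density maps (one of several
    plausible comparison metrics)."""
    return float(np.minimum(map_a, map_b).sum())
```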

  18. A novel chaotic stream cipher and its application to palmprint template protection

    NASA Astrophysics Data System (ADS)

    Li, Heng-Jian; Zhang, Jia-Shu

    2010-04-01

    Based on a coupled nonlinear dynamic filter (NDF), a novel chaotic stream cipher is presented in this paper and employed to protect palmprint templates. The chaotic pseudorandom bit generator (PRBG) based on a coupled NDF, which is constructed in an inverse flow, can generate multiple bits at one iteration and satisfy the security requirement of cipher design. Then, the stream cipher is employed to generate cancelable competitive code palmprint biometrics for template protection. The proposed cancelable palmprint authentication system depends on two factors: the palmprint biometric and the password/token. Therefore, the system provides high-confidence and also protects the user's privacy. The experimental results of verification on the Hong Kong PolyU Palmprint Database show that the proposed approach has a large template re-issuance ability and the equal error rate can achieve 0.02%. The performance of the palmprint template protection scheme proves the good practicability and security of the proposed stream cipher.
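
    As a toy illustration of the template-protection idea (keystream generation from a chaotic map followed by XOR masking of the binary template), the sketch below substitutes a simple logistic map for the paper's coupled-NDF generator, which is considerably more elaborate and produces several bits per iteration.

```python
import numpy as np

def keystream_bits(n_bits, x0=0.631, r=3.99):
    """Toy chaotic keystream from a logistic map: iterate the map and take one
    bit per iteration by thresholding the state."""
    x = x0
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

def protect_template(template_bits, key_bits):
    """Stream-cipher protection of a binary template: XOR with the keystream.
    Applying the same keystream again recovers the original template."""
    return np.bitwise_xor(template_bits, key_bits)
```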

  19. A multimodal biometric authentication system based on 2D and 3D palmprint features

    NASA Astrophysics Data System (ADS)

    Aggithaya, Vivek K.; Zhang, David; Luo, Nan

    2008-03-01

    This paper presents a new personal authentication system that simultaneously exploits 2D and 3D palmprint features. Here, we aim to improve the accuracy and robustness of existing palmprint authentication systems using 3D palmprint features. The proposed system uses an active stereo technique, structured light, to capture a 3D image (range data) of the palm and a registered intensity image simultaneously. A surface-curvature-based method is employed to extract features from the 3D palmprint, and a Gabor-feature-based competitive coding scheme is used for the 2D representation. We individually analyze these representations and attempt to combine them with a score-level fusion technique. Our experiments on a database of 108 subjects achieve a significant improvement in performance (Equal Error Rate) with the integration of 3D features as compared to the case when 2D palmprint features alone are employed.

  20. Performance analysis of fiber-based free-space optical communications with coherent detection spatial diversity.

    PubMed

    Li, Kangning; Ma, Jing; Tan, Liying; Yu, Siyuan; Zhai, Chao

    2016-06-10

    The performances of fiber-based free-space optical (FSO) communications over gamma-gamma distributed turbulence are studied for multiple aperture receiver systems. The equal gain combining (EGC) technique is considered as a practical scheme to mitigate the atmospheric turbulence. Bit error rate (BER) performances for binary-phase-shift-keying-modulated coherent detection fiber-based free-space optical communications are derived and analyzed for EGC diversity receptions through an approximation method. To show the net diversity gain of a multiple aperture receiver system, BER performances of EGC are compared with a single monolithic aperture receiver system with the same total aperture area (same average total incident optical power on the aperture surface) for fiber-based free-space optical communications. The analytical results are verified by Monte Carlo simulations. System performances are also compared for EGC diversity coherent FSO communications with or without considering fiber-coupling efficiencies.
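
    A Monte Carlo check of EGC reception over gamma-gamma turbulence can be written compactly. The sketch below samples the irradiance as a product of two unit-mean gamma variates, uses one common definition of the EGC combiner SNR, and evaluates the coherent BPSK error probability; the scintillation parameters and the SNR definition are assumptions, not the paper's analytical derivation.

```python
import numpy as np
from scipy.special import erfc

def egc_ber_mc(n_apertures=4, avg_snr_db=10.0, alpha=4.0, beta=1.9,
               n_trials=200_000, seed=1):
    """Monte Carlo BER of coherent BPSK with equal-gain combining over
    independent gamma-gamma turbulence.  Irradiance = product of two unit-mean
    gamma variates; the combined SNR uses the common EGC definition
    (sum_i sqrt(snr_i))**2 / N."""
    rng = np.random.default_rng(seed)
    avg_snr = 10.0 ** (avg_snr_db / 10.0)
    ix = rng.gamma(alpha, 1.0 / alpha, size=(n_trials, n_apertures))
    iy = rng.gamma(beta, 1.0 / beta, size=(n_trials, n_apertures))
    snr_branch = avg_snr * ix * iy
    snr_egc = np.sum(np.sqrt(snr_branch), axis=1) ** 2 / n_apertures
    ber = 0.5 * erfc(np.sqrt(snr_egc))   # BPSK: Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR))
    return float(ber.mean())

print(egc_ber_mc())
```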

  1. Amplify-and-forward cooperative diversity for green UWB-based WBSNs.

    PubMed

    Shaban, Heba; Abou El-Nasr, Mohamad

    2013-01-01

    This paper proposes a novel green cooperative diversity technique based on suboptimal template-based ultra-wideband (UWB) wireless body sensor networks (WBSNs) using amplify-and-forward (AF) relays. In addition, it analyzes the bit-error-rate (BER) performance of the proposed nodes. The analysis is based on the moment-generating function (MGF) of the total signal-to-noise ratio (SNR) at the destination. It also provides an approximate value for the total SNR. The analysis studies the performance of equally correlated binary pulse position modulation (EC-BPPM) assuming the sinusoidal and square suboptimal template pulses. Numerical results are provided for the performance evaluation of optimal and suboptimal template-based nodes with and without relay cooperation. Results show that one relay node provides ~23 dB performance enhancement at 1e-3 BER, which mitigates the effect of the undesirable non-line-of-sight (NLOS) links in WBSNs.

  2. Amplify-and-Forward Cooperative Diversity for Green UWB-Based WBSNs

    PubMed Central

    2013-01-01

    This paper proposes a novel green cooperative diversity technique based on suboptimal template-based ultra-wideband (UWB) wireless body sensor networks (WBSNs) using amplify-and-forward (AF) relays. In addition, it analyzes the bit-error-rate (BER) performance of the proposed nodes. The analysis is based on the moment-generating function (MGF) of the total signal-to-noise ratio (SNR) at the destination. It also provides an approximate value for the total SNR. The analysis studies the performance of equally correlated binary pulse position modulation (EC-BPPM) assuming the sinusoidal and square suboptimal template pulses. Numerical results are provided for the performance evaluation of optimal and suboptimal template-based nodes with and without relay cooperation. Results show that one relay node provides ~23 dB performance enhancement at 1e-3 BER, which mitigates the effect of the undesirable non-line-of-sight (NLOS) links in WBSNs. PMID:24307880

  3. Offline Signature Verification Using the Discrete Radon Transform and a Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Coetzer, J.; Herbst, B. M.; du Preez, J. A.

    2004-12-01

    We developed a system that automatically authenticates offline handwritten signatures using the discrete Radon transform (DRT) and a hidden Markov model (HMM). Given the robustness of our algorithm and the fact that only global features are considered, satisfactory results are obtained. Using a database of 924 signatures from 22 writers, our system achieves an equal error rate (EER) of 18% when only high-quality forgeries (skilled forgeries) are considered and an EER of 4.5% in the case of only casual forgeries. These signatures were originally captured offline. Using another database of 4800 signatures from 51 writers, our system achieves an EER of 12.2% when only skilled forgeries are considered. These signatures were originally captured online and then digitally converted into static signature images. These results compare well with the results of other algorithms that consider only global features.

  4. VAMPnets for deep learning of molecular kinetics.

    PubMed

    Mardt, Andreas; Pasquali, Luca; Wu, Hao; Noé, Frank

    2018-01-02

    There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering the dimension-reduced data, and estimation of a Markov state model or related model of the interconversion rates between molecular structures. This handcrafted approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs equally or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.

  5. Hand biometric recognition based on fused hand geometry and vascular patterns.

    PubMed

    Park, GiTae; Kim, Soowon

    2013-02-28

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used, and for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points. This system can be configured for low-cost devices. Our multimodal biometric-approach hand-geometry (the side view of the hand and the back of hand) and vascular-pattern recognition method performs at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.

  6. Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns

    PubMed Central

    Park, GiTae; Kim, Soowon

    2013-01-01

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used, and for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points. This system can be configured for low-cost devices. Our multimodal biometric-approach hand-geometry (the side view of the hand and the back of hand) and vascular-pattern recognition method performs at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%. PMID:23449119

  7. Effect of atmospheric turbulence on the bit error probability of a space to ground near infrared laser communications link using binary pulse position modulation and an avalanche photodiode detector

    NASA Technical Reports Server (NTRS)

    Safren, H. G.

    1987-01-01

    The effect of atmospheric turbulence on the bit error rate of a space-to-ground near infrared laser communications link is investigated, for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.
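
    The practical appeal of the final result is that the error probability reduces to a single error-function (Q-function) evaluation. The snippet below only shows that evaluation; the argument passed to it stands in for the report's derived function of signal strength and turbulence, which is not reproduced here.

```python
from math import erfc, sqrt

def q_function(x):
    """Gaussian tail probability Q(x), evaluated with a single error-function call."""
    return 0.5 * erfc(x / sqrt(2.0))

# The report's closed-form result has the shape BER = Q(x); the argument x is its
# derived function of signal strength (photons/bit) and turbulence level.
print(q_function(4.75))   # roughly 1e-6, i.e. about one error in a million bits
```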

  8. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation is highly resistant to noise and interference because it fits the online test curve in several segments, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature-point extraction error of the gradient-maximum optimal method is approximately ±0.100, while that of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.
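
    A brief sketch of the core idea, assuming a synthetic noisy straight segment rather than the brake adjuster's separation-test data: rasterise the test curve into an image, apply the Radon transform, and read the dominant line's orientation from the brightest sinogram bin.

```python
# Sketch of using the Radon transform to find the dominant straight segment
# of a noisy test curve. The synthetic curve below is a placeholder, not the
# brake adjuster's separation-test data.
import numpy as np
from skimage.transform import radon

# Rasterise a noisy straight segment into a binary image.
n = 200
img = np.zeros((n, n))
x = np.arange(n)
y = np.clip((0.7 * x + 20 + np.random.default_rng(2).normal(0, 2, n)).astype(int), 0, n - 1)
img[y, x] = 1.0

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(img, theta=theta, circle=False)

# The brightest sinogram bin gives the line's orientation (and offset).
_, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print(f"dominant line angle ≈ {theta[angle_idx]:.1f}°")
```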

  9. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand, the mathematical complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches, syndrome zero sequence deactivation and path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
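
    For reference, the sketch below is a minimal hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 convolutional code (generator polynomials 7 and 5 octal); it illustrates the maximum-likelihood baseline the abstract refers to, not the adaptive syndrome-based decoder proposed in the paper.

```python
# Minimal hard-decision Viterbi decoder for the textbook rate-1/2,
# constraint-length-3 convolutional code (generators 7 and 5 octal).
import numpy as np

G = (0b111, 0b101)          # generator polynomials
N_STATES = 4                # 2^(K-1) with K = 3

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]   # parity of tapped bits
        state = reg >> 1
    return out

def viterbi_decode(rx, n_bits):
    metric = np.full(N_STATES, np.inf); metric[0] = 0      # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for t in range(n_bits):
        r = rx[2 * t:2 * t + 2]
        new_metric = np.full(N_STATES, np.inf)
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            for b in (0, 1):
                reg = (b << 2) | state
                expected = [bin(reg & g).count("1") & 1 for g in G]
                nxt = reg >> 1
                m = metric[state] + sum(x != y for x, y in zip(r, expected))
                if m < new_metric[nxt]:                     # keep the survivor path
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] ^= 1                  # flip one channel bit to simulate a transmission error
print(viterbi_decode(rx, len(msg)) == msg)   # True: the single error is corrected
```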

  10. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, Hong Yi; Milne, Alice; Webster, Richard

    2016-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σK²), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σK² ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating it for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECa values were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993 but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the value of 0 expected for a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
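
    The cross-validation diagnostics described above reduce to a few lines once the leave-one-out predictions and kriging variances are available. The sketch below uses synthetic placeholder values (with deliberately heavy-tailed errors) rather than the salinity data, and computes the MSDR, the MedSDR against the 0.455 reference, and the excess kurtosis of the errors.

```python
# Sketch of the cross-validation diagnostics described above: given
# leave-one-out kriged predictions, their kriging variances, and the observed
# values (all placeholders here), compute the mean and median squared
# deviation ratios and the kurtosis of the errors. Under normal errors the
# SDR is chi-squared with 1 d.f., so its median should be near 0.455.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
observed = rng.normal(size=525)
kriging_variance = np.full(525, 1.0)                         # placeholder sigma_K^2
predicted = observed + rng.standard_t(df=3, size=525) * 0.8  # heavy-tailed errors

error = observed - predicted
sdr = error**2 / kriging_variance

print(f"MSDR   = {sdr.mean():.3f}   (ideal: 1)")
print(f"MedSDR = {np.median(sdr):.3f} (ideal for normal errors: 0.455)")
print(f"excess kurtosis of errors = {kurtosis(error):.2f} (0 for normal)")
```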

  11. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, HongYi; Milne, Alice; Webster, Richard

    2015-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σK²), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σK² ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating it for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECa values were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993 but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the value of 0 expected for a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.

  12. Automatic latency equalization in VHDL-implemented complex pipelined systems

    NASA Astrophysics Data System (ADS)

    Zabołotny, Wojciech M.

    2016-09-01

    In pipelined data processing systems it is very important to ensure that parallel paths delay data by the same number of clock cycles. If that condition is not met, the processing blocks receive data that are not properly aligned in time and produce incorrect results. Manual equalization of latencies is a tedious and error-prone task. This paper presents an automatic method of latency equalization in systems described in VHDL. The proposed method uses simulation to measure latencies and to verify the introduced correction. The solution is portable between different simulation and synthesis tools. The method does not increase the complexity of the synthesized design compared to a solution based on manual latency adjustment. An example implementation of the proposed methodology, together with a simple design demonstrating its use, is available as an open source project under the BSD license.
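
    The balancing arithmetic behind latency equalisation is simple: pad every parallel path with enough delay registers that all paths match the slowest one. The toy sketch below shows only this bookkeeping, with made-up path names and latencies; the paper's simulation-based measurement and VHDL code generation are not reproduced here.

```python
# Toy illustration of the balancing arithmetic behind latency equalisation:
# every parallel path must be padded with delay registers so that all paths
# reach the merging block after the same number of clock cycles. The path
# names and latencies are made up.
measured_latency = {"filter_path": 7, "bypass_path": 1, "cordic_path": 12}  # cycles

target = max(measured_latency.values())
extra_registers = {name: target - lat for name, lat in measured_latency.items()}
print(extra_registers)   # {'filter_path': 5, 'bypass_path': 11, 'cordic_path': 0}
```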

  13. A minimax technique for time-domain design of preset digital equalizers using linear programming

    NASA Technical Reports Server (NTRS)

    Vaughn, G. L.; Houts, R. C.

    1975-01-01

    A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
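
    The minimax design described above maps naturally onto a linear program: minimise a bound t on the absolute error between the equalised response and the target waveform. The sketch below sets this up with scipy.optimize.linprog for a hypothetical channel impulse response, tap count, and raised-cosine roll-off; none of these values come from the paper.

```python
# Sketch of minimax (Chebyshev) FIR equalizer design as a linear program:
# choose taps h that minimise the maximum absolute error between the
# equalised channel response and a raised-cosine target. The channel,
# tap count, and roll-off are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

def raised_cosine(t, beta=0.35):
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * beta * t) ** 2
    denom = np.where(np.abs(denom) < 1e-8, 1e-8, denom)   # guard the singularity
    return np.sinc(t) * np.cos(np.pi * beta * t) / denom

channel = np.array([0.1, 0.9, 0.3, -0.1])    # hypothetical channel impulse response
n_taps = 9
n_out = len(channel) + n_taps - 1

# Convolution matrix C so that (channel * h) = C @ h.
C = np.array([[channel[n - k] if 0 <= n - k < len(channel) else 0.0
               for k in range(n_taps)] for n in range(n_out)])
target = raised_cosine(np.arange(n_out) - n_out // 2)

# Minimise t subject to  -t <= C h - target <= t   (variables x = [h, t]).
A_ub = np.block([[C, -np.ones((n_out, 1))],
                 [-C, -np.ones((n_out, 1))]])
b_ub = np.concatenate([target, -target])
cost = np.zeros(n_taps + 1); cost[-1] = 1.0

res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_taps + [(0, None)], method="highs")
h, max_err = res.x[:-1], res.x[-1]
print(f"minimax error = {max_err:.4f}")
```

    The optimal value of t returned by the solver is the minimax error achieved by the tap vector h, i.e. the quantity the paper's design criterion minimises.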

  14. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
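
    For orientation, the bound in question has the standard form P_e <= exp(-N·E_r(R)) with E_r(R) = max over 0 <= rho <= 1 of [E_0(rho) - rho·R]. The snippet below evaluates this exponent numerically for a binary symmetric channel with uniform inputs, a textbook special case chosen here only for illustration rather than the general ensemble analysed in the paper.

```python
# Numerical sketch of the random coding exponent E_r(R) for a binary
# symmetric channel with uniform inputs: P_e <= exp(-N * E_r(R)),
# E_r(R) = max_{0<=rho<=1} [E_0(rho) - rho * R].
import numpy as np

def E0(rho, p):
    """Gallager's E_0 for a BSC with crossover probability p, uniform inputs."""
    inner = 0.5 * ((1 - p) ** (1 / (1 + rho)) + p ** (1 / (1 + rho)))
    return -np.log(2 * inner ** (1 + rho))

def random_coding_exponent(R, p, rhos=np.linspace(0, 1, 1001)):
    return max(E0(rho, p) - rho * R for rho in rhos)

p = 0.05                                  # crossover probability (example value)
for R in (0.1, 0.3, 0.5):                 # rate in nats per channel use
    print(f"R = {R:.1f} nats: E_r(R) = {random_coding_exponent(R, p):.4f}")
```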

  15. A prospective audit of a nurse independent prescribing within critical care.

    PubMed

    Carberry, Martin; Connelly, Sarah; Murphy, Jennifer

    2013-05-01

    To determine the prescribing activity of different staff groups within an intensive care unit (ICU) and combined high dependency unit (HDU), namely trainee and consultant medical staff and advanced nurse practitioners in critical care (ANPCC); to determine the number and type of prescription errors; to compare error rates between prescribing groups; and to raise awareness of prescribing activity within critical care. The introduction of government legislation has led to the development of non-medical prescribing roles in acute care. This has provided an opportunity for the ANPCC working in critical care to develop a prescribing role. The audit was performed over 7 days (Monday-Sunday), on rolling days over a 7-week period in September and October 2011, in three ICUs. All drug entries made on the ICU prescription by the three groups, trainee medical staff, ANPCCs and consultant anaesthetists, were audited once for errors. Data were collected by reviewing all drug entries for errors, namely patient data, drug dose, concentration, rate and frequency, legibility and prescriber signature. A paper data collection tool was used initially; data were later entered into a Microsoft Access database. A total of 1418 drug entries were audited from 77 patient prescription Cardexes. Error rates were reported as 40 errors in 1418 prescriptions (2·8%): ANPCC errors, n = 2 in 388 prescriptions (0·6%); trainee medical staff errors, n = 33 in 984 (3·4%); consultant errors, n = 5 in 73 (6·8%). The error rates differed significantly between prescribing groups (p < 0·01). This audit shows that prescribing error rates were low (2·8%). With the lowest error rate, the nurse practitioners were, in terms of errors alone, at least as diligent in their prescribing as the other groups within this audit. National data are required in order to benchmark independent nurse prescribing practice in critical care. These findings could be used to inform research and role development within critical care. © 2012 The Authors. Nursing in Critical Care © 2012 British Association of Critical Care Nurses.

  16. 5 CFR 531.406 - Creditable service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... pay is equal to or greater than the rate of basic pay for step 4 of the applicable grade and less than... period for an employee whose rate of basic pay is equal to or greater than the rate of basic pay for step....406 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER THE...

  17. Effects of stimulus order on discrimination processes in comparative and equality judgements: data and models.

    PubMed

    Dyjas, Oliver; Ulrich, Rolf

    2014-01-01

    In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.
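
    As a generic illustration of the psychometric functions discussed above (and not an implementation of the Sensation Weighting or Internal Reference models), the sketch below simulates a comparative duration judgement with Gaussian internal noise; giving the two presentation orders different, arbitrarily chosen noise levels mimics a Type B effect as a difference in the slopes of the resulting functions.

```python
# Generic illustration of psychometric functions for a comparative duration
# judgement and of an order-dependent sensitivity (Type B) effect. The noise
# levels for the two presentation orders are arbitrary stand-ins.
import numpy as np
from scipy.stats import norm

standard = 500.0                       # ms
comparison = np.linspace(400, 600, 9)  # ms
sigma = {"standard first": 40.0, "comparison first": 60.0}  # assumed noise (ms)

for order, s in sigma.items():
    # P("comparison longer") for a difference-based decision with Gaussian noise.
    p_longer = norm.cdf((comparison - standard) / s)
    print(order, np.round(p_longer, 2))
```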

  18. Evaluating the accuracy of sampling to estimate central line-days: simplification of the National Healthcare Safety Network surveillance methods.

    PubMed

    Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S

    2013-03-01

    To evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data to estimate central line-days (CLDs). Obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants and CLABSI numerators and facility and location characteristics from the National Healthcare Safety Network (NHSN). Convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had CLD percentage error of less than or equal to ±5%. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.
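
    The sampling idea being evaluated can be sketched in a few lines: estimate the month's central line-days by scaling the mean of the sampled days' counts up to the full month, then compare with the actual total. The daily counts and the chosen sample days below are synthetic placeholders, not NHSN data.

```python
# Sketch of the sampling idea evaluated above: estimate a month's central
# line-days (CLD) from a two-day ("day-pair") sample by scaling the mean
# daily count to the full month, then compute the percentage error against
# the actual total. The daily counts below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
daily_cld = rng.poisson(lam=12, size=30)          # hypothetical daily CLD counts

sample_days = (6, 20)                             # e.g. one day in weeks 1 and 3
estimated = np.mean([daily_cld[d] for d in sample_days]) * len(daily_cld)
actual = daily_cld.sum()

pct_error = 100 * (estimated - actual) / actual
print(f"actual={actual}, estimated={estimated:.0f}, error={pct_error:+.1f}%")
```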

  19. Accuracy and sampling error of two age estimation techniques using rib histomorphometry on a modern sample.

    PubMed

    García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F

    2016-02-01

    Most age estimation methods prove problematic when applied to highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques, particularly when they are used in legal settings (Crowder and Rosella, 2007). This study tested the Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments, and two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact and fragmented osteon density, and osteon population density) were calculated for each section, and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systematic underestimation of the individuals' ages (by up to 43 years), although a general improvement in accuracy was observed when applying the Stout et al. (1994) formula. Error rates increase with age, with the oldest individual showing extreme differences between real and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages, suggesting that any fragment of the rib could be used without introducing significant error. Yet a larger sample should be used to confirm these results. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  20. Effects of hair, clothing, and headgear on localization of three-dimensional sounds Part IIb

    NASA Astrophysics Data System (ADS)

    Riederer, Klaus A. J.

    2003-10-01

    Seven 20-25-year-old normal-hearing (<=20 dBHL) native male undergraduates listened twice to treatments of 85 virtual source locations in a large dark anechoic chamber. The 3-D stimuli were newly generated white noise bursts, amplitude modulated (40-Hz sine) and repeated after a pause (total duration 3×275=825 ms), HRTF-convolved and headphone-equalized (Sennheiser HD580). The HRTFs were measured from a Cortex dummy head wearing different garments: 1=alpaca pullover only; 2=1+curly pony-tailed thick hair+eye-glasses; 3=1+long thin hair (ear-covering); 4=1+men's trilby; 5=2+bicycle helmet+jacket [Riederer, J. Acoust. Soc. Am., this issue]. Perceived directions were indicated by placing a tailored digitizer stylus over an illuminated ball that was darkened after the response. Subjects did the experiments over three days, each consisting of a 2-h session of several randomized sets with multiple breaks. Azimuth and elevation errors were investigated separately in a factorial within-subjects ANOVA, showing strong dependence (p<=0.004) on all main effects and interactions (garment, elevation, azimuth). The grand mean errors were approximately 16°-19°. Confused angles were retained around the +/-90° interaural axis, and cos(elevation) weighting was applied to azimuth errors. The total front-back/back-front confusion rate was 18.38% and up-down/down-up 12.21%. The confusions (except left-right/right-left, 2.07%) and reaction times depended strongly on azimuth (main effect) and garment (interaction). [Work supported by Graduate School of Electronics, Telecommunication and Automation.]
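
    The stimulus construction described above (white-noise bursts, 40-Hz amplitude modulation, repetition after a pause, HRTF convolution) can be sketched as follows; the sampling rate, modulation depth, and the random placeholder HRIRs are assumptions, since the study's measured dummy-head HRTFs are not available here.

```python
# Sketch of constructing a stimulus like the one described: a 275 ms white
# noise burst, amplitude-modulated with a 40 Hz sine, repeated after a pause,
# then convolved with left/right head-related impulse responses (HRIRs).
# Sampling rate, pause length, and the random HRIRs are placeholders.
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000
t = np.arange(int(0.275 * fs)) / fs
burst = np.random.default_rng(5).normal(size=t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * 40 * t))
pause = np.zeros(int(0.275 * fs))
stimulus = np.concatenate([burst, pause, burst])          # burst - pause - burst, 825 ms

hrir_left = np.random.default_rng(6).normal(size=256) * np.hanning(256)   # placeholder
hrir_right = np.random.default_rng(7).normal(size=256) * np.hanning(256)  # placeholder
binaural = np.stack([fftconvolve(stimulus, hrir_left),
                     fftconvolve(stimulus, hrir_right)], axis=1)
print(binaural.shape)   # (samples, 2), ready for headphone presentation
```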
