NASA Astrophysics Data System (ADS)
Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi
2016-08-01
This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation and the results are then analyzed by Monte Carlo based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross-section and neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We clarify that SERs decreased in response to an increase in the BOX thickness for SOTB while SERs in UTBB are independent of BOX thickness. We also discover SOTB develops a higher tolerance to soft errors when reverse body bias is applied while UTBB become more susceptible.
NASA Astrophysics Data System (ADS)
Sharma, Prabhat Kumar
2016-11-01
A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment
NASA Astrophysics Data System (ADS)
Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.
2016-11-01
This paper reports five years of real-time soft error rate experimentation conducted with the same setup at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As the result, the neutron and alpha component of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by the daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
Symbol Error Rate of Underlay Cognitive Relay Systems over Rayleigh Fading Channel
NASA Astrophysics Data System (ADS)
Ho van, Khuong; Bao, Vo Nguyen Quoc
Underlay cognitive systems allow secondary users (SUs) to access the licensed band allocated to primary users (PUs) for better spectrum utilization with the power constraint imposed on SUs such that their operation does not harm the normal communication of PUs. This constraint, which limits the coverage range of SUs, can be offset by relaying techniques that take advantage of shorter range communication for lower path loss. Symbol error rate (SER) analysis of underlay cognitive relay systems over fading channel has not been reported in the literature. This paper fills this gap. The derived SER expressions are validated by simulations and show that underlay cognitive relay systems suffer a high error floor for any modulation level.
Carrier recovery methods for a dual-mode modem: A design approach
NASA Technical Reports Server (NTRS)
Richards, C. W.; Wilson, S. G.
1984-01-01
A dual mode model with selectable QPSK or 16-QASK modulation schemes is discussed. The theoretical reasoning as well as the practical trade-offs made during the development of a modem are presented, with attention given to the carrier recovery method used for coherent demodulation. Particular attention is given to carrier recovery methods that can provide little degradation due to phase error for both QPSK and 16-QASK, while being insensitive to the amplitude characteristic of a 16-QASK modulation scheme. A computer analysis of the degradation is symbol error rate (SER) for QPSK and 16-QASK due to phase error is prresented. Results find that an energy increase of roughly 4 dB is needed to maintain a SER of 1X10(-5) for QPSK with 20 deg of phase error and 16-QASK with 7 deg phase error.
Decoding algorithm for vortex communications receiver
NASA Astrophysics Data System (ADS)
Kupferman, Judy; Arnon, Shlomi
2018-01-01
Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.
Neutron beam irradiation study of workload dependence of SER in a microprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalak, Sarah E; Graves, Todd L; Hong, Ted
It is known that workloads are an important factor in soft error rates (SER), but it is proving difficult to find differentiating workloads for microprocessors. We have performed neutron beam irradiation studies of a commercial microprocessor under a wide variety of workload conditions from idle, performing no operations, to very busy workloads resembling real HPC, graphics, and business applications. There is evidence that the mean times to first indication of failure, MTFIF defined in Section II, may be different for some of the applications.
Adaptively loaded IM/DD optical OFDM based on set-partitioned QAM formats.
Zhao, Jian; Chen, Lian-Kuan
2017-04-17
We investigate the constellation design and symbol error rate (SER) of set-partitioned (SP) quadrature amplitude modulation (QAM) formats. Based on the SER analysis, we derive the adaptive bit and power loading algorithm for SP QAM based intensity-modulation direct-detection (IM/DD) orthogonal frequency division multiplexing (OFDM). We experimentally show that the proposed system significantly outperforms the conventional adaptively-loaded IM/DD OFDM and can increase the data rate from 36 Gbit/s to 42 Gbit/s in the presence of severe dispersion-induced spectral nulls after 40-km single-mode fiber. It is also shown that the adaptive algorithm greatly enhances the tolerance to fiber nonlinearity and allows for more power budget.
NASA Astrophysics Data System (ADS)
Fulkerson, David E.
2010-02-01
This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.
Laser as a Tool to Study Radiation Effects in CMOS
NASA Astrophysics Data System (ADS)
Ajdari, Bahar
Energetic particles from cosmic ray or terrestrial sources can strike sensitive areas of CMOS devices and cause soft errors. Understanding the effects of such interactions is crucial as the device technology advances, and chip reliability has become more important than ever. Particle accelerator testing has been the standard method to characterize the sensitivity of chips to single event upsets (SEUs). However, because of their costs and availability limitations, other techniques have been explored. Pulsed laser has been a successful tool for characterization of SEU behavior, but to this day, laser has not been recognized as a comparable method to beam testing. In this thesis, I propose a methodology of correlating laser soft error rate (SER) to particle beam gathered data. Additionally, results are presented showing a temperature dependence of SER and the "neighbor effect" phenomenon where due to the close proximity of devices a "weakening effect" in the ON state can be observed.
Nakamura, Moriya; Kamio, Yukiyoshi; Miyazaki, Tetsuya
2008-07-07
We experimentally demonstrated linewidth-tolerant 10-Gbit/s (2.5-Gsymbol/s) 16-quadrature amplitude modulation (QAM) by using a distributed-feedback laser diode (DFB-LD) with a linewidth of 30 MHz. Error-free operation, a bit-error rate (BER) of <10(-9) was achieved in transmission over 120 km of standard single mode fiber (SSMF) without any dispersion compensation. The phase-noise canceling capability provided by a pilot-carrier and standard electronic pre-equalization to suppress inter-symbol interference (ISI) gave clear 16-QAM constellations and floor-less BER characteristics. We evaluated the BER characteristics by real-time measurement of six (three different thresholds for each I- and Q-component) symbol error rates (SERs) with simultaneous constellation observation.
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameters learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence capability that the output symbol error rate (SER) is always less than the input SER if the input SER is below a threshold, can be achieved. Then, the BG soft-switching technique is employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter is to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as stochastic quadratic distance and dual mode constant modulus algorithm, in terms of both convergence performance and SER performance, for nonlinear equalization.
Performance analysis of optimal power allocation in wireless cooperative communication systems
NASA Astrophysics Data System (ADS)
Babikir Adam, Edriss E.; Samb, Doudou; Yu, Li
2013-03-01
Cooperative communication has been recently proposed in wireless communication systems for exploring the inherent spatial diversity in relay channels.The Amplify-and-Forward (AF) cooperation protocols with multiple relays have not been sufficiently investigated even if it has a low complexity in term of implementation. We consider in this work a cooperative diversity system in which a source transmits some information to a destination with the help of multiple relay nodes with AF protocols and investigate the optimality of allocating powers both at the source and the relays system by optimizing the symbol error rate (SER) performance in an efficient way. Firstly we derive a closedform SER formulation for MPSK signal using the concept of moment generating function and some statistical approximations in high signal to noise ratio (SNR) for the system under studied. We then find a tight corresponding lower bound which converges to the same limit as the theoretical upper bound and develop an optimal power allocation (OPA) technique with mean channel gains to minimize the SER. Simulation results show that our scheme outperforms the equal power allocation (EPA) scheme and is tight to the theoretical approximation based on the SER upper bound in high SNR for different number of relays.
Healey, Natasha; McLoone, Eibhlin; Mahon, Gerald; Jackson, A Jonathan; Saunders, Kathryn J; McClelland, Julie F
2013-04-26
We explored associations between refractive error and foveal hypoplasia in infantile nystagmus syndrome (INS). We recruited 50 participants with INS (albinism n = 33, nonalbinism infantile nystagmus [NAIN] n = 17) aged 4 to 48 years. Cycloplegic refractive error and logMAR acuity were obtained. Spherical equivalent (SER), most ametropic meridian (MAM) refractive error, and better eye acuity (VA) were used for analyses. High resolution spectral-domain optical coherence tomography (SD-OCT) was used to obtain foveal scans, which were graded using the Foveal Hypoplasia Grading Scale. Associations between grades of severity of foveal hypoplasia, and refractive error and VA were explored. Participants with more severe foveal hypoplasia had significantly higher MAMs and SERs (Kruskal-Wallis H test P = 0.005 and P = 0.008, respectively). There were no statistically significant associations between foveal hypoplasia and cylindrical refractive error (Kruskal-Wallis H test P = 0.144). Analyses demonstrated significant differences between participants with albinism or NAIN in terms of SER and MAM (Mann-Whitney U test P = 0.001). There were no statistically significant differences between astigmatic errors between participants with albinism and NAIN. Controlling for the effects of albinism, results demonstrated no significant associations between SER, and MAM and foveal hypoplasia (partial correlation P > 0.05). Poorer visual acuity was associated statistically significantly with more severe foveal hypoplasia (Kruskal-Wallis H test P = 0.001) and with a diagnosis of albinism (Mann-Whitney U test P = 0.001). Increasing severity of foveal hypoplasia is associated with poorer VA, reflecting reduced cone density in INS. Individuals with INS also demonstrate a significant association between more severe foveal hypoplasia and increasing hyperopia. However, in the absence of albinism, there is no significant relation between refractive outcome and degree of foveal hypoplasia, suggesting that foveal maldevelopment in isolation does not impair significantly the emmetropization process. It likely is that impaired emmetropization evidenced in the albinism group may be attributed to the whole eye effect of albinism.
Changes in refractive errors related to spectacle correction of hyperopia.
Yang, Hee Kyung; Choi, Jung Yeon; Kim, Dae Hyun; Hwang, Jeong-Min
2014-01-01
Hyperopic undercorrection is a common clinical practice. However, less is known of its effect on the change in refractive errors and emmetropization throughout the later years of childhood. To evaluate the effect of spectacle correction on the change in refractive errors in hyperopic children less than 12 years of age with or without strabismus. A retrospective cohort study was performed by a computer based search of the hospital database of patients with hyperopia, accommodative esotropia or exotropia. A total of 150 hyperopic children under 12 years of age were included. Patients were classified into four groups: 1) accommodative esotropia with full correction of hyperopia, 2) exotropia with undercorrection of hyperopia, 3) orthotropia with full correction of hyperopia, 4) orthotropia with undercorrection of hyperopia. The 4 groups were matched by initial age on examination and spherical equivalent refractive errors (SER). The main outcome measure was the change in SER (Diopter/year) in both eyes after two years of follow-up. An overall negative shift in SER was noted during the follow-up period in all groups, except for the group with esotropia and full correction. The mean negative shift of hyperopia was more rapid in groups receiving undercorrection of hyperopia with or without strabismus. The amount of undercorrection of hyperopia was positively correlated to the magnitude of decrease in hyperopia in all patients (r = 0.289, P<0.001) and in the subgroup of patients with orthotropia (r = 0.304, P = 0.011). The amount of undercorrection of hyperopia was the only factor associated with a more negative shift in SER (OR, 2.414; 95% CI, 1.202-4.849; P = 0.013). The amount of undercorrection is significantly correlated to the change in hyperopic refractive errors. Full correction of hyperopia may inhibit emmetropization during early and late childhood.
Effects of Stopping Ions and LET Fluctuations on Soft Error Rate Prediction.
Weeden-Wright, S. L.; King, Michael Patrick; Hooten, N. C.; ...
2015-02-01
Variability in energy deposition from stopping ions and LET fluctuations is quantified for specific radiation environments. When compared to predictions using average LET via CREME96, LET fluctuations lead to an order-of-magnitude difference in effective flux and a nearly 4x decrease in predicted soft error rate (SER) in an example calculation performed on a commercial 65 nm SRAM. The large LET fluctuations reported here will be even greater for the smaller sensitive volumes that are characteristic of highly scaled technologies. End-of-range effects of stopping ions do not lead to significant inaccuracies in radiation environments with low solar activity unless the sensitivevolumemore » thickness is 100 μm or greater. In contrast, end-of-range effects for stopping ions lead to significant inaccuracies for sensitive- volume thicknesses less than 10 μm in radiation environments with high solar activity.« less
NASA Astrophysics Data System (ADS)
Khallaf, Haitham S.; Garrido-Balsells, José M.; Shalaby, Hossam M. H.; Sampei, Seiichi
2015-12-01
The performance of multiple-input multiple-output free space optical (MIMO-FSO) communication systems, that adopt multipulse pulse position modulation (MPPM) techniques, is analyzed. Both exact and approximate symbol-error rates (SERs) are derived for both cases of uncorrelated and correlated channels. The effects of background noise, receiver shot-noise, and atmospheric turbulence are taken into consideration in our analysis. The random fluctuations of the received optical irradiance, produced by the atmospheric turbulence, is modeled by the widely used gamma-gamma statistical distribution. Uncorrelated MIMO channels are modeled by the α-μ distribution. A closed-form expression for the probability density function of the optical received irradiance is derived for the case of correlated MIMO channels. Using our analytical expressions, the degradation of the system performance with the increment of the correlation coefficients between MIMO channels is corroborated.
Sørensen, Klavs M; Westley, Chloe; Goodacre, Royston; Engelsen, Søren Balling
2015-10-01
This study investigates the feasibility of using surface-enhanced Raman scattering (SERS) for the quantification of absolute levels of the boar-taint compounds skatole and androstenone in porcine fat. By investigation of different types of nanoparticles, pH and aggregating agents, an optimized environment that promotes SERS of the analytes was developed and tested with different multivariate spectral pre-processing techniques, and this was combined with variable selection on a series of analytical standards. The resulting method exhibited prediction errors (root mean square error of cross validation, RMSECV) of 2.4 × 10(-6) M skatole and 1.2 × 10(-7) M androstenone, with a limit of detection corresponding to approximately 2.1 × 10(-11) M for skatole and approximately 1.8 × 10(-10) for androstenone. The method was subsequently tested on porcine fat extract, leading to prediction errors (RMSECV) of 0.17 μg/g for skatole and 1.5 μg/g for androstenone. It is clear that this optimized SERS method, when combined with multivariate analysis, shows great potential for optimization into an on-line application, which will be the first of its kind, and opens up possibilities for simultaneous detection of other meat-quality metabolites or pathogen markers. Graphical abstract Artistic rendering of a laser-illuminated gold colloid sphere with skatole and androstenone adsorbed on the surface.
Superhydrophobic Ag nanostructures on polyaniline membranes with strong SERS enhancement.
Liu, Weiyu; Miao, Peng; Xiong, Lu; Du, Yunchen; Han, Xijiang; Xu, Ping
2014-11-07
We demonstrate here a facile fabrication of n-dodecyl mercaptan-modified superhydrophobic Ag nanostructures on polyaniline membranes for molecular detection based on SERS technique, which combines the superhydrophobic condensation effect and the high enhancement factor. It is calculated that the as-fabricated superhydrophobic substrate can exhibit a 21-fold stronger molecular condensation, and thus further amplifies the SERS signal to achieve more sensitive detection. The detection limit of the target molecule, methylene blue (MB), on this superhydrophobic substrate can be 1 order of magnitude higher than that on the hydrophilic substrate. With high reproducibility, the feasibility of using this SERS-active superhydrophobic substrate for quantitative molecular detection is explored. A partial least squares (PLS) model was established for the quantification of MB by SERS, with correlation coefficient R(2) = 95.1% and root-mean-squared error of prediction (RMSEP) = 0.226. We believe this superhydrophobic SERS substrate can be widely used in trace analysis due to its facile fabrication, high signal reproducibility and promising SERS performance.
Itoi, Fumiaki; Asano, Yukiko; Shimizu, Masashi; Nagai, Rika; Saitou, Kanako; Honnma, Hiroyuki; Murata, Yasutaka
2017-04-01
In this study the clinical and neo-natal outcomes after transfer of blastocysts derived from oocytes containing aggregates of smooth endoplasmic reticulum (SER) were compared between IVF and intracytoplasmic sperm injection (ICSI) cycles. Clinical and neo-natal outcomes of blastocysts in cycles with at least one SER metaphase II oocyte (SER + MII; SER + cycles) did not significantly differ between the two insemination methods. When SER + MII were cultured to day 5/6, fertilization, embryo cleavage and blastocyst rates were not significantly different between IVF and ICSI cycles. In vitrified-warmed blastocyst transfer cycles, the clinical pregnancy rates from SER + MII in IVF and ICSI did not significantly differ. In this study, 52 blastocysts (27 IVF and 25 ICSI) derived from SER + MII were transferred, yielding 15 newborns (5 IVF and 10 ICSI) and no malformations. Moreover, 300 blastocysts (175 IVF and 125 ICSI) derived from SER-MII were transferred, yielding 55 newborns (24 IVF and 31 ICSI cycles). Thus, blastocysts derived from SER + cycles exhibited an acceptable ongoing pregnancy rate after IVF (n = 125) or ICSI (n = 117) cycles. In conclusion, blastocysts from SER + MII in both IVF and ICSI cycles yield adequate ongoing pregnancy rates with neo-natal outcomes that do not differ from SER-MII. Copyright © 2017 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
Monte Carlo simulation of particle-induced bit upsets
NASA Astrophysics Data System (ADS)
Wrobel, Frédéric; Touboul, Antoine; Vaillé, Jean-Roch; Boch, Jérôme; Saigné, Frédéric
2017-09-01
We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport the particles in device, to calculate the energy deposited in the sensitive region of the device and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with SRAM experiments irradiated with neutrons, protons and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.
SERS quantitative urine creatinine measurement of human subject
NASA Astrophysics Data System (ADS)
Wang, Tsuei Lian; Chiang, Hui-hua K.; Lu, Hui-hsin; Hung, Yung-da
2005-03-01
SERS method for biomolecular analysis has several potentials and advantages over traditional biochemical approaches, including less specimen contact, non-destructive to specimen, and multiple components analysis. Urine is an easily available body fluid for monitoring the metabolites and renal function of human body. We developed surface-enhanced Raman scattering (SERS) technique using 50nm size gold colloidal particles for quantitative human urine creatinine measurements. This paper shows that SERS shifts of creatinine (104mg/dl) in artificial urine is from 1400cm-1 to 1500cm-1 which was analyzed for quantitative creatinine measurement. Ten human urine samples were obtained from ten healthy persons and analyzed by the SERS technique. Partial least square cross-validation (PLSCV) method was utilized to obtain the estimated creatinine concentration in clinically relevant (55.9mg/dl to 208mg/dl) concentration range. The root-mean square error of cross validation (RMSECV) is 26.1mg/dl. This research demonstrates the feasibility of using SERS for human subject urine creatinine detection, and establishes the SERS platform technique for bodily fluids measurement.
Kwon, Tae-Ho; Kim, Jai-Eun; Kim, Ki-Doo
2018-05-14
In the field of communication, synchronization is always an important issue. The communication between a light-emitting diode (LED) array (LEA) and a camera is known as visual multiple-input multiple-output (MIMO), for which the data transmitter and receiver must be synchronized for seamless communication. In visual-MIMO, LEDs generally have a faster data rate than the camera. Hence, we propose an effective time-sharing-based synchronization technique with its color-independent characteristics providing the key to overcome this synchronization problem in visual-MIMO communication. We also evaluated the performance of our synchronization technique by varying the distance between the LEA and camera. A graphical analysis is also presented to compare the symbol error rate (SER) at different distances.
"Ser" and "Estar": Corrective Input to Children's Errors of the Spanish Copula Verbs
ERIC Educational Resources Information Center
Holtheuer, Carolina; Rendle-Short, Johanna
2013-01-01
Evidence for the role of corrective input as a facilitator of language acquisition is inconclusive. Studies show links between corrective input and grammatical use of some, but not other, language structures. The present study examined relationships between corrective parental input and children's errors in the acquisition of the Spanish copula…
Layer-by-Layer Polyelectrolyte Encapsulation of Mycoplasma pneumoniae for Enhanced Raman Detection
Rivera-Betancourt, Omar E.; Sheppard, Edward S.; Krause, Duncan C.; Dluhy, Richard A.
2014-01-01
Mycoplasma pneumoniae is a major cause of respiratory disease in humans and accounts for as much as 20% of all community-acquired pneumonia. Existing mycoplasma diagnosis is primarily limited by the poor success rate at culturing the bacteria from clinical samples. There is a critical need to develop a new platform for mycoplasma detection that has high sensitivity, specificity, and expediency. Here we report the layer-by-layer (LBL) encapsulation of M. pneumoniae cells with Ag nanoparticles in a matrix of the polyelectrolytes poly(allylamine hydrochloride) (PAH) and poly(styrene sulfonate) (PSS). We evaluated nanoparticle encapsulated mycoplasma cells as a platform for the differentiation of M. pneumoniae strains using surface enhanced Raman scattering (SERS) combined with multivariate statistical analysis. Three separate M. pneumoniae strains (M129, FH and II-3) were studied. Scanning electron microscopy and fluorescence imaging showed that the Ag nanoparticles were incorporated between the oppositely charged polyelectrolyte layers. SERS spectra showed that LBL encapsulation provides excellent spectral reproducibility. Multivariate statistical analysis of the Raman spectra differentiated the three M. pneumoniae strains with 97 – 100% specificity and sensitivity, and low (0.1 – 0.4) root mean square error. These results indicated that nanoparticle and polyelectrolyte encapsulation of M. pneumoniae is a potentially powerful platform for rapid and sensitive SERS-based bacterial identification. PMID:25017005
Blocking Losses With a Photon Counter
NASA Technical Reports Server (NTRS)
Moision, Burce E.; Piazzolla, Sabino
2012-01-01
It was not known how to assess accurately losses in a communications link due to photodetector blocking, a phenomenon wherein a detector is rendered inactive for a short time after the detection of a photon. When used to detect a communications signal, blocking leads to losses relative to an ideal detector, which may be measured as a reduction in the communications rate for a given received signal power, or an increase in the signal power required to support the same communications rate. This work involved characterizing blocking losses for single detectors and arrays of detectors. Blocking may be mitigated by spreading the signal intensity over an array of detectors, reducing the count rate on any one detector. A simple approximation was made to the blocking loss as a function of the probability that a detector is unblocked at a given time, essentially treating the blocking probability as a scaling of the detection efficiency. An exact statistical characterization was derived for a single detector, and an approximation for multiple detectors. This allowed derivation of several accurate approximations to the loss. Methods were also derived to account for a rise time in recovery, and non-uniform illumination due to diffraction and atmospheric distortion of the phase front. It was assumed that the communications signal is intensity modulated and received by an array of photon-counting photodetectors. For the purpose of this analysis, it was assumed that the detectors are ideal, in that they produce a signal that allows one to reproduce the arrival times of electrons, produced either as photoelectrons or from dark noise, exactly. For single detectors, the performance of the maximum-likelihood (ML) receiver in blocking is illustrated, as well as a maximum-count (MC) receiver, that, when receiving a pulse-position-modulated (PPM) signal, selects the symbol corresponding to the slot with the largest electron count. Whereas the MC receiver saturates at high count rates, the ML receiver may not. The loss in capacity, symbol-error-rate (SER), and count-rate were numerically computed. It was shown that the capacity and symbol-error-rate losses track, whereas the count-rate loss does not generally reflect the SER or capacity loss, as the slot-statistics at the detector output are no longer Poisson. It is also shown that the MC receiver loss may be accurately predicted for dead times on the order of a slot.
Sign language spotting with a threshold model based on conditional random fields.
Yang, Hee-Deok; Sclaroff, Stan; Lee, Seong-Whan
2009-07-01
Sign language spotting is the task of detecting and recognizing signs in a signed utterance, in a set vocabulary. The difficulty of sign language spotting is that instances of signs vary in both motion and appearance. Moreover, signs appear within a continuous gesture stream, interspersed with transitional movements between signs in a vocabulary and nonsign patterns (which include out-of-vocabulary signs, epentheses, and other movements that do not correspond to signs). In this paper, a novel method for designing threshold models in a conditional random field (CRF) model is proposed which performs an adaptive threshold for distinguishing between signs in a vocabulary and nonsign patterns. A short-sign detector, a hand appearance-based sign verification method, and a subsign reasoning method are included to further improve sign language spotting accuracy. Experiments demonstrate that our system can spot signs from continuous data with an 87.0 percent spotting rate and can recognize signs from isolated data with a 93.5 percent recognition rate versus 73.5 percent and 85.4 percent, respectively, for CRFs without a threshold model, short-sign detection, subsign reasoning, and hand appearance-based sign verification. Our system can also achieve a 15.0 percent sign error rate (SER) from continuous data and a 6.4 percent SER from isolated data versus 76.2 percent and 14.5 percent, respectively, for conventional CRFs.
Marmamula, Srinivas; Keeffe, Jill E; Narsaiah, Saggam; Khanna, Rohit C; Rao, Gullapalli N
2014-11-01
Measurements of refractive errors through subjective or automated refraction are not always possible in rapid assessment studies and community vision screening programs; however, measurements of vision with habitual correction and with a pinhole can easily be made. Although improvements in vision with a pinhole are assumed to mean that a refractive error is present, no studies have investigated the magnitude of improvement in vision with pinhole that is predictive of refractive error. The aim was to measure the sensitivity and specificity of 'vision improvement with pinhole' in predicting the presence of refractive error in a community setting. Vision and vision with pinhole were measured using a logMAR chart for 488 of 582 individuals aged 15 to 50 years. Refractive errors were measured using non-cycloplegic autorefraction and subjective refraction. The presence of refractive error was defined using spherical equivalent refraction (SER) at two levels: SER greater than ± 0.50 D sphere (DS) and SER greater than ±1.00 DS. Three definitions for significant improvement in vision with a pinhole were used: 1. Presenting vision less than 6/12 and improving to 6/12 or better, 2. Improvement in vision of more than one logMAR line and 3. Improvement in vision of more than two logMAR lines. For refractive error defined as spherical equivalent refraction greater than ± 0.50 DS, the sensitivities and specificities for the pinhole test predicting the presence of refractive error were 83.9 per cent (95% CI: 74.5 to 90.9) and 98.8 per cent (95% CI: 97.1 to 99.6), respectively for definition 1. Definition 2 had a sensitivity 89.7 per cent (95% CI: 81.3 to 95.2) and specificity 88.0 per cent (95% CI: 4.4 to 91.0). Definition 3 had a sensitivity of 75.9 per cent (95% CI: 65.5 to 84.4) and specificity of 97.8 per cent (95% CI: 95.8 to 99.0). Similar results were found with spherical equivalent refraction greater than ±1.00 DS, when tested against the three pinhole-based definitions. Refractive error definitions based on improvement in vision with the pinhole shows good sensitivity and specificity at predicting the presence of significant refractive errors. These definitions can be used in rapid assessment surveys and community-based vision screenings. © 2014 The Authors. Clinical and Experimental Optometry © 2014 Optometrists Association Australia.
On the Effects of a Spacecraft Subcarrier Unbalanced Modulator
NASA Technical Reports Server (NTRS)
Nguyen, Tien Manh
1993-01-01
This paper presents mathematical models with associated analysis of the deleterious effects which a spacecraft's subcarrier unbalanced modulator has on the performance of a phase-modulated residual carrier communications link. The undesired spectral components produced by the phase and amplitude imbalances in the subcarrier modulator can cause (1) potential interference to the carrier tracking and (2) degradation in the telemetry bit signal-to-noise ratio (SNR). A suitable model for the unbalanced modulator is developed and the threshold levels of undesired components that fall into the carrier tracking loop are determined. The distribution of the carrier phase error caused by the additive White Gaussian noise (AWGN) and undesired component at the residual RF carrier is derived for the limiting cases. Further, this paper analyses the telemetry bit signal-to-noise ratio degradations due to undesirable spectral components as well as the carrier tracking phase error induced by phase and amplitude imbalances. Numerical results which indicate the sensitivity of the carrier tracking loop and the telemetry symbol-error rate (SER) to various parameters of the models are also provided as a tool in the design of the subcarrier balanced modulator.
Indoor visible light communication with smart lighting technology
NASA Astrophysics Data System (ADS)
Das Barman, Abhirup; Halder, Alak
2017-02-01
An indoor visible-light communication performance is investigated utilizing energy efficient white light by 2D LED arrays. Enabled by recent advances in LED technology, IEEE 802.15.7 standardizes high-data-rate visible light communication and advocates for colour shift keying (CSK) modulation to overcome flicker and to support dimming. Voronoi segmentation is employed for decoding N-CSK constellation which has superior performance compared to other existing decoding methods. The two chief performance degrading effects of inter-symbol interference and LED nonlinearity is jointly mitigated using LMS post equalization at the receiver which improves the symbol error rate performance and increases field of view of the receiver. It is found that LMS post equalization symbol at 250MHz offers 7dB SNR improvement at SER10-6
NASA Astrophysics Data System (ADS)
Mourgias-Alexandris, G.; Moralis-Pegios, M.; Terzenidis, N.; Cherchi, M.; Harjanne, M.; Aalto, T.; Vyrsokinos, K.; Pleros, N.
2018-02-01
The urgent need for high-bandwidth and high-port connectivity in Data Centers has boosted the deployment of optoelectronic packet switches towards bringing high data-rate optics closer to the ASIC, realizing optical transceiver functions directly at the ASIC package for high-rate, low-energy and low-latency interconnects. Even though optics can offer a broad range of low-energy integrated switch fabrics for replacing electronic switches and seamlessly interface with the optical I/Os, the use of energy- and latency-consuming electronic SerDes continues to be a necessity, mainly dictated by the absence of integrated and reliable optical buffering solutions. SerDes undertakes the role of optimally synergizing the lower-speed electronic buffers with the incoming and outgoing optical streams, suggesting that a SerDes-released chip-scale optical switch fabric can be only realized in case all necessary functions including contention resolution and switching can be implemented on a common photonic integration platform. In this paper, we demonstrate experimentally a hybrid Broadcast-and-Select (BS) / wavelength routed optical switch that performs both the optical buffering and switching functions with μm-scale Silicon-integrated building blocks. Optical buffering is carried out in a silicon-integrated variable delay line bank with a record-high on-chip delay/footprint efficiency of 2.6ns/mm2 and up to 17.2 nsec delay capability, while switching is executed via a BS design and a silicon-integrated echelle grating, assisted by SOA-MZI wavelength conversion stages and controlled by a FPGA header processing module. The switch has been experimentally validated in a 3x3 arrangement with 10Gb/s NRZ optical data packets, demonstrating error-free switching operation with a power penalty of <5dB.
NASA Astrophysics Data System (ADS)
Li, Lei; Hu, Jianhao
2010-12-01
Notice of Violation of IEEE Publication Principles"Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath"by Lei Li and Jianhao Hu,in the IEEE Transactions on Nuclear Science, vol.57, no.6, Dec. 2010, pp. 3779-3786After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.This paper contains substantial duplication of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles:"Multiple Error Detection and Correction Based on Redundant Residue Number Systems"by Vik Tor Goh and M.U. Siddiqi,in the IEEE Transactions on Communications, vol.56, no.3, March 2008, pp.325-330"A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction"by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol.39, no.1, Jan 1992, pp.8-17In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance for datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallel and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in datapath can be corrected with the redundant relationship of the residues at correction module, which is allocated at the end of the datapath. In the back-end implementation, module isolation technique is used to improve the soft error rate performance for RRNS by physically separating the operation channels of different moduli. The case studies show at least an order of magnitude decrease on the soft error rate (SER) as compared to the NonRHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10-12 to 10-17 when the processing steps of datapath are 106. The proposed scheme can even achieve less area and latency overheads than that without radiation hardening, since RRNS can reduce the operational complexity in datapath.
Chattoraj, Asamanja; Seth, Mohua; Maitra, Saumen Kumar
2008-07-01
The influences of serotonin (5-hydroxytryptamine) on the action of melatonin (N-acetyl-5-methoxytryptamine) in MIH (maturation inducing hormone)-induced meiotic resumption were evaluated in the oocytes of carp Catla catla using an in vitro model. Oocytes from gravid female carp were isolated and incubated separately in Medium 199 containing either (a) only melatonin (MEL; 100 pg/mL), or (b) only serotonin (SER; 100 pg/mL), or (c) only MIH (1 microg/mL), or (d) MEL and MIH (e) or MEL (4 h before) and MIH, or (f) MEL and SER, (g) or SER and MIH, or (h) SER (4 h before) and MIH, or (i) luzindole (L-antagonist of MEL receptors; 10 microM) and MEL, or (j) MEL, L and MIH, or (k) MEL (4 h before), L and MIH, or (l) metoclopramide hydrochloride (M-antagonist of SER receptors; 10 microM) and SER, or (m) M, MEL, SER, or (n) M, SER and MIH, or (o) M, SER (4 h before) and MIH, or (p) M, MEL SER and MIH, or (q) MEL, L, SER and M, or (r) MEL, L, SER, M, and MIH, or (s) MEL, SER, L and MIH. Control oocytes were incubated in the medium alone. Oocytes were incubated for 4, or 8, or 12, or 16 h and effects were evaluated by considering the rate (%) of germinal vesicle breakdown (GVBD). At the end of 16 h incubation, 93.24+/-1.57% oocytes underwent GVBD following incubation with only MIH, while incubation with only MEL or only SER resulted in 77.15+/-1.91% or 14.42+/-0.43% GVBD respectively. Interestingly, incubation with MEL 4 h prior to addition of MIH in the medium, led to an accelerated rate of GVBD (92.58+/-1.10% at 12 h). In contrast, SER, irrespective of its time of application in relation to MIH, resulted in a maximum of 64.57+/-0.86% GVBD. While L was found to reduce the stimulatory actions of melatonin, M suppressed the inhibitory actions of serotonin. In each case, both electrophoretic and immunoblot studies revealed that the rate of GVBD was associated with the rate of formation of maturation promoting factor (a complex of two proteins: a regulatory component--cyclin B and the catalytic component--Cdk1 or cdc2). Collectively, the present study reports for the first time that SER not only inhibits the independent actions of MIH, but also the actions of MEL on the MIH-induced oocytes maturation in carp.
LEA Detection and Tracking Method for Color-Independent Visual-MIMO
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-01-01
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563
LEA Detection and Tracking Method for Color-Independent Visual-MIMO.
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-07-02
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.
Far-side geometrical enhancement in surface-enhanced Raman scattering with Ag plasmonic films
NASA Astrophysics Data System (ADS)
Perera, M. Nilusha M. N.; Gibbs, W. E. Keith; Juodkazis, Saulius; Stoddart, Paul R.
2018-01-01
Surface-enhanced Raman scattering (SERS) is a surface sensitive technique where the large increase in scattering has primarily been attributed to electromagnetic and chemical enhancements. While smaller geometrical enhancements due to thin film interference and cavity resonances have also been reported, an additional enhancement in the SERS signal, referred to as the `far-side geometrical enhancement', occurs when the SERS substrate is excited through an underlying transparent dielectric substrate. Here the far-side geometrically-enhanced SERS signal has been explored experimentally in more detail. Thermally evaporated Ag plasmonic films functionalised with thiophenol were used to study the dependence of the geometrically-enhanced SERS signal on the excitation wavelength, supporting substrate material and excitation angle of incidence. The results were interpreted using a `geometrical enhancement factor' (GEF), defined as the ratio of far-side to near-side SERS signal intensity. The experimental results confirmed that the highest GEFs of 3.2-3.5× are seen closer to the localized surface plasmon resonance peak of the Ag metallic nanostructures. Interestingly, the GEF for Ag plasmonic films deposited on glass and sapphire were the same within the measurement errors, whereas increasing angle of incidence showed a decrease in the GEF. Given this improved understanding of the far-side geometrical SERS enhancement, the potential for further signal amplification and optimisation for practical sensing applications can now be considered, especially for SERS detection modes at the farend of optical fibre probes and through process windows.
Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili
2013-01-01
To develop an accurate and convenient method for monitoring the production of citrus-derived bioactive 5-demethylnobiletin from demethylation reaction of nobiletin, we compared surface enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated with HPLC method very well. The solution method produced lower root mean square error of calibration and higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demthylnobiletin based on their different binding affinity to the silver dendrites. The substrate method was found simpler and faster to collect the SERS ‘fingerprint’ spectra of the samples as no incubation between samples and silver was needed and only trace amount of samples were required. Our results demonstrated that the SERS methods were superior to HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986
NASA Astrophysics Data System (ADS)
Zhang, Y. M.; Evans, J. R. G.; Yang, S. F.
2010-11-01
The authors have discovered a systematic, intelligent and potentially automatic method to detect errors in handbooks and stop their transmission using unrecognised relationships between materials properties. The scientific community relies on the veracity of scientific data in handbooks and databases, some of which have a long pedigree covering several decades. Although various outlier-detection procedures are employed to detect and, where appropriate, remove contaminated data, errors, which had not been discovered by established methods, were easily detected by our artificial neural network in tables of properties of the elements. We started using neural networks to discover unrecognised relationships between materials properties and quickly found that they were very good at finding inconsistencies in groups of data. They reveal variations from 10 to 900% in tables of property data for the elements and point out those that are most probably correct. Compared with the statistical method adopted by Ashby and co-workers [Proc. R. Soc. Lond. Ser. A 454 (1998) p. 1301, 1323], this method locates more inconsistencies and could be embedded in database software for automatic self-checking. We anticipate that our suggestion will be a starting point to deal with this basic problem that affects researchers in every field. The authors believe it may eventually moderate the current expectation that data field error rates will persist at between 1 and 5%.
Rapid detection of salmonella using SERS with silver nano-substrate
NASA Astrophysics Data System (ADS)
Sundaram, J.; Park, B.; Hinton, A., Jr.; Windham, W. R.; Yoon, S. C.; Lawrence, K. C.
2011-06-01
Surface Enhanced Raman Scattering (SERS) can detect the pathogen in rapid and accurate. In SERS weak Raman scattering signals are enhanced by many orders of magnitude. In this study silver metal with biopolymer was used. Silver encapsulated biopolymer polyvinyl alcohol nano-colloid was prepared and deposited on stainless steel plate. This was used as metal substrate for SERS. Salmonella typhimurium a common food pathogen was selected for this study. Salmonella typhimurium bacteria cells were prepared in different concentrations in cfu/mL. Small amount of these cells were loaded on the metal substrate individually, scanned and spectra were recorded using confocal Raman microscope. The cells were exposed to laser diode at 785 nm excitation and object 50x was used to focus the laser light on the sample. Raman shifts were obtained from 400 to 2400 cm-1. Multivariate data analysis was carried to predict the concentration of unknown sample using its spectra. Concentration prediction gave an R2 of 0.93 and standard error of prediction of 0.21. The results showed that it could be possible to find out the Salmonella cells present in a low concentration in food samples using SERS.
Cho, Pauline; Cheung, Sin Wan; Edwards, Marion
2005-01-01
Myopia is a common ocular disorder, and progression of myopia in children is of increasing concern. Modern overnight orthokeratology (ortho-k) is effective for myopic reduction and has been claimed to be effective in slowing the progression of myopia (myopic control) in children, although scientific evidence for this has been lacking. This 2 year pilot study was conducted to determine whether ortho-k can effectively reduce and control myopia in children. We monitored the growth of axial length (AL) and vitreous chamber depth (VCD) in 35 children (7-12 years of age), undergoing ortho-k treatment and compared the rates of change with 35 children wearing single-vision spectacles from an earlier study (control). For the ortho-k subjects, we also determined the changes in corneal curvature and the relationships with changes of refractive errors, AL and VCD. The baseline spherical equivalent refractive errors (SER), the AL, and VCD of the ortho-k and control subjects were not statistically different. All the ortho-k subjects found post-ortho-k unaided vision acceptable in the daytime. The residual SER at the end of the study was -0.18 +/- 0.69 D (dioptre) and the reduction (less myopic) in SER was 2.09 +/- 1.34 D (all values are mean +/- SD). At the end of 24 months, the increases in AL were 0.29 +/- 0.27 mm and 0.54 +/- 0.27 mm for the ortho-k and control groups, respectively (unpaired t test; p = 0.012); the increases in VCD were 0.23 +/- 0.25 mm and 0.48 +/- 0.26 mm for the ortho-k and control groups, respectively (p = 0.005). There was significant initial corneal flattening in the ortho-k group but no significant relationships were found between changes in corneal power and changes in AL and VCD. Ortho-k can have both a corrective and preventive/control effect in childhood myopia. However, there are substantial variations in changes in eye length among children and there is no way to predict the effect for individual subjects.
Ultrasensitive SERS Flow Detector Using Hydrodynamic Focusing
Negri, Pierre; Jacobs, Kevin T.; Dada, Oluwatosin O.; Schultz, Zachary D.
2013-01-01
Label-free, chemical specific detection in flow is important for high throughput characterization of analytes in applications such as flow injection analysis, electrophoresis, and chromatography. We have developed a surface-enhanced Raman scattering (SERS) flow detector capable of ultrasensitive optical detection on the millisecond time scale. The device employs hydrodynamic focusing to improve SERS detection in a flow channel where a sheath flow confines analyte molecules eluted from a fused silica capillary over a planar SERS-active substrate. Increased analyte interactions with the SERS substrate significantly improve detection sensitivity. The performance of this flow detector was investigated using a combination of finite element simulations, fluorescence imaging, and Raman experiments. Computational fluid dynamics based on finite element analysis was used to optimize the flow conditions. The modeling indicates that a number of factors, such as the capillary dimensions and the ratio of the sheath flow to analyte flow rates, are critical for obtaining optimal results. Sample confinement resulting from the flow dynamics was confirmed using wide-field fluorescence imaging of rhodamine 6G (R6G). Raman experiments at different sheath flow rates showed increased sensitivity compared with the modeling predictions, suggesting increased adsorption. Using a 50-millisecond acquisitions, a sheath flow rate of 180 μL/min, and a sample flow rate of 5 μL/min, a linear dynamic range from nanomolar to micromolar concentrations of R6G with a LOD of 1 nM is observed. At low analyte concentrations, rapid analyte desorption is observed, enabling repeated and high-throughput SERS detection. The flow detector offers substantial advantages over conventional SERS-based assays such as minimal sample volumes and high detection efficiency. PMID:24074461
Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia
2015-09-01
We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.
Refractive error and visual impairment in school children in Northern Ireland.
O'Donoghue, L; McClelland, J F; Logan, N S; Rudnicka, A R; Owen, C G; Saunders, K J
2010-09-01
To describe the prevalence of refractive error (myopia and hyperopia) and visual impairment in a representative sample of white school children. The Northern Ireland Childhood Errors of Refraction study, a population-based cross-sectional study, examined 661 white 12-13-year-old and 392 white 6-7-year-old children between 2006 and 2008. Procedures included assessment of monocular logarithm of the minimum angle of resolution (logMAR), visual acuity (unaided and presenting) and binocular open-field cycloplegic (1% cyclopentolate) autorefraction. Myopia was defined as -0.50DS or more myopic spherical equivalent refraction (SER) in either eye, hyperopia as > or =+2.00DS SER in either eye if not previously classified as myopic. Visual impairment was defined as >0.30 logMAR units (equivalent to 6/12). Levels of myopia were 2.8% (95% CI 1.3% to 4.3%) in younger and 17.7% (95% CI 13.2% to 22.2%) in older children: corresponding levels of hyperopia were 26% (95% CI 20% to 33%) and 14.7% (95% CI 9.9% to 19.4%). The prevalence of presenting visual impairment in the better eye was 3.6% in 12-13-year-old children compared with 1.5% in 6-7-year-old children. Almost one in four children fails to bring their spectacles to school. This study is the first to provide robust population-based data on the prevalence of refractive error and visual impairment in Northern Irish school children. Strategies to improve compliance with spectacle wear are required.
Morales-Alamo, David; Guerra, Borja; Santana, Alfredo; Martin-Rincon, Marcos; Gelabert-Rebato, Miriam; Dorado, Cecilia; Calbet, José A L
2018-01-01
Compared to normoxia, during sprint exercise in severe acute hypoxia the glycolytic rate is increased leading to greater lactate accumulation, acidification, and oxidative stress. To determine the role played by pyruvate dehydrogenase (PDH) activation and reactive nitrogen and oxygen species (RNOS) in muscle lactate accumulation, nine volunteers performed a single 30-s sprint (Wingate test) on four occasions: two after the ingestion of placebo and another two following the intake of antioxidants, while breathing either hypoxic gas (P I O 2 = 75 mmHg) or room air (P I O 2 = 143 mmHg). Vastus lateralis muscle biopsies were obtained before, immediately after, 30 and 120 min post-sprint. Antioxidants reduced the glycolytic rate without altering performance or VO 2 . Immediately after the sprints, Ser 293 - and Ser 300 -PDH-E1α phosphorylations were reduced to similar levels in all conditions (~66 and 91%, respectively). However, 30 min into recovery Ser 293 -PDH-E1α phosphorylation reached pre-exercise values while Ser 300 -PDH-E1α was still reduced by 44%. Thirty minutes after the sprint Ser 293 -PDH-E1α phosphorylation was greater with antioxidants, resulting in 74% higher muscle lactate concentration. Changes in Ser 293 and Ser 300 -PDH-E1α phosphorylation from pre to immediately after the sprints were linearly related after placebo ( r = 0.74, P < 0.001; n = 18), but not after antioxidants ingestion ( r = 0.35, P = 0.15). In summary, lactate accumulation during sprint exercise in severe acute hypoxia is not caused by a reduced activation of the PDH. The ingestion of antioxidants is associated with increased PDH re-phosphorylation and slower elimination of muscle lactate during the recovery period. Ser 293 re-phosphorylates at a faster rate than Ser 300 -PDH-E1α during the recovery period, suggesting slightly different regulatory mechanisms.
Morales-Alamo, David; Guerra, Borja; Santana, Alfredo; Martin-Rincon, Marcos; Gelabert-Rebato, Miriam; Dorado, Cecilia; Calbet, José A. L.
2018-01-01
Compared to normoxia, during sprint exercise in severe acute hypoxia the glycolytic rate is increased leading to greater lactate accumulation, acidification, and oxidative stress. To determine the role played by pyruvate dehydrogenase (PDH) activation and reactive nitrogen and oxygen species (RNOS) in muscle lactate accumulation, nine volunteers performed a single 30-s sprint (Wingate test) on four occasions: two after the ingestion of placebo and another two following the intake of antioxidants, while breathing either hypoxic gas (PIO2 = 75 mmHg) or room air (PIO2 = 143 mmHg). Vastus lateralis muscle biopsies were obtained before, immediately after, 30 and 120 min post-sprint. Antioxidants reduced the glycolytic rate without altering performance or VO2. Immediately after the sprints, Ser293- and Ser300-PDH-E1α phosphorylations were reduced to similar levels in all conditions (~66 and 91%, respectively). However, 30 min into recovery Ser293-PDH-E1α phosphorylation reached pre-exercise values while Ser300-PDH-E1α was still reduced by 44%. Thirty minutes after the sprint Ser293-PDH-E1α phosphorylation was greater with antioxidants, resulting in 74% higher muscle lactate concentration. Changes in Ser293 and Ser300-PDH-E1α phosphorylation from pre to immediately after the sprints were linearly related after placebo (r = 0.74, P < 0.001; n = 18), but not after antioxidants ingestion (r = 0.35, P = 0.15). In summary, lactate accumulation during sprint exercise in severe acute hypoxia is not caused by a reduced activation of the PDH. The ingestion of antioxidants is associated with increased PDH re-phosphorylation and slower elimination of muscle lactate during the recovery period. Ser293 re-phosphorylates at a faster rate than Ser300-PDH-E1α during the recovery period, suggesting slightly different regulatory mechanisms. PMID:29615918
Real-Time Implementation of Nonlinear Optical Processing Functions.
1986-09-30
information capacity) with the nonlinear error correction properties of associative neural nets such as the Hopfield model. Analogies between holography...symnolic ma.Ip’:ation Th.e error correcting -apart" :ty of non" ;n-ar associative merTtnies is necessary for s’uch structu-res Experimerta. results... geometrica snapes in contact ’A,.n a c-:’:ser ’Figure 51a’ ., and a spher:cal 4:verg.ng reference -eam Upion :"um’latlon of t -" c-’gram by the object beam
NASA Astrophysics Data System (ADS)
Pilipavicius, J.; Kaleinikaite, R.; Pucetaite, M.; Velicka, M.; Kareiva, A.; Beganskiene, A.
2016-07-01
In this work sol-gel process for preparation of the uniform hybrid silica-3-aminopropyltriethoxysilane (APTES) coatings on glass surface is presented from mechanistic point of view. The suggested synthetic approach is straightforward, scalable and provides the means to tune the amount of amino groups on the surface simply by changing concentration of APTES in the initial sol. Deposition rate of different size silver nanoprisms (AgNPRs) on hybrid silica coatings of various amounts of APTES were studied and their performance as SERS materials were probed. The acquired data shows that the deposition rate of AgNPRs can be tuned by changing the amount of APTES. The optimal amount of APTES was found to be crucial for successful AgNPRs assembly and subsequent uniformity of the final SERS substrate-too high APTES content may result in rapid non-stable aggregation and non-uniform assembly process. SERS study revealed that SERS enhancement is the strongest at moderate AgNPRs aggregation level whereas it significantly drops at high aggregation levels.
Tubert-Brohman, Ivan; Acevedo, Orlando; Jorgensen, William L
2006-12-27
Fatty acid amide hydrolase (FAAH) is a serine hydrolase that degrades anandamide, an endocannabinoid, and oleamide, a sleep-inducing lipid, and has potential applications as a therapeutic target for neurological disorders. Remarkably, FAAH hydrolyzes amides and esters with similar rates; however, the normal preference for esters reemerges when Lys142 is mutated to alanine. To elucidate the hydrolysis mechanisms and the causes behind this variation of selectivity, mixed quantum and molecular mechanics (QM/MM) calculations were carried out to obtain free-energy profiles for alternative mechanisms for the enzymatic hydrolyses. The methodology features free-energy perturbation calculations in Monte Carlo simulations with PDDG/PM3 as the QM method. For wild-type FAAH, the results support a mechanism, which features proton transfer from Ser217 to Lys142, simultaneous proton transfer from Ser241 to Ser217, and attack of Ser241 on the substrate's carbonyl carbon to yield a tetrahedral intermediate, which subsequently undergoes elimination with simultaneous protonation of the leaving group by a Lys142-Ser217 proton shuttle. For the Lys142Ala mutant, a striking multistep sequence is proposed with simultaneous proton transfer from Ser241 to Ser217, attack of Ser241 on the carbonyl carbon of the substrate, and elimination of the leaving group and its protonation by Ser217. Support comes from the free-energy results, which well reproduce the observation that the Lys142Ala mutation in FAAH decreases the rate of hydrolysis for oleamide significantly more than for methyl oleate.
Characteristics of Refractive Errors in a Population of Adults in the Central Region of Poland.
Nowak, Michal S; Jurowski, Piotr; Grzybowski, Andrzej; Smigielski, Janusz
2018-01-08
Background : To investigate the distribution of refractive errors and their characteristics in older adults from a Polish population. Methods : The study design was a cross-sectional study. A total of 1107 men and women were interviewed and underwent detailed ophthalmic examinations, 998 subjects underwent refraction. Myopia was defined as spherical equivalent (SER) refraction ≤-0.5 dioptres (D) and hyperopia was defined as SER ≥+0.5 dioptres (D). Results : Among those who were refracted the distribution of myopia and hyperopia was 24.1% (95% CI 21.4-26.7) and 37.5% (95% CI 34.5-40.5), respectively. Myopia decreased from 28.7% in subjects aged 35-59 years to 19.3% in those aged 60 years or older and hyperopia increased from 21.8% at 35-59 years of age to 53.3% in subjects aged ≥60 years. Multiple regression analysis showed decreasing age (OR 0.98, 95% CI 0.96-1.00), female gender (OR 1.87, 95% CI 1.18-2.95) and presence of cataract (OR 2.40, 95% CI 1.24-4.63) were independent risk factors associated with myopia. Conclusions : The distribution of refractive errors found in our study is similar to those reported in other Caucasian populations and differs from Asian populations. Myopia was positively associated with younger age, female gender and presence of cataract.
NASA Astrophysics Data System (ADS)
Bakar, N. A.; Salleh, M. M.; Umar, A. A.; Shapter, J. G.
2018-03-01
This paper reports a study on surface-enhanced Raman scattering (SERS) phenomenon of triangular silver nanoplate (NP) films towards bisphenol A (BPA) detection. The NP films were prepared using self-assembly technique with four different immersion times; 1 hour, 2 hours, 5 hours, and 8 hours. The SERS measurement was studied by observing the changes in Raman spectra of BPA after BPA absorbed on the NP films. It was found that the Raman intensity of BPA peaks was enhanced by using the prepared SERS substrates. This is clearly indicated that these SERS silver substrates are suitable to sense industrial chemical and potentially used as SERS detector. However, the rate of SERS enhancement is depended on the distribution of NP on the substrate surface.
Alharbi, Omar; Xu, Yun; Goodacre, Royston
2014-10-07
The detection and quantification of xenobiotics and their metabolites in man is important for drug dosing, therapy and for substance abuse monitoring where longer-lived metabolic products from illicit materials can be assayed after the drug of abuse has been cleared from the system. Raman spectroscopy offers unique specificity for molecular characterization and this usually weak signal can be significantly enhanced using surface enhanced Raman scattering (SERS). We report here the novel development of SERS with chemometrics for the simultaneous analysis of the drug nicotine and its major xenometabolites cotinine and trans-3'-hydroxycotinine. Initial experiments optimized the SERS conditions and we found that when these three determinands were analysed individually that the maximum SERS signals were found at three different pH. These were pH 3 for nicotine and pH 10 and 11 for cotinine and trans-3'-hydroxycotinine, respectively. Tertiary mixtures containing nicotine, cotinine and trans-3'-hydroxycotinine were generated in the concentration range 10(-7)-10(-5) M and SERS spectra were collected at all three pH values. Chemometric analysis using kernel-partial least squares (K-PLS) and artificial neural networks (ANNs) were conducted and these models were validated using bootstrap resampling. All three analytes were accurately quantified with typical root mean squared error of prediction on the test set data being 5-9%; nicotine was most accurately predicted followed by cotinine and then trans-3'-hydroxycotinine. We believe that SERS is a powerful approach for the simultaneous analysis of multiple determinands without recourse to lengthy chromatography, as demonstrated here for the xenobiotic nicotine and its two major xenometabolites.
2013-01-01
Background Wide use of ciprofloxacin and levofloxacin has often led to increased resistance. The resistance rate to these two agents varies in different clinical isolates of Enterobacteriaceae. Mutations of GyrA within the quinolone resistance-determining regions have been found to be the main mechanism for quinolone resistance in Enterobacteriaceae. It has been shown that only some of the mutations in the gyrA gene identified from clinical sources were involved in fluoroquinolone resistance. Whether different patterns of gyrA mutation are related to antimicrobial resistance against ciprofloxacin and levofloxacin is unclear. Methods The minimum inhibitory concentration (MIC) of ciprofloxacin and levofloxacin were determined by the agar dilution method followed by PCR amplification and sequencing of the quinolone resistance determining region of gyrA to identify all the mutation types. The correlation between fluoroquinolone resistance and the individual mutation type was analyzed. Results Resistance differences between ciprofloxacin and levofloxacin were found in 327 isolates of K. pneumoniae and E. coli in Harbin, China and in the isolates reported in PubMed publications. GyrA mutations were found in both susceptible and resistant isolates. For the isolates with QRDR mutations, the resistance rates to ciprofloxacin and levofloxacin were also statistically different. Among the 14 patterns of alterations, two single mutations (Ser83Tyr and Ser83Ile), and three double mutations (Ser83Leu+Asp87Asn, Ser83Leu+Asp87Tyr and Ser83Phe+Asp87Asn) were associated with both ciprofloxacin and levofloxacin resistance. Two single mutations (Ser83Phe and Ser83Leu) were related with ciprofloxacin resistance but not to levofloxacin. Resistance difference between ciprofloxacin and levofloxacin in isolates harboring mutation Ser83Leu+Asp87Asn were of statistical significance among all Enterobacteriaceae (P<0.001). Conclusions Resistance rate to ciprofloxacin and levofloxacin were statistically different among clinical isolates of Enterobacteriaceae harboring GyrA mutations. Ser83Leu+Asp87Asn may account for the antimicrobial resistance difference between ciprofloxacin and levofloxacin. PMID:23295059
Chemical agent detection by surface-enhanced Raman spectroscopy
NASA Astrophysics Data System (ADS)
Farquharson, Stuart; Gift, Alan; Maksymiuk, Paul; Inscore, Frank E.; Smith, Wayne W.; Morrisey, Kevin; Christesen, Steven D.
2004-03-01
In the past decade, the Unites States and its allies have been challenged by a different kind of warfare, exemplified by the terrorist attacks of September 11, 2001. Although suicide bombings are the most often used form of terror, military personnel must consider a wide range of attack scenarios. Among these is the intentional poisoning of water supplies to obstruct military operations in Afghanistan and Iraq. To counter such attacks, the military is developing portable analyzers that can identify and quantify potential chemical agents in water supplies at microgram per liter concentrations within 10 minutes. To aid this effort we have been investigating the value of a surface-enhanced Raman spectroscopy based portable analyzer. In particular we have been developing silver-doped sol-gels to generate SER spectra of chemical agents and their hydrolysis products. Here we present SER spectra of several chemical agents measured in a generic tap water. Repeat measurements were performed to establish statistical error associated with SERS obtained using the sol-gel coated vials.
Fugitive emission rates assessment of PM2.5 and PM10 from open storage piles in China
NASA Astrophysics Data System (ADS)
Cao, Yiqi; Liu, Tao; He, Jiao
2018-03-01
An assessment of the fugitive emission rates of PM2.5 and PM10 from an open static coal and mine storage piles. The experiment was conducted at a large union steel enterprises in the East China region to effectively control the fugitive particulate emissions pollution on daily work and extreme weather conditions. Wind tunnel experiments conducted on the surface of static storage piles, and it generated specific fugitive emission rates (SERs) at ground level of between ca.10-1 and ca.102 (mg/m2·s) for PM2.5 and between ca.101 and ca.103 (mg/m2·s) for PM10 under the u*(wind velocity) between ca.3.0 (m/s) and 10.0 (m/s). Research results show that SERs of different materials differ a lot. Material particulate that has lower surface moisture content generate higher SER and coal material generate higher SER than mine material. For material storage piles with good water infiltrating properties, aspersion is a very effective measure for control fugitive particulate emission.
Surface-enhanced Raman spectroscopic study of p-aminothiophenol.
Huang, Yi-Fan; Wu, De-Yin; Zhu, Hong-Ping; Zhao, Liu-Bin; Liu, Guo-Kun; Ren, Bin; Tian, Zhong-Qun
2012-06-28
p-aminothiophenol (PATP) is an important molecule for surface-enhanced Raman spectroscopy (SERS). It can strongly interact with metallic SERS substrates and produce very strong SERS signals. It is a molecule that has often been used for mechanistic studies of the SERS mechanism as the photon-driven charge transfer (CT) mechanism is believed to be present for this molecule. Recently, a hot debate over the SERS behavior of PATP was triggered by our finding that PATP can be oxidatively transformed into 4,4'-dimercaptoazobenzene (DMAB), which gives a SERS spectra of so-called "b2 modes". In this perspective, we will give a general overview of the SERS mechanism and the current status of SERS studies on PATP. We will then demonstrate with our experimental and theoretical evidence that it is DMAB which contributes to the characteristic SERS behavior in the SERS spectra of PATP and analyze some important experimental phenomena in the framework of the surface reaction instead of the contribution "b2 modes". We will then point out the existing challenges of the present system. A clear understanding of the reaction mechanism for nitrobenzene or aromatic benzene will be important to not only understand the SERS mechanism but to also provide an economic way of producing azo dyes with a very high selectivity and conversion rate.
Residue frequencies and pairing preferences at protein-protein interfaces.
Glaser, F; Steinberg, D M; Vakser, I A; Ben-Tal, N
2001-05-01
We used a nonredundant set of 621 protein-protein interfaces of known high-resolution structure to derive residue composition and residue-residue contact preferences. The residue composition at the interfaces, in entire proteins and in whole genomes correlates well, indicating the statistical strength of the data set. Differences between amino acid distributions were observed for interfaces with buried surface area of less than 1,000 A(2) versus interfaces with area of more than 5,000 A(2). Hydrophobic residues were abundant in large interfaces while polar residues were more abundant in small interfaces. The largest residue-residue preferences at the interface were recorded for interactions between pairs of large hydrophobic residues, such as Trp and Leu, and the smallest preferences for pairs of small residues, such as Gly and Ala. On average, contacts between pairs of hydrophobic and polar residues were unfavorable, and the charged residues tended to pair subject to charge complementarity, in agreement with previous reports. A bootstrap procedure, lacking from previous studies, was used for error estimation. It showed that the statistical errors in the set of pairing preferences are generally small; the average standard error is approximately 0.2, i.e., about 8% of the average value of the pairwise index (2.9). However, for a few pairs (e.g., Ser-Ser and Glu-Asp) the standard error is larger in magnitude than the pairing index, which makes it impossible to tell whether contact formation is favorable or unfavorable. The results are interpreted using physicochemical factors and their implications for the energetics of complex formation and for protein docking are discussed. Proteins 2001;43:89-102. Copyright 2001 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Celik, Cihangir
Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law, 1965, which states the number of transistors in a given space would double every two years. The most available memory architectures today have submicrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply, and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in the micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error . Contrary to SEU, Single Event Latchup (SEL), Single Event Gate Rapture (SEGR), or Single Event Burnout (SEB) they have permanent effects on the device operation and a system reset or recovery is needed to return to proper operations. The rate at which a device or system encounters soft errors is defined as Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in the system operation. Decreasing 10B content (20%of natural boron) in the natural boron of Borophosphosilicate glass (BPSG) layers that are conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B (n,alpha) 7Li reaction products. Both of the particles produced have the capability of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner of the semiconductor industry can provide a new neutron detection system based on the SERs in the semiconductor memories. By investigating the soft error mechanisms in the available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert all memory using intelligent systems into portable, power efficient, directiondependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers to the semiconductor memory architectures. This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in the semiconductor memories to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with the neutron flux and memory supply voltage. Measurement results show that soft errors in the memories increase proportionally with the neutron flux, whereas they decrease with increasing the supply voltages. NISC design considerations include the effects of device scaling, 10B content in the BPSG layer, incoming neutron energy, and critical charge of the node for this dissertation. NISCSAT simulations were performed with various memory node models to account these effects. Device scaling simulations showed that any further increase in the thickness of the BPSG layer beyond 2 mum causes self-shielding of the incoming neutrons due to the BPSG layer and results in lower detection efficiencies. Moreover, if the BPSG layer is located more than 4 mum apart from the depletion region in the node, there are no soft errors in the node due to the fact that both of the reaction products have lower ranges in the silicon or any possible node layers. Calculation results regarding the critical charge indicated that the mean charge deposition of the reaction products in the sensitive volume of the node is about 15 fC. It is evident that the NISC design should have a memory architecture with a critical charge of 15 fC or less to obtain higher detection efficiencies. Moreover, the sensitive volume should be placed in close proximity to the BPSG layers so that its location would be within the range of alpha and 7Li particles. Results showed that the distance between the BPSG layer and the sensitive volume should be less than 2 mum to increase the detection efficiency of the NISC. Incoming neutron energy was also investigated by simulations and the results obtained from these simulations showed that NISC neutron detection efficiency is related with the neutron cross-sections of 10B (n,alpha) 7Li reaction, e.g., ratio of the thermal (0.0253 eV) to fast (2 MeV) neutron detection efficiencies is approximately equal to 8000:1. Environmental conditions and their effects on the NISC performance were also studied in this research. Cosmic rays were modeled and simulated via NISCSAT to investigate detection reliability of the NISC. Simulation results show that cosmic rays account for less than 2 % of the soft errors for the thermal neutron detection. On the other hand, fast neutron detection by the NISC, which already has a poor efficiency due to the low neutron cross-sections, becomes almost impossible at higher altitudes where the cosmic ray fluxes and their energies are higher. NISCSAT simulations regarding soft error dependency of the NISC for temperature and electromagnetic fields show that there are no significant effects in the NISC detection efficiency. Furthermore, the detection efficiency of the NISC decreases with both air humidity and use of moderators since the incoming neutrons scatter away before reaching the memory surface.
NASA Astrophysics Data System (ADS)
Guo, Jia; Xu, Shicai; Liu, Xiaoyun; Li, Zhe; Hu, Litao; Li, Zhen; Chen, Peixi; Ma, Yong; Jiang, Shouzhen; Ning, Tingyin
2017-02-01
In our work, few layers graphene oxide (GO) were directly synthesized on Ag nanoparticles (AgNPs) by spin-coating method to fabricate a GO-AgNPs hybrid structure on a pyramidal silicon (PSi) substrate for surface-enhanced Raman scattering (SERS). The GO-AgNPs-PSi substrate showed excellent Raman enhancement effect, the minimum detected concentration for Rhodamine 6G (R6G) can reach 10-12 M, which is one order of magnitude lower than the AgNPs-PSi substrate and two order of magnitude lower than the GO-AgNPs-flat-Si substrate. The linear fit calibration curve with error bars is presented and the value of R2 of 612 and 773 cm-1 can reach 0.986 and 0.980, respectively. The excellent linear response between the Raman intensity and R6G concentrations prove that the prepared GO-AgNPs-PSi substrates can serve as good SERS substrate for molecule detection. The maximum deviations of SERS intensities from 20 positions of the GO-AgNPs-PSi substrate are less than 8%, revealing the high homogeneity of the SERS substrate. The excellent homogeneity of the enhanced Raman signals can be attributed to well-separated pyramid arrays of PSi, the uniform morphology of AgNPs and multi-functions of GO layer. Besides, the uniform GO film can effectively protect AgNPs from oxidation and endow the hybrid system a good stability and long lifetime. This GO-AgNPs-PSi substrate may provide a new way toward practical applications for the ultrasensitive and label-free SERS detection in areas of medicine, food safety and biotechnology.
Second Language Acquisition of Variable Structures in Spanish by Portuguese Speakers
ERIC Educational Resources Information Center
Geeslin, Kimberly L.; Guijarro-Fuentes, Pedro
2006-01-01
This study provides a model for examining the second language (L2) acquisition of structures where the first language (L1) and (L2) are similar, and where native speaker (NS) use varies. Research on the copula contrast in Spanish ("ser" and "estar" mean "to be") has shown that an assessment of learner choice cannot rely on an error analysis…
Rolo, Dora; Fenoll, Asunción; Fontanals, Dionísia; Larrosa, Nieves; Giménez, Montserrat; Grau, Immaculada; Pallarés, Román; Liñares, Josefina; Ardanuy, Carmen
2013-11-01
In this study, we analyzed the clinical and molecular epidemiology of invasive serotype 5 (Ser5) pneumococcal isolates in four teaching hospitals in the Barcelona, Spain, area (from 1997 to 2011). Among 5,093 invasive pneumococcal isolates collected, 134 (2.6%) Ser5 isolates were detected. Although the overall incidence of Ser5-related invasive pneumococcal disease (IPD) was low (0.25 cases/100,000 inhabitants), three incidence peaks were detected: 0.63/100,000 in 1999, 1.15/100,000 in 2005, and 0.37/100,000 in 2009. The rates of Ser5 IPD were higher among young adults (18 to 64 years old) and older adults (>64 years old) in the first two peaks, whereas they were higher among children in 2009. The majority (88.8%) of the patients presented with pneumonia. Comorbid conditions were present in young adults (47.6%) and older adults (78.7%), the most common comorbid conditions being chronic obstructive pulmonary disease (20.6% and 38.3%, respectively) and cardiovascular diseases (11.1% and 38.3%, respectively). The mortality rates were higher among older adults (8.5%). All Ser5 pneumococci tested were fully susceptible to penicillin, cefotaxime, erythromycin, and ciprofloxacin. The resistance rates were 48.5% for co-trimoxazole, 6.7% for chloramphenicol, and 6% for tetracycline. Two major related sequence types (STs), ST1223 (n = 65) and ST289 (n = 61), were detected. The Colombia(5)-ST289 clone was responsible for all the cases in the Ser5 outbreak in 1999, whereas the ST1223 clone accounted for 73.8% and 61.5% of the isolates in 2005 and 2009, respectively. Ser5 pneumococci are a frequent cause of IPD outbreaks in the community and involve children and adults with or without comorbidities. The implementation of the new pneumococcal conjugated vaccines (PCV10 and PCV13) might prevent such outbreaks.
Characteristics of Refractive Errors in a Population of Adults in the Central Region of Poland
Jurowski, Piotr; Grzybowski, Andrzej; Smigielski, Janusz
2018-01-01
Background: To investigate the distribution of refractive errors and their characteristics in older adults from a Polish population. Methods: The study design was a cross-sectional study. A total of 1107 men and women were interviewed and underwent detailed ophthalmic examinations, 998 subjects underwent refraction. Myopia was defined as spherical equivalent (SER) refraction ≤−0.5 dioptres (D) and hyperopia was defined as SER ≥+0.5 dioptres (D). Results: Among those who were refracted the distribution of myopia and hyperopia was 24.1% (95% CI 21.4–26.7) and 37.5% (95% CI 34.5–40.5), respectively. Myopia decreased from 28.7% in subjects aged 35–59 years to 19.3% in those aged 60 years or older and hyperopia increased from 21.8% at 35–59 years of age to 53.3% in subjects aged ≥60 years. Multiple regression analysis showed decreasing age (OR 0.98, 95% CI 0.96–1.00), female gender (OR 1.87, 95% CI 1.18–2.95) and presence of cataract (OR 2.40, 95% CI 1.24–4.63) were independent risk factors associated with myopia. Conclusions: The distribution of refractive errors found in our study is similar to those reported in other Caucasian populations and differs from Asian populations. Myopia was positively associated with younger age, female gender and presence of cataract. PMID:29316688
Kahan, Tracey L; Claudatos, Stephanie
2016-04-01
Self-ratings of dream experiences were obtained from 144 college women for 788 dreams, using the Subjective Experiences Rating Scale (SERS). Consistent with past studies, dreams were characterized by a greater prevalence of vision, audition, and movement than smell, touch, or taste, by both positive and negative emotion, and by a range of cognitive processes. A Principal Components Analysis of SERS ratings revealed ten subscales: four sensory, three affective, one cognitive, and two structural (events/actions, locations). Correlations (Pearson r) among subscale means showed a stronger relationship among the process-oriented features (sensory, cognitive, affective) than between the process-oriented and content-centered (structural) features--a pattern predicted from past research (e.g., Bulkeley & Kahan, 2008). Notably, cognition and positive emotion were associated with a greater number of other phenomenal features than was negative emotion; these findings are consistent with studies of the qualitative features of waking autobiographical memory (e.g., Fredrickson, 2001). Copyright © 2016 Elsevier Inc. All rights reserved.
Kuster, Diederik W D; Sequeira, Vasco; Najafi, Aref; Boontje, Nicky M; Wijnker, Paul J M; Witjas-Paalberends, E Rosalie; Marston, Steven B; Dos Remedios, Cristobal G; Carrier, Lucie; Demmers, Jeroen A A; Redwood, Charles; Sadayappan, Sakthivel; van der Velden, Jolanda
2013-02-15
Cardiac myosin-binding protein C (cMyBP-C) regulates cross-bridge cycling kinetics and, thereby, fine-tunes the rate of cardiac muscle contraction and relaxation. Its effects on cardiac kinetics are modified by phosphorylation. Three phosphorylation sites (Ser275, Ser284, and Ser304) have been identified in vivo, all located in the cardiac-specific M-domain of cMyBP-C. However, recent work has shown that up to 4 phosphate groups are present in human cMyBP-C. To identify and characterize additional phosphorylation sites in human cMyBP-C. Cardiac MyBP-C was semipurified from human heart tissue. Tandem mass spectrometry analysis identified a novel phosphorylation site on serine 133 in the proline-alanine-rich linker sequence between the C0 and C1 domains of cMyBP-C. Unlike the known sites, Ser133 was not a target of protein kinase A. In silico kinase prediction revealed glycogen synthase kinase 3β (GSK3β) as the most likely kinase to phosphorylate Ser133. In vitro incubation of the C0C2 fragment of cMyBP-C with GSK3β showed phosphorylation on Ser133. In addition, GSK3β phosphorylated Ser304, although the degree of phosphorylation was less compared with protein kinase A-induced phosphorylation at Ser304. GSK3β treatment of single membrane-permeabilized human cardiomyocytes significantly enhanced the maximal rate of tension redevelopment. GSK3β phosphorylates cMyBP-C on a novel site, which is positioned in the proline-alanine-rich region and increases kinetics of force development, suggesting a noncanonical role for GSK3β at the sarcomere level. Phosphorylation of Ser133 in the linker domain of cMyBP-C may be a novel mechanism to regulate sarcomere kinetics.
Lee, Sora; Tumolo, Jessica M; Ehlinger, Aaron C; Jernigan, Kristin K; Qualls-Histed, Susan J; Hsu, Pi-Chiang; McDonald, W Hayes; Chazin, Walter J
2017-01-01
Despite its central role in protein degradation little is known about the molecular mechanisms that sense, maintain, and regulate steady state concentration of ubiquitin in the cell. Here, we describe a novel mechanism for regulation of ubiquitin homeostasis that is mediated by phosphorylation of ubiquitin at the Ser57 position. We find that loss of Ppz phosphatase activity leads to defects in ubiquitin homeostasis that are at least partially attributable to elevated levels of Ser57 phosphorylated ubiquitin. Phosphomimetic mutation at the Ser57 position of ubiquitin conferred increased rates of endocytic trafficking and ubiquitin turnover. These phenotypes are associated with bypass of recognition by endosome-localized deubiquitylases - including Doa4 which is critical for regulation of ubiquitin recycling. Thus, ubiquitin homeostasis is significantly impacted by the rate of ubiquitin flux through the endocytic pathway and by signaling pathways that converge on ubiquitin itself to determine whether it is recycled or degraded in the vacuole. PMID:29130884
Kim, H-J; Kim, J-H; Oh, H-J; Oh, D-K
2006-07-01
Characterization of a mutated Geobacillus stearothermophilus L-arabinose isomerase used to increase the production rate of D-tagatose. A mutated gene was obtained by an error-prone polymerase chain reaction using L-arabinose isomerase gene from G. stearothermophilus as a template and the gene was expressed in Escherichia coli. The expressed mutated L-arabinose isomerase exhibited the change of three amino acids (Met322-->Val, Ser393-->Thr, and Val408-->Ala), compared with the wild-type enzyme and was then purified to homogeneity. The mutated enzyme had a maximum galactose isomerization activity at pH 8.0, 65 degrees C, and 1.0 mM Co2+, while the wild-type enzyme had a maximum activity at pH 8.0, 60 degrees C, and 1.0-mM Mn2+. The mutated L-arabinose isomerase exhibited increases in D-galactose isomerization activity, optimum temperature, catalytic efficiency (kcat/Km) for D-galactose, and the production rate of D-tagatose from D-galactose. The mutated L-arabinose isomerase from G. stearothermophilus is valuable for the commercial production of D-tagatose. This work contributes knowledge on the characterization of a mutated L-arabinose isomerase, and allows an increased production rate for D-tagatose from D-galactose using the mutated enzyme.
Kazlauskaite, Agne; Martínez-Torres, R Julio; Wilkie, Scott; Kumar, Atul; Peltier, Julien; Gonzalez, Alba; Johnson, Clare; Zhang, Jinwei; Hope, Anthony G; Peggie, Mark; Trost, Matthias; van Aalten, Daan MF; Alessi, Dario R; Prescott, Alan R; Knebel, Axel; Walden, Helen; Muqit, Miratul MK
2015-01-01
Mutations in the mitochondrial protein kinase PINK1 are associated with autosomal recessive Parkinson disease (PD). We and other groups have reported that PINK1 activates Parkin E3 ligase activity both directly via phosphorylation of Parkin serine 65 (Ser65)—which lies within its ubiquitin-like domain (Ubl)—and indirectly through phosphorylation of ubiquitin at Ser65. How Ser65-phosphorylated ubiquitin (ubiquitinPhospho-Ser65) contributes to Parkin activation is currently unknown. Here, we demonstrate that ubiquitinPhospho-Ser65 binding to Parkin dramatically increases the rate and stoichiometry of Parkin phosphorylation at Ser65 by PINK1 in vitro. Analysis of the Parkin structure, corroborated by site-directed mutagenesis, shows that the conserved His302 and Lys151 residues play a critical role in binding of ubiquitinPhospho-Ser65, thereby promoting Parkin Ser65 phosphorylation and activation of its E3 ligase activity in vitro. Mutation of His302 markedly inhibits Parkin Ser65 phosphorylation at the mitochondria, which is associated with a marked reduction in its E3 ligase activity following mitochondrial depolarisation. We show that the binding of ubiquitinPhospho-Ser65 to Parkin disrupts the interaction between the Ubl domain and C-terminal region, thereby increasing the accessibility of Parkin Ser65. Finally, purified Parkin maximally phosphorylated at Ser65 in vitro cannot be further activated by the addition of ubiquitinPhospho-Ser65. Our results thus suggest that a major role of ubiquitinPhospho-Ser65 is to promote PINK1-mediated phosphorylation of Parkin at Ser65, leading to maximal activation of Parkin E3 ligase activity. His302 and Lys151 are likely to line a phospho-Ser65-binding pocket on the surface of Parkin that is critical for the ubiquitinPhospho-Ser65 interaction. This study provides new mechanistic insights into Parkin activation by ubiquitinPhospho-Ser65, which could aid in the development of Parkin activators that mimic the effect of ubiquitinPhospho-Ser65. PMID:26116755
Kazlauskaite, Agne; Martínez-Torres, R Julio; Wilkie, Scott; Kumar, Atul; Peltier, Julien; Gonzalez, Alba; Johnson, Clare; Zhang, Jinwei; Hope, Anthony G; Peggie, Mark; Trost, Matthias; van Aalten, Daan M F; Alessi, Dario R; Prescott, Alan R; Knebel, Axel; Walden, Helen; Muqit, Miratul M K
2015-08-01
Mutations in the mitochondrial protein kinase PINK1 are associated with autosomal recessive Parkinson disease (PD). We and other groups have reported that PINK1 activates Parkin E3 ligase activity both directly via phosphorylation of Parkin serine 65 (Ser(65))--which lies within its ubiquitin-like domain (Ubl)--and indirectly through phosphorylation of ubiquitin at Ser(65). How Ser(65)-phosphorylated ubiquitin (ubiquitin(Phospho-Ser65)) contributes to Parkin activation is currently unknown. Here, we demonstrate that ubiquitin(Phospho-Ser65) binding to Parkin dramatically increases the rate and stoichiometry of Parkin phosphorylation at Ser(65) by PINK1 in vitro. Analysis of the Parkin structure, corroborated by site-directed mutagenesis, shows that the conserved His302 and Lys151 residues play a critical role in binding of ubiquitin(Phospho-Ser65), thereby promoting Parkin Ser(65) phosphorylation and activation of its E3 ligase activity in vitro. Mutation of His302 markedly inhibits Parkin Ser(65) phosphorylation at the mitochondria, which is associated with a marked reduction in its E3 ligase activity following mitochondrial depolarisation. We show that the binding of ubiquitin(Phospho-Ser65) to Parkin disrupts the interaction between the Ubl domain and C-terminal region, thereby increasing the accessibility of Parkin Ser(65). Finally, purified Parkin maximally phosphorylated at Ser(65) in vitro cannot be further activated by the addition of ubiquitin(Phospho-Ser65). Our results thus suggest that a major role of ubiquitin(Phospho-Ser65) is to promote PINK1-mediated phosphorylation of Parkin at Ser(65), leading to maximal activation of Parkin E3 ligase activity. His302 and Lys151 are likely to line a phospho-Ser(65)-binding pocket on the surface of Parkin that is critical for the ubiquitin(Phospho-Ser65) interaction. This study provides new mechanistic insights into Parkin activation by ubiquitin(Phospho-Ser65), which could aid in the development of Parkin activators that mimic the effect of ubiquitin(Phospho-Ser65). © 2015 The Authors. Published under the terms of the CC BY 4.0 license.
ERIC Educational Resources Information Center
Allalouf, Avi
2007-01-01
There is significant potential for error in long production processes that consist of sequential stages, each of which is heavily dependent on the previous stage, such as the SER (Scoring, Equating, and Reporting) process. Quality control procedures are required in order to monitor this process and to reduce the number of mistakes to a minimum. In…
Plasmonic nanohole array for enhancing the SERS signal of a single layer of graphene in water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahigir, Amirreza; Chang, Te-Wei; Behnam, Ashkan
In this study, we numerically design and experimentally test a SERS-active substrate for enhancing the SERS signal of a single layer of graphene (SLG) in water. The SLG is placed on top of an array of silver-covered nanoholes in a polymer and is covered with water. Here we report a large enhancement of up to 2×10 5 in the SERS signal of the SLG on the patterned plasmonic nanostructure for a 532nm excitation laser wavelength. We provide a detailed study of the light-graphene interactions by investigating the optical absorption in the SLG, the density of optical states at the locationmore » of the SLG, and the extraction efficiency of the SERS signal of the SLG. Our numerical calculations of both the excitation field and the emission rate enhancements support the experimental results. We find that the enhancement is due to the increase in the confinement of electromagnetic fields on the location of the SLG that results in enhanced light absorption in the graphene at the excitation wavelength. We also find that water droplets increase the density of optical radiative states at the location of the SLG, leading to enhanced spontaneous emission rate of graphene at its Raman emission wavelengths.« less
Plasmonic nanohole array for enhancing the SERS signal of a single layer of graphene in water
Mahigir, Amirreza; Chang, Te-Wei; Behnam, Ashkan; ...
2017-10-25
In this study, we numerically design and experimentally test a SERS-active substrate for enhancing the SERS signal of a single layer of graphene (SLG) in water. The SLG is placed on top of an array of silver-covered nanoholes in a polymer and is covered with water. Here we report a large enhancement of up to 2×10 5 in the SERS signal of the SLG on the patterned plasmonic nanostructure for a 532nm excitation laser wavelength. We provide a detailed study of the light-graphene interactions by investigating the optical absorption in the SLG, the density of optical states at the locationmore » of the SLG, and the extraction efficiency of the SERS signal of the SLG. Our numerical calculations of both the excitation field and the emission rate enhancements support the experimental results. We find that the enhancement is due to the increase in the confinement of electromagnetic fields on the location of the SLG that results in enhanced light absorption in the graphene at the excitation wavelength. We also find that water droplets increase the density of optical radiative states at the location of the SLG, leading to enhanced spontaneous emission rate of graphene at its Raman emission wavelengths.« less
Ultrasensitive sliver nanorods array SERS sensor for mercury ions.
Song, Chunyuan; Yang, Boyue; Zhu, Yu; Yang, Yanjun; Wang, Lianhui
2017-01-15
With years of outrageous mercury emissions, there is an urgent need to develop convenient and sensitive methods for detecting mercury ions in response to increasingly serious mercury pollution in water. In the present work, a portable, ultrasensitive SERS sensor is proposed and utilized for detecting trace mercury ions in water. The SERS sensor is prepared on an excellent sliver nanorods array SERS substrate by immobilizing T-component oligonucleotide probes labeled with dye on the 3'-end and -SH on the 5'-end. The SERS sensor responses to the specific chemical bonding between thymine and mercury ions, which causes the previous flexible single strand of oligonucleotide probe changing into rigid and upright double chain structure. Such change in the structure drives the dyes far away from the excellent SERS substrate and results in a SERS signal attenuation of the dye. Therefore, by monitoring the decay of SERS signal of the dye, mercury ions in water can be detected qualitatively and quantitatively. The experimental results indicate that the proposed optimal SERS sensor owns a linear response with wide detecting range from 1pM to 1μM, and a detection limit of 0.16pM is obtained. In addition, the SERS sensor demonstrates good specificity for Hg 2+ , which can accurately identify trace mercury ions from a mixture of ten kinds of other ions. The SERS sensor has been further executed to analyze the trace mercury ions in tap water and lake water respectively, and good recovery rates are obtained for sensing both kinds of water. With its high selectivity and good portability, the ultrasensitive SERS sensor is expected to be a promising candidate for discriminating mercury ions in the fields of environmental monitoring and food safety. Copyright © 2016 Elsevier B.V. All rights reserved.
Lin, Donghai; Qin, Tianqi; Wang, Yunqing; Sun, Xiuyan; Chen, Lingxin
2014-01-22
As novel optical nanoprobes, surface-enhanced Raman scattering (SERS) tags have drawn growing interests in the application of biomedical imaging and phototherapies. Herein, we demonstrated a novel in situ synthesis strategy for GO wrapped gold nanocluster SERS tags by using a tris(2,2'-bipyridyl)ruthenium(II) chloride (Rubpy)/GO nanohybrid as a complex Raman reporter, inspired by the role of GO as an artificial receptor for various dyes. The introduction of GO in the synthesis procedure provided systematic solutions for controlling several key parameters of SERS tags, including reproducibility, sensitivity, and colloidal and signal stability. An additional interesting thermal-sensitive SERS property (SERS intensity decreased upon increasing the temperature) was also achieved due to the heat-induced release/redistribution of reporter molecules adsorbed on GO. Combining the synergic effect of these features, we further fabricated multifunctional, aldehyde group conjugated Au@Rubpy/GO SERS tags for optical labeling and photothermal ablation of bacteria. Sensitive Raman imaging of gram-positive (Staphylococcus aureus) and gram-negative (Escherichia coli) bacteria could be realized, and satisfactory photothermal killing efficacy for both bacteria was achieved. Our results also demonstrated the correlation among the SERS intensity decrease ratio, bacteria survival rate, and the terminal temperature of the tag-bacteria suspension, showing the possibility to use SERS assay to measure antibacterial response during the photothermal process using this tag.
Radzol, A R M; Lee, Khuan Y; Mansor, W
2013-01-01
SERS is a form of Raman spectroscopy that is enhanced with nano-sensing chip as substrate. It can yield distinct biochemical fingerprint for molecule of solids, liquids and gases. Vice versa, it can be used to identify unknown molecule. It has further advantage of being non-invasive, non-contact and cheap, as compared to other existing laboratory based techniques. NS1 has been clinically accepted as an alternative biomarker to IgM in diagnosing viral diseases carried by virus of flaviviridae. Its presence in the blood serum at febrile stage of the flavivirus infection has been proven. Being an antigen, it allows early detection that can help to reduce the mortality rate. This paper proposes SERS as a technique for detection of NS1 from its scattering spectrum. Contribution from our work so far has never been reported. From our experiments, it is found that NS1 protein is Raman active. Its spectrum exhibits five prominent peaks at Raman shift of 548, 1012, 1180, 1540 and 1650 cm(-1). Of these, peak at 1012 cm(-1) scales the highest intensity. It is singled out as the peak to fingerprint the NS1 protein. This is because its presence is verified by the ring breathing vibration of the benzene ring structure side chain molecule. The characteristic peak is found to vary in proportion to concentration. It is found that for a 99% change in concentration, a 96.7% change in intensity is incurred. This yields a high sensitivity of about one a.u. per ppm. Further investigation from the characterization graph shows a correlation coefficient of 0.9978 and a standard error estimation of 0.02782, which strongly suggests a linear relationship between the concentration and characteristic peak intensity of NS1. Our finding produces favorable evidence to the use of SERS technique for detection of NS1 protein for early detection of flavivirus infected diseases with gold substrate.
Seidel, Gerald; Diel, Marco; Fuchsbauer, Norbert; Hillen, Wolfgang
2005-05-01
The phosphoproteins HPrSerP and CrhP are the main effectors for CcpA-mediated carbon catabolite regulation (CCR) in Bacillus subtilis. Complexes of CcpA with HPrSerP or CrhP regulate genes by binding to the catabolite responsive elements (cre). We present a quantitative analysis of HPrSerP and CrhP interaction with CcpA by surface plasmon resonance (SPR) revealing small and similar equilibrium constants of 4.8 +/- 0.4 microm for HPrSerP-CcpA and 19.1 +/- 2.5 microm for CrhP-CcpA complex dissociation. Forty millimolar fructose-1,6-bisphosphate (FBP) or glucose-6-phosphate (Glc6-P) increases the affinity of HPrSerP to CcpA at least twofold, but have no effect on CrhP-CcpA binding. Saturation of binding of CcpA to cre as studied by fluorescence and SPR is dependent on 50 microm of HPrSerP or > 200 microm CrhP. The rate constants of HPrSerP-CcpA-cre complex formation are k(a) = 3 +/- 1 x 10(6) m(-1).s(-1) and k(d) = 2.0 +/- 0.4 x 10(-3).s(-1), resulting in a K(D) of 0.6 +/- 0.3 nm. FBP and Glc6-P stimulate CcpA-HPrSerP but not CcpA-CrhP binding to cre. Maximal HPrSerP-CcpA-cre complex formation in the presence of 10 mm FBP requires about 10-fold less HPrSerP. These data suggest a specific role for FBP and Glc6-P in enhancing only HPrSerP-mediated CCR.
NASA Astrophysics Data System (ADS)
Kemmlein, Sabine; Hahn, Oliver; Jann, Oliver
The emissions of selected flame retardants were measured in 1- and 0.02-m 3 emission test chambers and 0.001-m 3 emission test cells. Four product groups were of interest: insulating materials, assembly foam, upholstery/mattresses, and electronics equipment. The experiments were performed under constant environmental conditions (23°C, 50% RH) using a fixed sample surface area and controlled air flow rates. Tris (2-chloro-isopropyl)phosphate (TCPP) was observed to be one of the most commonly emitted organophosphate flame retardants in polyurethane foam applications. Depending on the sample type, area-specific emission rates (SER a) of TCPP varied between 20 ng m -2 h -1 and 140 μg m -2 h -1. The emissions from electronic devices were measured at 60°C to simulate operating conditions. Under these conditions, unit specific emission rates (SER u) of organophosphates were determined to be 10-85 ng unit -1 h -1. Increasing the temperature increased the emission of several flame retardants by up to a factor of 500. The results presented in this paper indicate that emissions of several brominated and organophosphate flame retardants are measurable. Polybrominated diphenylethers exhibited an SER a of between 0.2 and 6.6 ng m -2 h -1 and an SER u of between 0.6 and 14.2 ng unit -1 h -1. Because of sink effects, i.e., sorption to chamber components, the emission test chambers and cells used in this study have limited utility for substances low vapour pressures, especially the highly brominated compounds; hexabromocyclododecane had an SER a of between 0.1 and 29 ng m -2 h -1 and decabromodiphenylether was not detectable at all.
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
SEE rate estimation based on diffusion approximation of charge collection
NASA Astrophysics Data System (ADS)
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing method's validity when applied to scaled technology nodes. One of such issues is uncertainty in parameters extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on diffusion approximation of the charge collection by an IC element and geometrical interpretation of SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters which are uniquely determined from the experimental data for normal incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and, thus, greatly simplifies calculation procedure and increases the robustness of the forecast.
Lindgren, N; Xu, Z Q; Lindskog, M; Herrera-Marschitz, M; Goiny, M; Haycock, J; Goldstein, M; Hökfelt, T; Fisone, G
2000-06-01
The activity of tyrosine hydroxylase, the rate-limiting enzyme in the biosynthesis of dopamine, is stimulated by phosphorylation. In this study, we examined the effects of activation of NMDA receptors on the state of phosphorylation and activity of tyrosine hydroxylase in rat striatal slices. NMDA produced a time-and concentration-dependent increase in the levels of phospho-Ser(19)-tyrosine hydroxylase in nigrostriatal nerve terminals. This increase was not associated with any changes in the basal activity of tyrosine hydroxylase, measured as DOPA accumulation. Forskolin, an activator of adenylyl cyclase, stimulated tyrosine hydroxylase phosphorylation at Ser(40) and caused a significant increase in DOPA accumulation. NMDA reduced forskolin-mediated increases in both Ser(40) phosphorylation and DOPA accumulation. In addition, NMDA reduced the increase in phospho-Ser(40)-tyrosine hydroxylase produced by okadaic acid, an inhibitor of protein phosphatase 1 and 2A, but not by a cyclic AMP analogue, 8-bromo-cyclic AMP. These results indicate that, in the striatum, glutamate decreases tyrosine hydroxylase phosphorylation at Ser(40) via activation of NMDA receptors by reducing cyclic AMP production. They also provide a mechanism for the demonstrated ability of NMDA to decrease tyrosine hydroxylase activity and dopamine synthesis.
Herman, Krisztian; Szabó, László; Leopold, Loredana F; Chiş, Vasile; Leopold, Nicolae
2011-05-01
A new, simple, and effective approach for multianalyte sequential surface-enhanced Raman scattering (SERS) detection in a flow cell is reported. The silver substrate was prepared in situ by laser-induced photochemical synthesis. By focusing the laser on the 320 μm inner diameter glass capillary at 0.5 ml/min continuous flow of 1 mM silver nitrate and 10 mM sodium citrate mixture, a SERS active silver spot on the inner wall of the glass capillary was prepared in a few seconds. The test analytes, dacarbazine, 4-(2-pyridylazo)resorcinol (PAR) complex with Cu(II), and amoxicillin, were sequentially injected into the flow cell. Each analyte was adsorbed to the silver surface, enabling the recording of high intensity SERS spectra even at 2 s integration times, followed by desorption from the silver surface and being washed away from the capillary. Before and after each analyte passed the detection window, citrate background spectra were recorded, and thus, no "memory effects" perturbed the SERS detection. A good reproducibility of the SERS spectra obtained under flow conditions was observed. The laser-induced photochemically synthesized silver substrate enables high Raman enhancement, is characterized by fast preparation with a high success rate, and represents a valuable alternative for silver colloids as SERS substrate in flow approaches.
Glycosaminoglycan Chain of Dentin Sialoprotein Proteoglycan
Zhu, Q.; Sun, Y.; Prasad, M.; Wang, X.; Yamoah, A.K.; Li, Y.; Feng, J.; Qin, C.
2010-01-01
Dentin sialophosphoprotein (DSPP) is processed into dentin sialoprotein (DSP) and dentin phosphoprotein. A molecular variant of rat DSP, referred to as “HMW-DSP”, has been speculated to be a proteoglycan form of DSP. To determine if HMW-DSP is the proteoglycan form of DSP and to identify the glycosaminoglycan side-chain attachment site(s), we further characterized HMW-DSP. Chondroitinase ABC treatment reduced the migration rate for portions of rat HMW-DSP to the level of DSP. Disaccharide analysis showed that rat HMW-DSP contains glycosaminoglycan chains made of chondroitin-4-sulfate and has an average of 31-32 disaccharides/mol. These observations confirmed that HMW-DSP is the proteoglycan form of DSP (renamed “DSP-PG”). Edman degradation and mass spectrometric analyses of tryptic peptides from rat DSP-PG, along with substitution analyses of candidate Ser residues in mouse DSPP, confirmed that 2 glycosaminoglycan chains are attached to Ser241 and Ser253 in the rat, or Ser242 and Ser254 in the mouse DSPP sequence. PMID:20400719
Glycosaminoglycan chain of dentin sialoprotein proteoglycan.
Zhu, Q; Sun, Y; Prasad, M; Wang, X; Yamoah, A K; Li, Y; Feng, J; Qin, C
2010-08-01
Dentin sialophosphoprotein (DSPP) is processed into dentin sialoprotein (DSP) and dentin phosphoprotein. A molecular variant of rat DSP, referred to as "HMW-DSP", has been speculated to be a proteoglycan form of DSP. To determine if HMW-DSP is the proteoglycan form of DSP and to identify the glycosaminoglycan side-chain attachment site(s), we further characterized HMW-DSP. Chondroitinase ABC treatment reduced the migration rate for portions of rat HMW-DSP to the level of DSP. Disaccharide analysis showed that rat HMW-DSP contains glycosaminoglycan chains made of chondroitin-4-sulfate and has an average of 31-32 disaccharides/mol. These observations confirmed that HMW-DSP is the proteoglycan form of DSP (renamed "DSP-PG"). Edman degradation and mass spectrometric analyses of tryptic peptides from rat DSP-PG, along with substitution analyses of candidate Ser residues in mouse DSPP, confirmed that 2 glycosaminoglycan chains are attached to Ser(241) and Ser(253) in the rat, or Ser(242) and Ser(254) in the mouse DSPP sequence.
Millimeter-Sized Suspended Plasmonic Nanohole Arrays for Surface-Tension-Driven Flow-Through SERS
2015-01-01
We present metallic nanohole arrays fabricated on suspended membranes as an optofluidic substrate. Millimeter-sized suspended nanohole arrays were fabricated using nanoimprint lithography. We demonstrate refractive-index-based tuning of the optical spectra using a sucrose solution for the optimization of SERS signal intensity, leading to a Raman enhancement factor of 107. Furthermore, compared to dead-ended nanohole arrays, suspended nanohole arrays capable of flow-through detection increased the measured SERS signal intensity by 50 times. For directed transport of analytes, we present a novel methodology utilizing surface tension to generate spontaneous flow through the nanoholes with flow rates of 1 μL/min, obviating the need for external pumps or microfluidic interconnects. Using this method for SERS, we obtained a 50 times higher signal as compared to diffusion-limited transport and could detect 100 pM 4-mercaptopyridine. The suspended nanohole substrates presented herein possess a uniform and reproducible geometry and show the potential for improved analyte transport and SERS detection. PMID:25678744
Objects, Events and "to Be" Verbs in Spanish--An ERP Study of the Syntax-Semantics Interface
ERIC Educational Resources Information Center
Leone-Fernandez, Barbara; Molinaro, Nicola; Carreiras, Manuel; Barber, Horacio A.
2012-01-01
In Spanish, objects and events at subject position constrain the selection of different forms of the auxiliary verb "to be": locative predicates about objects require "estar en", while those relating to events require "ser en", both translatable as "to be in". Subjective ratings showed that while the "object + ser + en" is considered as incorrect,…
PHOTOMETRIC PROPERTIES FOR SELECTED ALGOL-TYPE BINARIES. II. AO SERPENTIS AND V338 HERCULIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y.-G.; Dai, H.-F.; Hu, S.-M.
2010-04-15
We present the first multiband photometry for the semidetached eclipsing binary AO Serpentis, observed on seven nights between 2009 April and July at the Weihai Observatory of Shandong University. By using the 2003 version of the Wilson-Devinney code, the photometric solutions of AO Ser and a similar object V338 Her were (re)deduced. The spectral types and orbital periods are A2 and P = 0.8793 days for AO Ser, F1V and P = 1.3057 days for V338 Her. The results reveal that two binaries are low mass ratio systems, whose secondary components fill their Roche lobes. The fill-out factors of themore » primary components are f = 58.6% for AO Ser and f = 54.2% for V338 Her, respectively. From the O - C curves of AO Ser and V338 Her, it is discovered that secular period changes with cyclic variations exist. The periods and semiamplitudes are 17.32({+-}0.01) yr and 0.0051({+-}0.0001) days for AO Ser, 29.07({+-}0.04) yr and 0.0116({+-}0.0015) days for V338 Her, respectively. This kind of cyclic oscillation may be attributed to either the light-time effect via an assumed third body or perhaps cyclic magnetic activity on the secondary component. For AO Ser, the long-term period decreases at a rate of dP/dt = -5.35({+-}0.03) x 10{sup -7} days yr{sup -1}, which may be caused by mass and angular momentum loss from the system. Considering the period decreasing, the fill-out factor of the primary for AO Ser will increase and it will finally fill its Roche lobe. Meanwhile, the secular period increase rate for V338 Her is dP/dt = +1.44({+-}0.24) x 10{sup -7} days yr{sup -1}, indicating that mass transfers from the less massive component to the more massive component. This will also cause the fill-out factor of the primary to increase. When the primaries fill their Roche lobes, AO Ser and V338 Her may evolve into contact stars, as predicted by the theory of thermal relaxation oscillations.« less
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
Al Tanoury, Ziad; Schaffner-Reckinger, Elisabeth; Halavatyi, Aliaksandr; Hoffmann, Céline; Moes, Michèle; Hadzic, Ermin; Catillon, Marie; Yatskou, Mikalai; Friederich, Evelyne
2010-01-01
Background Initially detected in leukocytes and cancer cells derived from solid tissues, L-plastin/fimbrin belongs to a large family of actin crosslinkers and is considered as a marker for many cancers. Phosphorylation of L-plastin on residue Ser5 increases its F-actin binding activity and is required for L-plastin-mediated cell invasion. Methodology/Principal Findings To study the kinetics of L-plastin and the impact of L-plastin Ser5 phosphorylation on L-plastin dynamics and actin turn-over in live cells, simian Vero cells were transfected with GFP-coupled WT-L-plastin, Ser5 substitution variants (S5/A, S5/E) or actin and analyzed by fluorescence recovery after photobleaching (FRAP). FRAP data were explored by mathematical modeling to estimate steady-state reaction parameters. We demonstrate that in Vero cell focal adhesions L-plastin undergoes rapid cycles of association/dissociation following a two-binding-state model. Phosphorylation of L-plastin increased its association rates by two-fold, whereas dissociation rates were unaffected. Importantly, L-plastin affected actin turn-over by decreasing the actin dissociation rate by four-fold, increasing thereby the amount of F-actin in the focal adhesions, all these effects being promoted by Ser5 phosphorylation. In MCF-7 breast carcinoma cells, phorbol 12-myristate 13-acetate (PMA) treatment induced L-plastin translocation to de novo actin polymerization sites in ruffling membranes and spike-like structures and highly increased its Ser5 phosphorylation. Both inhibition studies and siRNA knock-down of PKC isozymes pointed to the involvement of the novel PKC-δ isozyme in the PMA-elicited signaling pathway leading to L-plastin Ser5 phosphorylation. Furthermore, the L-plastin contribution to actin dynamics regulation was substantiated by its association with a protein complex comprising cortactin, which is known to be involved in this process. Conclusions/Significance Altogether these findings quantitatively demonstrate for the first time that L-plastin contributes to the fine-tuning of actin turn-over, an activity which is regulated by Ser5 phosphorylation promoting its high affinity binding to the cytoskeleton. In carcinoma cells, PKC-δ signaling pathways appear to link L-plastin phosphorylation to actin polymerization and invasion. PMID:20169155
Smal, Caroline; Vertommen, Didier; Bertrand, Luc; Ntamashimikiro, Sandrine; Rider, Mark H; Van Den Neste, Eric; Bontemps, Françoise
2006-02-24
Deoxycytidine kinase (dCK) catalyzes the rate-limiting step of the deoxyribonucleoside salvage pathway in mammalian cells and plays a key role in the activation of numerous nucleoside analogues used in anti-cancer and antiviral chemotherapy. Although compelling evidence indicated that dCK activity might be regulated by phosphorylation/dephosphorylation, direct demonstration was lacking. Here we showed that dCK overexpressed in HEK 293T cells was labeled after incubating the cells with [32P]orthophosphate. Sorbitol, which was reported to decrease dCK activity, also decreased the labeling of dCK. These results indicated that dCK may exist as a phosphoprotein in vivo and that its activity can be correlated with its phosphorylation level. After purification of 32P-labeled dCK, digestion by trypsin, and analysis of the radioactive peptides by tandem mass spectrometry, the following four in vivo phosphorylation sites were identified: Thr-3, Ser-11, Ser-15, and Ser-74, the latter being the major phosphorylation site. Site-directed mutagenesis and use of an anti-phospho-Ser-74 antibody demonstrated that Ser-74 phosphorylation was crucial for dCK activity in HEK 293T cells, whereas phosphorylation of other identified sites did not seem essential. Phosphorylation of Ser-74 was also detected on endogenous dCK in leukemic cells, in which the Ser-74 phosphorylation state was increased by agents that enhanced dCK activity. Our study provided direct evidence that dCK activity can be controlled by phosphorylation in intact cells and highlights the importance of Ser-74 for dCK activity.
Xin, Yaping; Zhang, Dongming; Fu, Yanqin; Wang, Chongxian; Li, Qingju; Tian, Chenguang; Zhang, Suhe; Lyu, Xiaodong
2017-08-30
C1qTNF-related protein 1 (CTRP1) is independently associated with type 2 diabetes. However, the relationship between CTRP1 and insulin resistance is still not established. This study aimed to explore the role of CTRP1 under the situation of insulin resistance in adipose tissue. Plasma CTRP1 level was investigated in type 2 diabetic subjects (n = 35) and non-diabetic subjects (n = 35). The relationship between CTRP1 and phosphorylation of multi insulin receptor substrate 1 (IRS-1) serine (Ser) sites was further explored. Our data showed that Plasma CTRP1 was higher and negative correlation with insulin resistance in diabetic subjects (r = -0.283, p = 0.018). Glucose utilisation test revealed that the glucose utilisation rate of mature adipocytes was improved by CTRP1 in the presence of insulin. CTRP1 was not only related to IRS-1 protein, but also negatively correlated with IRS-1 Ser1101 phosphorylation (r = -0.398, p = 0.031). Furthermore, Phosphorylation levels of IRS-1 Ser1101 were significantly lower after incubation with 40 ng/mL CTRP1 in mature adipocytes than those with no intervention (p < 0.05). There was no significant correlation between CTRP1 and other IRS-1 serine sites (Ser302, Ser307, Ser612, Ser636/639, and Ser789). Collectively, our results suggested that CTRP1 might improve insulin resistance by reducing the phosphorylation of IRS-1 Ser1101, induced in the situation of insulin resistance as a feedback adipokine.
Li, Bing; Yu, Xiaohong; Gui, Suxin; Xie, Yi; Zhao, Xiaoyang; Hong, Jie; Sun, Qingqing; Sang, Xuezi; Sheng, Lei; Cheng, Zhe; Cheng, Jie; Hu, Rengping; Wang, Ling; Shen, Weide; Hong, Fashui
2014-06-01
Phoxim is a useful organophosphate (OP) pesticide used in agriculture in China, however, exposure to this pesticide can result in a significant reduction in cocooning in Bombyx mori (B. mori). Titanium dioxide nanoparticles (TiO2 NPs) have been shown to decrease phoxim-induced toxicity in B. mori; however, very little is known about the molecular mechanisms of silk gland damage due to OP exposure and repair of gland damage by TiO2 NP pretreatment. In the present study, exposure to phoxim resulted in a significant reduction in cocooning rate in addition to silk gland damage, whereas TiO2 NP attenuated phoxim-induced gland damage, increased the antioxidant capacity of the gland, and increased cocooning rate in B. mori. Furthermore, digital gene expression data suggested that phoxim exposure led to significant alterations in the expression of 833 genes. In particular, phoxim exposure caused significant down-regulation of Fib-L, Ser2, Ser3, and P25 genes involved in silk protein synthesis, and up-regulation of SFGH, UCH3, and Salhh genes involved in silk protein hydrolysis. A combination of both phoxim and TiO2 NP treatment resulted in marked changes in the expression of 754 genes, while treatment with TiO2 NPs led to significant alterations in the expression of 308 genes. Importantly, pretreatment with TiO2 NPs increased Fib-L, Ser2, Ser3, and P25 expression, and decreased SFGH, UCH3, and Salhh expression in silk protein in the silk gland under phoxim stress. Therefore, Fib-L, Ser2, Ser3, P25, SFGH, UCH3, and Salhh may be potential biomarkers of silk gland toxicity in B. mori caused by phoxim exposure. Copyright © 2013 Elsevier Ltd. All rights reserved.
Synthesis and activity of histidine-containing catalytic peptide dendrimers.
Delort, Estelle; Nguyen-Trung, Nhat-Quang; Darbre, Tamis; Reymond, Jean-Louis
2006-06-09
Peptide dendrimers built by iteration of the diamino acid dendron Dap-His-Ser (His = histidine, Ser = Serine, Dap = diamino propionic acid) display a strong positive dendritic effect for the catalytic hydrolysis of 8-acyloxypyrene 1,3,6-trisulfonates, which proceeds with enzyme-like kinetics in aqueous medium (Delort, E.; Darbre, T.; Reymond, J.-L. J. Am. Chem. Soc. 2004, 126, 15642-3). Thirty-two mutants of the original third generation dendrimer A3 ((Ac-His-Ser)8(Dap-His-Ser)4(Dap-His-Ser)2Dap-His-Ser-NH2) were prepared by manual synthesis or by automated synthesis with use of a Chemspeed PSW1100 peptide synthesizer. Dendrimer catalysis was specific for 8-acyloxypyrene 1,3,6-trisulfonates, and there was no activity with other types of esters. While dendrimers with hydrophobic residues at the core and histidine residues at the surface only showed weak activity, exchanging serine residues in dendrimer A3 against alanine (A3A), beta-alanine (A3B), or threonine (A3C) improved catalytic efficiency. Substrate binding was correlated with the total number of histidines per dendrimer, with an average of three histidines per substrate binding site. The catalytic rate constant kcat depended on the placement of histidines within the dendrimers and the nature of the other amino acid residues. The fastest catalyst was the threonine mutant A3C ((Ac-His-Thr)8(Dap-His-Thr)4(Dap-His-Thr)2Dap-His-Thr-NH2), with kcat = 1.3 min(-1), kcat/k(uncat) = 90'000, KM = 160 microM for 8-bytyryloxypyrene 1,3,6-trisulfonate, corresponding to a rate acceleration of 18'000 per catalytic site and a 5-fold improvement over the original sequence A3.
McSorley, Theresa; Ort, Stephan; Hazra, Saugata; Lavie, Arnon; Konrad, Manfred
2008-03-05
Intracellular phosphorylation of dCK on Ser-74 results in increased nucleoside kinase activity. We mimicked this phosphorylation by a Ser-74-Glu mutation in bacterially produced dCK and investigated kinetic parameters using various nucleoside substrates. The S74E mutation increases the k(cat) values 11-fold for dC, and 3-fold for the anti-cancer analogues dFdC and AraC. In contrast, the rate is decreased for the purine substrates. In HEK293 cells, we found that by comparing transiently transfected dCK(S74E)-GFP and wild-type dCK-GFP, mimicking the phosphorylation of Ser-74 has no effect on cellular localisation. We note that phosphorylation may represent a mechanism to enhance the catalytic activity of the relatively slow dCK enzyme.
Man, T K; Pease, A J; Winkler, M E
1997-06-01
The arrangement of the Escherichia coli serC (pdxF) and aroA genes into a cotranscribed multifunctional operon allows coregulation of two enzymes required for the biosynthesis of L-serine, pyridoxal 5'-phosphate, chorismate, and the aromatic amino acids and vitamins. RNase T2 protection assays revealed two major transcripts that were initiated from a promoter upstream from serC (pdxF). Between 80 to 90% of serC (pdxF) transcripts were present in single-gene mRNA molecules that likely arose by Rho-independent termination between serC (pdxF) and aroA. serC (pdxF)-aroA cotranscripts terminated at another Rho-independent terminator near the end of aroA. We studied operon regulation by determining differential rates of beta-galactosidase synthesis in a merodiploid strain carrying a single-copy lambda[phi(serC [pdxF]'-lacZYA)] operon fusion. serC (pdxF) transcription was greatest in bacteria growing in minimal salts-glucose medium (MMGlu) and was reduced in minimal salts-glycerol medium, enriched MMGlu, and LB medium. serC (pdxF) transcription was increased in cya or crp mutants compared to their cya+ crp+ parent in MMGlu or LB medium. In contrast, serC (pdxF) transcription decreased in an lrp mutant compared to its lrp+ parent in MMGlu. Conclusions obtained by using the operon fusion were corroborated by quantitative Western immunoblotting of SerC (PdxF), which was present at around 1,800 dimers per cell in bacteria growing in MMGlu. RNase T2 protection assays of serC (pdxF)-terminated and serC (pdxF)-aroA cotranscript amounts supported the conclusion that the operon was regulated at the transcription level under the conditions tested. Results with a series of deletions upstream of the P(serC (pdxF)) promoter revealed that activation by Lrp was likely direct, whereas repression by the cyclic AMP (cAMP) receptor protein-cAMP complex (CRP-cAMP) was likely indirect, possibly via a repressor whose amount or activity was stimulated by CRP-cAMP.
Subaihi, Abdu; Muhamadali, Howbeer; Mutter, Shaun T; Blanch, Ewan; Ellis, David I; Goodacre, Royston
2017-03-27
In this study surface enhanced Raman scattering (SERS) combined with the isotopic labelling (IL) principle has been used for the quantification of codeine spiked into both water and human plasma. Multivariate statistical approaches were employed for the analysis of these SERS spectral data, particularly partial least squares regression (PLSR) which was used to generate models using the full SERS spectral data for quantification of codeine with, and without, an internal isotopic labelled standard. The PLSR models provided accurate codeine quantification in water and human plasma with high prediction accuracy (Q 2 ). In addition, the employment of codeine-d 6 as the internal standard further improved the accuracy of the model, by increasing the Q 2 from 0.89 to 0.94 and decreasing the low root-mean-square error of predictions (RMSEP) from 11.36 to 8.44. Using the peak area at 1281 cm -1 assigned to C-N stretching, C-H wagging and ring breathing, the limit of detection was calculated in both water and human plasma to be 0.7 μM (209.55 ng mL -1 ) and 1.39 μM (416.12 ng mL -1 ), respectively. Due to a lack of definitive codeine vibrational assignments, density functional theory (DFT) calculations have also been used to assign the spectral bands with their corresponding vibrational modes, which were in excellent agreement with our experimental Raman and SERS findings. Thus, we have successfully demonstrated the application of SERS with isotope labelling for the absolute quantification of codeine in human plasma for the first time with a high degree of accuracy and reproducibility. The use of the IL principle which employs an isotopolog (that is to say, a molecule which is only different by the substitution of atoms by isotopes) improves quantification and reproducibility because the competition of the codeine and codeine-d 6 for the metal surface used for SERS is equal and this will offset any difference in the number of particles under analysis or any fluctuations in laser fluence. It is our belief that this may open up new exciting opportunities for testing SERS in real-world samples and applications which would be an area of potential future studies.
NASA Astrophysics Data System (ADS)
Volpon, Laurent; Tsan, Pascale; Majer, Zsuzsa; Vass, Elemer; Hollósi, Miklós; Noguéra, Valérie; Lancelin, Jean-Marc; Besson, Françoise
2007-08-01
Iturins are a group of antifungal produced by Bacillus subtilis. All are cyclic lipopeptides with seven α-amino acids of configuration LDDLLDL and one β-amino fatty acid. The bacillomycin L is a member of this family and its NMR structure was previously resolved using the sequence Asp-Tyr-Asn-Ser-Gln-Ser-Thr. In this work, we carefully examined the NMR spectra of this compound and detected an error in the sequence. In fact, Asp1 and Gln5 need to be changed into Asn1 and Glu5, which therefore makes it identical to bacillomycin Lc. As a consequence, it now appears that all iturinic peptides with antibiotic activity share the common β-amino fatty acid 8- L-Asn1- D-Tyr2- D-Asn3 sequence. To better understand the conformational influence of the acidic residue L-Asp1, present, for example in the inactive iturin C, the NMR structure of the synthetic analogue SCP [cyclo ( L-Asp1- D-Tyr2- D-Asn3- L-Ser4- L-Gln5- D-Ser6- L-Thr7-β-Ala8)] was determined and compared with bacillomycin Lc recalculated with the corrected sequence. In both cases, the conformers obtained were separated into two families of similar energy which essentially differ in the number and type of turns. A detailed analysis of both cyclopeptide structures is presented here. In addition, CD and FTIR spectra were performed and confirmed the conformational differences observed by NMR between both cyclopeptides.
Abe, Hiroyuki; Mori, Naoko; Tsuchiya, Keiko; Schacht, David V; Pineda, Federico D; Jiang, Yulei; Karczmar, Gregory S
2016-11-01
The purposes of this study were to evaluate diagnostic parameters measured with ultrafast MRI acquisition and with standard acquisition and to compare diagnostic utility for differentiating benign from malignant lesions. Ultrafast acquisition is a high-temporal-resolution (7 seconds) imaging technique for obtaining 3D whole-breast images. The dynamic contrast-enhanced 3-T MRI protocol consists of an unenhanced standard and an ultrafast acquisition that includes eight contrast-enhanced ultrafast images and four standard images. Retrospective assessment was performed for 60 patients with 33 malignant and 29 benign lesions. A computer-aided detection system was used to obtain initial enhancement rate and signal enhancement ratio (SER) by means of identification of a voxel showing the highest signal intensity in the first phase of standard imaging. From the same voxel, the enhancement rate at each time point of the ultrafast acquisition and the AUC of the kinetic curve from zero to each time point of ultrafast imaging were obtained. There was a statistically significant difference between benign and malignant lesions in enhancement rate and kinetic AUC for ultrafast imaging and also in initial enhancement rate and SER for standard imaging. ROC analysis showed no significant differences between enhancement rate in ultrafast imaging and SER or initial enhancement rate in standard imaging. Ultrafast imaging is useful for discriminating benign from malignant lesions. The differential utility of ultrafast imaging is comparable to that of standard kinetic assessment in a shorter study time.
McSorley, Theresa; Ort, Stephan; Hazra, Saugata; Lavie, Arnon; Konrad, Manfred
2009-01-01
Intracellular phosphorylation of dCK on Ser-74 results in increased nucleoside kinase activity. We mimicked this phosphorylation by a Ser-74-Glu mutation in bacterially produced dCK and investigated kinetic parameters using various nucleoside substrates. The S74E mutation increases the kcat values 11-fold for dC, and 3-fold for the anti-cancer analogues dFdC and AraC. In contrast, the rate is decreased for the purine substrates. In HEK293 cells, we found that by comparing transiently transfected dCK(S74E)-GFP and wild-type dCK-GFP, mimicking the phosphorylation of Ser-74 has no effect on cellular localisation. We note that phosphorylation may represent a mechanism to enhance the catalytic activity of the relatively slow dCK enzyme. PMID:18258203
1984-10-26
test for independence; ons i ser, -, of the poduct life estimator; dependent risks; 119 ASRACT Coniinue on ’wme-se f nereiary-~and iaen r~f> by Worst...the failure times associated with different failure - modes when we really should use a bivariate (or multivariate) distribution, then what is the...dependencies may be present, then what is the magnitude of the estimation error? S The third specific aim will attempt to obtain bounds on the
Myojo, Masahiro; Sawaki, Daigo; Iwata, Hiroshi; Kiyosue, Arihiro; Higashikuni, Yasutomi; Tanaka, Tomofumi; Fujita, Daishi; Ando, Jiro; Fujita, Hideo; Hirata, Yasunobu; Komuro, Issei
2015-01-01
Objectives This study aimed to assess the relation between stent edge restenosis (SER) and the distance from the stent edge to the residual plaque using quantitative intravascular ultrasound. Background Although percutaneous coronary intervention with drug-eluting stents has improved SER rates, determining an appropriate stent edge landing zone can be challenging in cases of diffuse plaque lesions. It is known that edge vascular response can occur within 2 mm from the edge of a bare metal stent, but the distance to the adjacent plaque has not been evaluated for drug-eluting stents. Methods A total of 97 proximal residual plaque lesions (plaque burden [PB] >40%) treated with everolimus-eluting stents were retrospectively evaluated to determine the distance from the stent edge to the residual plaque. Results The SER group had significantly higher PB (59.1 ± 6.1% vs. 51.9 ± 9.1% for non-SER; P = 0.04). Higher PB was associated with SER, with the cutoff value of 54.74% determined using receiver operating characteristic (ROC) curve analysis. At this cutoff value of PB, the distance from the stent edge to the lesion was significantly associated with SER (odds ratio = 2.05, P = 0.035). The corresponding area under the ROC curve was 0.725, and the cutoff distance value for predicting SER was 1.0 mm. Conclusion An interval less than 1 mm from the proximal stent edge to the nearest point with the determined PB cutoff value of 54.74% was significantly associated with SER in patients with residual plaque lesions. PMID:25775115
Takahashi, Masao; Miyazaki, Susumu; Myojo, Masahiro; Sawaki, Daigo; Iwata, Hiroshi; Kiyosue, Arihiro; Higashikuni, Yasutomi; Tanaka, Tomofumi; Fujita, Daishi; Ando, Jiro; Fujita, Hideo; Hirata, Yasunobu; Komuro, Issei
2015-01-01
This study aimed to assess the relation between stent edge restenosis (SER) and the distance from the stent edge to the residual plaque using quantitative intravascular ultrasound. Although percutaneous coronary intervention with drug-eluting stents has improved SER rates, determining an appropriate stent edge landing zone can be challenging in cases of diffuse plaque lesions. It is known that edge vascular response can occur within 2 mm from the edge of a bare metal stent, but the distance to the adjacent plaque has not been evaluated for drug-eluting stents. A total of 97 proximal residual plaque lesions (plaque burden [PB] >40%) treated with everolimus-eluting stents were retrospectively evaluated to determine the distance from the stent edge to the residual plaque. The SER group had significantly higher PB (59.1 ± 6.1% vs. 51.9 ± 9.1% for non-SER; P = 0.04). Higher PB was associated with SER, with the cutoff value of 54.74% determined using receiver operating characteristic (ROC) curve analysis. At this cutoff value of PB, the distance from the stent edge to the lesion was significantly associated with SER (odds ratio = 2.05, P = 0.035). The corresponding area under the ROC curve was 0.725, and the cutoff distance value for predicting SER was 1.0 mm. An interval less than 1 mm from the proximal stent edge to the nearest point with the determined PB cutoff value of 54.74% was significantly associated with SER in patients with residual plaque lesions.
Hinojosa, José A.; Rincón-Pérez, Irene; Romero-Ferreiro, Mª Verónica; Martínez-García, Natalia; Villalba-García, Cristina; Montoro, Pedro R.; Pozo, Miguel A.
2016-01-01
The current study presents ratings by 540 Spanish native speakers for dominance, familiarity, subjective age of acquisition (AoA), and sensory experience (SER) for the 875 Spanish words included in the Madrid Affective Database for Spanish (MADS). The norms can be downloaded as supplementary materials for this manuscript from https://figshare.com/s/8e7b445b729527262c88 These ratings may be of potential relevance to researches who are interested in characterizing the interplay between language and emotion. Additionally, with the aim of investigating how the affective features interact with the lexicosemantic properties of words, we performed correlational analyses between norms for familiarity, subjective AoA and SER, and scores for those affective variables which are currently included in the MADs. A distinct pattern of significant correlations with affective features was found for different lexicosemantic variables. These results show that familiarity, subjective AoA and SERs may have independent effects on the processing of emotional words. They also suggest that these psycholinguistic variables should be fully considered when formulating theoretical approaches to the processing of affective language. PMID:27227521
Naqvi, Tatheer; Warden, Andrew C.; French, Nigel; Sugrue, Elena; Carr, Paul D.; Jackson, Colin J.; Scott, Colin
2014-01-01
Phosphotriesterases (PTEs) have been isolated from a range of bacterial species, including Agrobcaterium radiobacter (PTEAr), and are efficient enzymes with broad substrate ranges. The turnover rate of PTEAr for the common organophosphorous insecticide malathion is lower than expected based on its physical properties; principally the pka of its leaving group. In this study, we rationalise the turnover rate of PTEAr for malathion using computational docking of the substrate into a high resolution crystal structure of the enzyme, suggesting that malathion is too large for the PTEAr binding pocket. Protein engineering through combinatorial active site saturation testing (CASTing) was then used to increase the rate of malathion turnover. Variants from a CASTing library in which Ser308 and Tyr309 were mutated yielded variants with increased activity towards malathion. The most active PTEAr variant carried Ser308Leu and Tyr309Ala substitutions, which resulted in a ca. 5000-fold increase in k cat/K M for malathion. X-ray crystal structures for the PTEAr Ser308Leu\\Tyr309Ala variant demonstrate that the access to the binding pocket was enhanced by the replacement of the bulky Tyr309 residue with the smaller alanine residue. PMID:24721933
Lazarov, Borislav; Swinnen, Rudi; Poelmans, David; Spruyt, Maarten; Goelen, Eddy; Covaci, Adrian; Stranger, Marianne
2016-09-01
The influence of the presence of the so-called seed particles on the emission rate of Tris (1-chloroisopropyl) phosphate (TCIPP) from polyisocyanurate (PIR) insulation boards was investigated in this study. Two Field and Laboratory Emission Test cells (FLEC) were placed on the surface of the same PIR board and respectively supplied with clean air (reference FLEC) and air containing laboratory-generated soot particles (test FLEC). The behavior of the area-specific emission rates (SER A ) over a time period of 10 days was studied by measuring the total (gas + particles) concentrations of TCIPP at the exhaust of each FLEC. The estimated SER A of TCIPP from the PIR board at the quasi-static equilibrium were found to be 0.82 μg m(-2) h(-1) in the absence of seed particles, while the addition of soot particles led to SER A of 2.16 μg m(-2) h(-1). This indicates an increase of the SER A of TCIPP from the PIR board with a factor of 3 in the presence of soot particles. The TCIPP partition coefficient to soot particles at the quasi-static equilibrium was 0.022 ± 0.012 m(3) μg(-1). In the next step, the influence of real-life particles on TCIPP emission rates was investigated by supplying the test FLEC with air from a professional kitchen where mainly frying and baking activities took place. Similar to the reference FLEC outcomes, SER A was also found to increase in this real-life experiment over a time period of 20 days by a factor 3 in the presence of particles generated during cooking activities. The median value of estimated particle-gas coefficient for this test was 0.062 ± 0.037 m(3) μg(-1).
NASA Astrophysics Data System (ADS)
Zhang, Shu; Tian, Xueli; Yin, Jun; Liu, Yu; Dong, Zhanmin; Sun, Jia-Lin; Ma, Wanyun
2016-04-01
Silver nanostructured films suitable for use as surface-enhanced Raman scattering (SERS) substrates are prepared in just 2 hours by the solid-state ionics method. By changing the intensity of the external direct current, we can readily control the surface morphology and growth rate of the silver nanostructured films. A detailed investigation of the surface enhancement of the silver nanostructured films using Rhodamine 6G (R6G) as a molecular probe revealed that the enhancement factor of the films was up to 1011. We used the silver nanostructured films as substrates in SERS detection of human red blood cells (RBCs). The SERS spectra of RBCs on the silver nanostructured film could be clearly detected at a laser power of just 0.05 mW. Comparison of the SERS spectra of RBCs obtained from younger and older donors showed that the SERS spectra depended on donor age. A greater proportion of the haemoglobin in the RBCs of older donors was in the deoxygenated state than that of the younger donors. This implies that haemoglobin of older people has lower oxygen-carrying capacity than that of younger people. Overall, the fabricated silver substrates show promise in biomedical SERS spectral detection.
An index of the literature for bimolecular gas phase cation-molecule reaction kinetics
NASA Technical Reports Server (NTRS)
Anicich, V. G.
2003-01-01
This is an index to the literature for gas phase bimolecular positive ionmolecule reactions. Over 2300 references are cited. Reaction rate coefficients and product distributions of the reactions are abstracted out of the original citations where available. This index is intended to cover the literature from 1936 to 2003. This is a continuation of several surveys: the original (Huntress Astrophys. J. Suppl. Ser., 33, 495 (1977)), an expansion (Anicich and Huntress, Astrophys. J. Suppl. Ser. 62, 553 (1986)), a supplement (Anicich, Astrophys. J. Suppl. Ser. 84, 215 (1993)), and an evaluation (Anicich, V. G. J. Phys. Chem. Ref. Data 22,1469 (1993b). The Table of reactions is listed by reactant ion.
[Effect of ginsenoside Rg3 on Pim-3 and Bad proteins in human pancreatic cancer cell line PANC-1].
Jian, Jie; Hu, Zhi-Fang; Huang, Yuan
2009-05-01
Ginsenoside Rg3 is a traditional Chinese medicine monomer which possesses anticancer effects. This study was to investigate the effects of ginsenoside Rg3 on Pim-3 and phosphorylated Bad (pBad) proteins, pBad (Ser112) and pBad (Ser136) in human pancreatic cancer cell line PANC-1. PANC-1 cells were exposed to 10, 20, 40 and 80 micromol/L ginsenoside Rg3 for 24 h. A short hairpin RNA (shRNA) of Pim-3 was cloned and inserted into a eukaryotic expression vector pSilencer 3.1-H1 Neo to construct pSilencer 3.1-H1 Neo-Pim-3. pSilencer 3.1-H1 Neo-Pim-3 was then transfected into PANC-1 cells. Cell proliferation was measured by MTT assay; cell apoptosis was observed under an invert microscope and measured by flow cytometry with Annexin V/PI staining; protein expressions of Pim-3, Bad, pBad (Ser112) and pBad (Ser136) were measured by Western blot. The inhibitory rates of 10, 20, 40 and 80 micromol/L ginsenoside Rg3 on PANC-1 cells were 20.2%, 33.4%, 52.8% and 65.3%, respectively. Typical morphological changes in apoptosis were induced by ginsenoside Rg3. The apoptotic rate of PANC-1 cells was significantly higher in the ginsenoside Rg3 treatment group (80 micromol/L) than in the control group (12.2% vs. 3.3%, P<0.05). Ginsenoside Rg3 had no influence on the total Bad protein expression, but decreased both Pim-3 and pBad (Ser112) expressions in a dose-dependent manner. pBad (Ser136) was not expressed in PANC-1 cells. Compared with the control group, the percentages of early and total apoptotic cells were significantly increased in PANC-1 cells transfected with pim-3-shRNA [(11.5+/-3.7)% vs. (5.8+/-2.2)%,P<0.01;(20.8+/-2.6)% vs.(13.0+/-4.1)%,P<0.05], while the expressions of pim-3 and pBad (Ser112) were both decreased. The anti-tumor effect of ginsenoside Rg3 may be associated with the decrease of Pim-3 and pBad (Ser112).
Azizoglu, Serap; Junghans, Barbara M; Barutchu, Ayla; Crewther, Sheila G
2011-01-01
Environmental factors associated with schooling systems in various countries have been implicated in the rising prevalence of myopia, making the comparison of prevalence of refractive errors in migrant populations of interest. This study aims to determine the prevalence of refractive errors in children of Middle Eastern descent, raised and living in urban Australia but actively maintaining strong ties to their ethnic culture, and to compare them with those in the Middle East where myopia prevalence is generally low. A total of 354 out of a possible 384 late primary/early secondary schoolchildren attending a private school attracting children of Middle Eastern background in Melbourne were assessed for refractive error and visual acuity. A Shin Nippon open-field NVision-K5001 autorefractor was used to carry out non-cycloplegic autorefraction while viewing a distant target. For statistical analyses students were divided into three age groups: 10-11 years (n = 93); 12-13 years (n = 158); and 14-15 years (n = 102). All children were bilingual and classified as of Middle Eastern (96.3 per cent) or Egyptian (3.7 per cent) origin. Ages ranged from 10 to 15 years, with a mean of 13.17 ± 0.8 (SEM) years. Mean spherical equivalent refraction (SER) for the right eye was +0.09 ± 0.07 D (SEM) with a range from -7.77 D to +5.85 D. The prevalence of myopia, defined as a spherical equivalent refraction 0.50 D or more of myopia, was 14.7 per cent. The prevalence of hyperopia, defined as a spherical equivalent refraction of +0.75 D or greater, was 16.4 per cent, while hyperopia of +1.50 D or greater was 5.4 per cent. A significant difference in SER was seen as a function of age; however, no significant gender difference was seen. This is the first study to report the prevalence of refractive errors for second-generation Australian schoolchildren coming from a predominantly Lebanese Middle Eastern Arabic background, who endeavour to maintain their ethnic ties. The relatively low prevalence of myopia is similar to that found for other metropolitan Australian school children but higher than that reported in the Middle East. These results suggest that lifestyle and educational practices may be a significant influence in the progression of myopic refractive errors. © 2010 The Authors. Clinical and Experimental Optometry © 2010 Optometrists Association Australia.
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operationally superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
Kang, Jeehoon; Cho, Young-Seok; Kim, Seong-Wook; Park, Jin Joo; Yoon, Yeonyee E; Oh, Il-Young; Yoon, Chang-Hwan; Suh, Jung-Won; Youn, Tae-Jin; Chae, In-Ho; Choi, Dong-Ju
2015-01-01
Despite the benefits of successful percutaneous coronary interventions (PCIs) for chronic total occlusion (CTO) lesions, PCIs of CTO lesions still carry a high rate of adverse events, including in-stent restenosis (ISR). Because previous reports have not specifically investigated the intravascular ultrasound (IVUS) predictors of ISR in CTO lesions, we focused on these predictors. We included 126 patients who underwent successful PCIs, using drug-eluting stents, and post-PCI IVUS of CTO lesions. Patient and lesion characteristics were analyzed to elucidate the ISR predictors. In each lesion, an average of 1.7 ± 0.7 (mean length, 46.4 ± 20.3 mm) stents were used. At 9 months follow-up, 14 (11%) patients demonstrated ISR, and 8 (6.3%) underwent target lesion revascularization. Multivariate logistic regression analysis showed that the independent predictors of ISR were the post-PCI minimal luminal diameter (MLD) and the stent expansion ratio (SER; minimal stent cross-sectional area (CSA) over the nominal CSA of the implanted stent), measured using quantitative coronary angiography (QCA) and IVUS, respectively. A receiver operating characteristic analysis indicated that the best post-PCI MLD and SER cut-off values for predicting ISR were 2.4 mm (area under the curve [AUC], 0.762; 95% confidence interval (CI), 0.639-0.885) and 70% (AUC, 0.714; 95% CI, 0.577-0.852), respectively. Lesions with post-PCI MLD and SER values less than these threshold values were at a higher risk of ISR, with an odds ratio of 23.3 (95% CI, 2.74-198.08), compared with lesions having larger MLD and SER values. Thus, the potential predictors of ISR, after PCI of CTO lesions, are the post-PCI MLD and SER values. The ISR rate was highest in lesions with a post-PCI MLD ≤2.4 mm and an SER ≤70%.
Enabling Advanced Automation in Spacecraft Operations with the Spacecraft Emergency Response System
NASA Technical Reports Server (NTRS)
Breed, Julie; Fox, Jeffrey A.; Powers, Edward I. (Technical Monitor)
2001-01-01
True autonomy is the Holy Grail of spacecraft mission operations. The goal of launching a satellite and letting it manage itself throughout its useful life is a worthy one. With true autonomy, the cost of mission operations would be reduced to a negligible amount. Under full autonomy, any problems (no matter the severity or type) that may arise with the spacecraft would be handled without any human intervention via some combination of smart sensors, on-board intelligence, and/or smart automated ground system. Until the day that complete autonomy is practical and affordable to deploy, incremental steps of deploying ever-increasing levels of automation (computerization of once manual tasks) on the ground and on the spacecraft are gradually decreasing the cost of mission operations. For example, NASA's Goddard Space Flight Center (NASA-GSFC) has been flying spacecraft with low cost operations for several years. NASA-GSFC's SMEX (Small Explorer) and MIDEX (Middle Explorer) missions have effectively deployed significant amounts of automation to enable the missions to fly predominately in 'light-out' mode. Under light-out operations the ground system is run without human intervention. Various tools perform many of the tasks previously performed by the human operators. One of the major issues in reducing human staff in favor of automation is the perceived increased in risk of losing data, or even losing a spacecraft, because of anomalous conditions that may occur when there is no one in the control center. When things go wrong, missions deploying advanced automation need to be sure that anomalous conditions are detected and that key personal are notified in a timely manner so that on-call team members can react to those conditions. To ensure the health and safety of its lights-out missions, NASA-GSFC's Advanced Automation and Autonomy branch (Code 588) developed the Spacecraft Emergency Response System (SERS). The SERS is a Web-based collaborative environment that enables secure distributed fault and resource management. The SERS incorporates the use of intelligent agents, threaded discussions, workflow, database connectivity, and links to a variety of communications devices (e.g., two-way paging, PDA's, and Internet phones) via commercial gateways. When the SERS detects a problem, it notifies on-call team members, who then can remotely take any necessary actions to resolve the anomalies.The SERS goes well beyond a simple '911' system that sends out an error code to everyone with a pager. Instead, SERS' software agents send detailed data (i.e., notifications) to the most appropriate team members based on the type and severity of the anomaly and the skills of the on-call team members. The SERS also allows the team members to respond to the notifications from their wireless devices. This unique capability ensures rapid response since the team members no longer have to go to a PC or the control center for every anomalous event. Most importantly, the SERS enables safe experimentation with various techniques for increasing levels of automation, leading to robust autonomy. For the MIDEX missions at NASA GSFC, the SERS is used to provide 'human-in-the-loop' automation. During lights-out operations, as greater control is given to the MIDEX automated systems, the SERS can be configured to page remote personnel and keep them informed regarding actions taking place in the control center. Remote off-duty operators can even be given the option of enabling or inhibiting a specific automated response in near real time via their two-way pagers. The SERS facilitates insertion of new technology to increase automation, while maintaining the safety and security of mission resources. This paper will focus on SERS' overall functionality and how SERS has been designed to handle the monitoring and emergency response for missions with varying levels of automation. The paper will also convey some of the key lessons learned from SERS' deployment across of variety of missions, highlighting this incremental approach to achieving 'robust autonomy'.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon <1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging form high rates to very low rates and Reed-Solomon codes as inner codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
Bell, Steven E J; Sirimuthu, Narayana M S
2004-11-01
Rapid, quantitative SERS analysis of nicotine at ppm/ppb levels has been carried out using stable and inexpensive polymer-encapsulated Ag nanoparticles (gel-colls). The strongest nicotine band (1030 cm(-1)) was measured against d(5)-pyridine internal standard (974 cm(-1)) which was introduced during preparation of the stock gel-colls. Calibration plots of I(nic)/I(pyr) against the concentration of nicotine were non-linear but plotting I(nic)/I(pyr) against [nicotine](x)(x = 0.6-0.75, depending on the exact experimental conditions) gave linear calibrations over the range (0.1-10 ppm) with R(2) typically ca. 0.998. The RMS prediction error was found to be 0.10 ppm when the gel-colls were used for quantitative determination of unknown nicotine samples in 1-5 ppm level. The main advantages of the method are that the gel-colls constitute a highly stable and reproducible SERS medium that allows high throughput (50 sample h(-1)) measurements.
Using oral fluids samples for indirect influenza A virus surveillance in farmed UK pigs.
Gerber, Priscilla F; Dawson, Lorna; Strugnell, Ben; Burgess, Robert; Brown, Helen; Opriessnig, Tanja
2017-02-01
Influenza A virus (IAV) is economically important in pig production and has broad public health implications. In Europe, active IAV surveillance includes demonstration of antigen in nasal swabs and/or demonstration of antibodies in serum (SER) samples; however, collecting appropriate numbers of individual pig samples can be costly and labour-intensive. The objective of this study was to compare the probability of detecting IAV antibody positive populations using SER versus oral fluid (OF) samples. Paired pen samples, one OF and 5-14 SER samples, were collected cross-sectional or longitudinally. A commercial nucleoprotein (NP)-based blocking ELISA was used to test 244 OF and 1004 SER samples from 123 pens each containing 20-540 pigs located in 27 UK herds. Overall, the IAV antibody detection rate was higher in SER samples compared to OFs under the study conditions. Pig age had a significant effect on the probability of detecting positive pens. For 3-9-week-old pigs the probability of detecting IAV antibody positive samples in a pen with 95% confidence intervals was 40% (23-60) for OF and 61% (0.37-0.80) for SER ( P = 0.04), for 10-14-week-old pigs it was 19% (8-40) for OF and 93% (0.71-0.99) for SER ( P < 0.01), and for 18-20-week-old pigs it was 67% (41-85) for OF and 81% (0.63-0.91) for SER ( P = 0.05). Collecting more than one OF sample in pens with more than 25 less than 18-week-old pigs should be further investigated in the future to elucidate the suitability of OF for IAV surveillance in herds with large pen sizes.
Structural and Functional Adaptation of Vancomycin Resistance VanT Serine Racemases
Meziane-Cherif, Djalal; Stogios, Peter J.; Evdokimova, Elena; Egorova, Olga
2015-01-01
ABSTRACT Vancomycin resistance in Gram-positive bacteria results from the replacement of the d-alanyl–d-alanine target of peptidoglycan precursors with d-alanyl–d-lactate or d-alanyl–d-serine (d-Ala-d-Ser), to which vancomycin has low binding affinity. VanT is one of the proteins required for the production of d-Ala-d-Ser-terminating precursors by converting l-Ser to d-Ser. VanT is composed of two domains, an N-terminal membrane-bound domain, likely involved in l-Ser uptake, and a C-terminal cytoplasmic catalytic domain which is related to bacterial alanine racemases. To gain insight into the molecular function of VanT, the crystal structure of the catalytic domain of VanTG from VanG-type resistant Enterococcus faecalis BM4518 was determined. The structure showed significant similarity to type III pyridoxal 5′-phosphate (PLP)-dependent alanine racemases, which are essential for peptidoglycan synthesis. Comparative structural analysis between VanTG and alanine racemases as well as site-directed mutagenesis identified three specific active site positions centered around Asn696 which are responsible for the l-amino acid specificity. This analysis also suggested that VanT racemases evolved from regular alanine racemases by acquiring additional selectivity toward serine while preserving that for alanine. The 4-fold-lower relative catalytic efficiency of VanTG against l-Ser versus l-Ala implied that this enzyme relies on its membrane-bound domain for l-Ser transport to increase the overall rate of d-Ser production. These findings illustrate how vancomycin pressure selected for molecular adaptation of a housekeeping enzyme to a bifunctional enzyme to allow for peptidoglycan remodeling, a strategy increasingly observed in antibiotic-resistant bacteria. PMID:26265719
Structural and functional adaptation of vancomycin resistance VanT serine racemases
Meziane-Cherif, Djalal; Stogios, Peter J.; Evdokimova, Elena; ...
2015-08-11
Vancomycin resistance in Gram-positive bacteria results from the replacement of the D-alanyl–D-alanine target of peptidoglycan precursors with D-alanyl–D-lactate or D-alanyl–D-serine (D-Ala-D-Ser), to which vancomycin has low binding affinity. VanT is one of the proteins required for the production of D-Ala-D-Ser-terminating precursors by converting L-Ser to D-Ser. VanT is composed of two domains, an N-terminal membrane-bound domain, likely involved in L-Ser uptake, and a C-terminal cytoplasmic catalytic domain which is related to bacterial alanine racemases. To gain insight into the molecular function of VanT, the crystal structure of the catalytic domain of VanT G from VanG-type resistant Enterococcus faecalis BM4518more » was determined. The structure showed significant similarity to type III pyridoxal 5'-phosphate (PLP)-dependent alanine racemases, which are essential for peptidoglycan synthesis. Comparative structural analysis between VanT G and alanine racemases as well as site-directed mutagenesis identified three specific active site positions centered around Asn 696 which are responsible for theL-amino acid specificity. This analysis also suggested that VanT racemases evolved from regular alanine racemases by acquiring additional selectivity toward serine while preserving that for alanine. The 4-fold-lower relative catalytic efficiency of VanT G against L-Ser versus L-Ala implied that this enzyme relies on its membrane-bound domain for L-Ser transport to increase the overall rate of D-Ser production. These findings illustrate how vancomycin pressure selected for molecular adaptation of a housekeeping enzyme to a bifunctional enzyme to allow for peptidoglycan remodeling, a strategy increasingly observed in antibiotic-resistant bacteria.« less
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
An educational and audit tool to reduce prescribing error in intensive care.
Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D
2008-10-01
To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analysis, improvement and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the preanalytic, analytic and postanalytical phases was analysed. Improvement strategies were reclaimed in the monthly intradepartmental meetings and the control of the units with high error rates was provided. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided the reduction of the error rates mainly in the pre-analytic and analytic phases.
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis, and maximum likelihood estimation to see how much of the variability in the error rates can be explained with these. We have also used goodness of fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on the what these statistics bore out as critical drivers to the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.
2016-12-01
In order to reach the accuracy of the GRACE baseline, predicted earlier from the design simulations, efforts are ongoing since a decade. GRACE error budget is highly dominated by noise from sensors, dealiasing models and modeling errors. GRACE range-rate residuals contain these errors. Thus, their analysis provides an insight to understand the individual contribution to the error budget. Hence, we analyze the range-rate residuals with focus on contribution of sensor errors due to mis-pointing and bad ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Then range-rate residuals are computed from these two datasetsrespectively and analysed. We further compare the system noise of four K-and Ka- band frequencies of the two spacecrafts, with range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals. Also, correlation between range frequency noise and range-rate residuals are seen.
Study on nasopharyngeal cancer tissue using surface-enhanced Raman spectroscopy
NASA Astrophysics Data System (ADS)
Ge, Xiaosong; Lin, Xueliang; Xu, Zhihong; Wei, Guoqiang; Huang, Wei; Lin, Duo
2016-10-01
Surface-enhanced Raman spectroscopy (SERS) can provide detailed molecular structure and composition information, and has demonstrated great potential in biomedical filed. This spectroscopy technology has become one of the most important optical techniques in the early diagnosis of cancer. Nasopharyngeal cancer (NPC) is a malignant neoplasm arising in the nasopharyngeal epithelial lining, which has relatively high incidence and death rate in Southeast Asia and southern China. This paper reviews the current progress of SERS in the field of cancer diagnostics, including gastric cancer, colorectal cancer, cervical cancer and nasopharyngeal cancer. In addition to above researches, we recently develop a novel NPC detection method based on tissue section using SERS, and obtain primary results. The proposed method has promising potential for the detection of nasopharyngeal carcinoma.
NASA Astrophysics Data System (ADS)
El-Zahry, Marwa R.; Lendl, Bernhard
2018-03-01
A simple, fast and sensitive surface enhanced Raman spectroscopy (SERS) method for quantitative determination of fluoroquinolone antibiotic Ofloxacin (OFX) is presented. Also the stability behavior of OFX was investigated by monitoring the SERS spectra of OFX after various degradation processes. Acidic, basic and oxidative force degradation processes were applied at different time intervals. The forced degradation conditions were conducted and followed using SERS method utilizing silver nanoparticles (Ag NPs) as a SERS substrate. The Ag NPs colloids were prepared by reduction of silver nitrate using polyethyelene glycol (PEG) as a reducing and stabilizing agent. Validation tests were done in accordance with International Conference on Harmonization (ICH) guidelines. The calibration curve with a correlation coefficient (R = 0.9992) was constructed as a relationship between the concentration range of OFX (100-500 ng/ml) and SERS intensity at 1394 cm- 1 band. LOD and LOQ values were calculated and found to be 23.5 ng/ml and 72.6 ng/ml, respectively. The developed method was applied successfully for quantitation of OFX in different pharmaceutical dosage forms. Kinetic parameters were calculated including rate constant of the degradation of the studied antibiotic.
Haines, Ricci J; Corbin, Karen D; Pendleton, Laura C; Eichler, Duane C
2012-07-27
Endothelial nitric-oxide synthase (eNOS) utilizes l-arginine as its principal substrate, converting it to l-citrulline and nitric oxide (NO). l-Citrulline is recycled to l-arginine by two enzymes, argininosuccinate synthase (AS) and argininosuccinate lyase, providing the substrate arginine for eNOS and NO production in endothelial cells. Together, these three enzymes, eNOS, AS, and argininosuccinate lyase, make up the citrulline-NO cycle. Although AS catalyzes the rate-limiting step in NO production, little is known about the regulation of AS in endothelial cells beyond the level of transcription. In this study, we showed that AS Ser-328 phosphorylation was coordinately regulated with eNOS Ser-1179 phosphorylation when bovine aortic endothelial cells were stimulated by either a calcium ionophore or thapsigargin to produce NO. Furthermore, using in vitro kinase assay, kinase inhibition studies, as well as protein kinase Cα (PKCα) knockdown experiments, we demonstrate that the calcium-dependent phosphorylation of AS Ser-328 is mediated by PKCα. Collectively, these findings suggest that phosphorylation of AS at Ser-328 is regulated in accordance with the calcium-dependent regulation of eNOS under conditions that promote NO production and are in keeping with the rate-limiting role of AS in the citrulline-NO cycle of vascular endothelial cells.
Distinction of gastric cancer tissue based on surface-enhanced Raman spectroscopy
NASA Astrophysics Data System (ADS)
Ma, Jun; Zhou, Hanjing; Gong, Longjing; Liu, Shu; Zhou, Zhenghua; Mao, Weizheng; Zheng, Rong-er
2012-12-01
Gastric cancer is one of the most common malignant tumors with high recurrence rate and mortality rate in China. This study aimed to evaluate the diagnostic capability of Surface-enhanced Raman spectroscopy (SERS) based on gold colloids for distinguishing gastric tissues. Gold colloids were directly mixed with the supernatant of homogenized tissues to heighten the Raman signal of various biomolecule. A total of 56 samples were collected from normal (30) and cancer (26). Raman spectra were obtained with a 785nm excitation in the range of 600-1800 cm-1. Significant spectral differences in SERS mainly belong to nucleic acid, proteins and lipids, particularly in the range of 653, 726, 828, 963, 1004, 1032, 1088, 1130, 1243, 1369, 1474, 1596, 1723 cm-1. PCA-LDA algorithms with leave-one-patient-out cross validation yielded diagnostic sensitivities of 90% (27/30), specificities of 88.5% (23/26), and accuracy of 89.3% (50/56), for classification of normal and cancer tissues. The receiver operating characteristic (ROC) surface is 0.917, illustrating the diagnostic utility of SERS together with PCA-LDA to identify gastric cancer from normal tissue. This work demonstrated the SERS techniques can be useful for gastric cancer detection, and it is also a potential technique for accurately identifying cancerous tumor, which is of considerable clinical importance to real-time diagnosis.
Shacham, Sharon; Cheifetz, Maya N; Fridkin, Mati; Pawson, Adam J; Millar, Robert P; Naor, Zvi
2005-08-12
Type I gonadotropin-releasing hormone (GnRH) receptor (GnRHR) is unique among mammalian G-protein-coupled receptors (GPCRs) in lacking a C-terminal tail, which is involved in desensitization in GPCRs. Therefore, we searched for inhibitory sites in the intracellular loops (ICLs) of the GnRHR. Synthetic peptides corresponding to the three ICLs were inserted into permeabilized alphaT3-1 gonadotrope cells, and GnRH-induced inositol phosphate (InsP) formation was determined. GnRH-induced InsP production was potentiated by ICL2 > ICL3 but not by the ICL1 peptides, suggesting they are acting as decoy peptides. We examined the effects of six peptides in which only one of the Ser or Thr residues was substituted with Ala or Glu. Only substitution of Ser153 with Ala or Glu ablated the potentiating effect upon GnRH-induced InsP elevation. ERK activation was enhanced, and the rate of GnRH-induced InsP formation was about 6.5-fold higher in the first 10 min in COS-1 cells that were transfected with mutants of the GnRHR in which the ICL2 Ser/Thr residues (Ser151, Ser153, and Thr142) or only Ser153 was mutated to Ala as compared with the wild type GnRHR. The data indicate that ICL2 harbors an inhibitory domain, such that exogenous ICL2 peptide serves as a decoy for the inhibitory site (Ser153) of the GnRHR, thus enabling further activation. GnRH does not induce receptor phosphorylation in alphaT3-1 cells. Because the phosphomimetic ICL2-S153E peptide did not mimic the stimulatory effect of the ICL2 peptide, the inhibitory effect of Ser153 operates through a phosphorylation-independent mechanism.
In situ analysis of dynamic laminar flow extraction using surface-enhanced Raman spectroscopy
NASA Astrophysics Data System (ADS)
Wang, Fei; Wang, Hua-Lin; Qiu, Yang; Chang, Yu-Long; Long, Yi-Tao
2015-12-01
In this study, we performed micro-scale dynamic laminar flow extraction and site-specific in situ chloride concentration measurements. Surface-enhanced Raman spectroscopy was utilized to investigate the diffusion process of chloride ions from an oil phase to a water phase under laminar flow. In contrast to common logic, we used SERS intensity gradients of Rhodamine 6G to quantitatively calculate the concentration of chloride ions at specific positions on a microfluidic chip. By varying the fluid flow rates, we achieved different extraction times and therefore different chloride concentrations at specific positions along the microchannel. SERS spectra from the water phase were recorded at these different positions, and the spatial distribution of the SERS signals was used to map the degree of nanoparticle aggregation. The concentration of chloride ions in the channel could therefore be obtained. We conclude that this method can be used to explore the extraction behaviour and efficiency of some ions or molecules that enhance the SERS intensity in water or oil by inducing nanoparticle aggregation.
Reverendo, Marisa; Soares, Ana R; Pereira, Patrícia M; Carreto, Laura; Ferreira, Violeta; Gatti, Evelina; Pierre, Philippe; Moura, Gabriela R; Santos, Manuel A
2014-01-01
Mutations in genes that encode tRNAs, aminoacyl-tRNA syntheases, tRNA modifying enzymes and other tRNA interacting partners are associated with neuropathies, cancer, type-II diabetes and hearing loss, but how these mutations cause disease is unclear. We have hypothesized that levels of tRNA decoding error (mistranslation) that do not fully impair embryonic development can accelerate cell degeneration through proteome instability and saturation of the proteostasis network. To test this hypothesis we have induced mistranslation in zebrafish embryos using mutant tRNAs that misincorporate Serine (Ser) at various non-cognate codon sites. Embryo viability was affected and malformations were observed, but a significant proportion of embryos survived by activating the unfolded protein response (UPR), the ubiquitin proteasome pathway (UPP) and downregulating protein biosynthesis. Accumulation of reactive oxygen species (ROS), mitochondrial and nuclear DNA damage and disruption of the mitochondrial network, were also observed, suggesting that mistranslation had a strong negative impact on protein synthesis rate, ER and mitochondrial homeostasis. We postulate that mistranslation promotes gradual cellular degeneration and disease through protein aggregation, mitochondrial dysfunction and genome instability. PMID:25483040
NASA Astrophysics Data System (ADS)
Sallum, Loriz Francisco; Soares, Frederico Luis Felipe; Ardila, Jorge Armando; Carneiro, Renato Lajarim
2014-12-01
In this work, filter paper was used as a low cost substrate for silver nanoparticles in order to perform the detection and quantification of acetylsalicylic acid by SERS in a commercial tablet. The reaction conditions were 150 mM of ammonium hydroxide, 50 mM of silver nitrate, 500 mM of glucose, 12 min of the reaction time, 45 °C temperature, pretreatment with ammonium hydroxide and quantitative filter paper (1-2 μm). The average size of silver nanoparticles deposited on the paper substrate was 180 nm. Adsorption time of acetylsalicylic acid on the surface of the silver-coated filter paper was studied and an adsorption time of 80 min was used to build the analytical curve. It was possible to obtain a calibration curve with good precision with a coefficient of determination of 0.933. The method proposed in this work was capable to quantify acetylsalicylic acid in commercial tablets, at low concentration levels, with relative error of 2.06% compared to the HPLC. The preparation of filter paper coated with silver nanoparticles using Tollen's reagent presents several advantages such as low cost of synthesis, support and reagents; minimum amount of residuals, which are easily treated, despite the SERS spectroscopy presenting fast analysis, with low sample preparation and low amount of reactants as in HPLC analysis.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
Levman, Jacob E D; Gallego-Ortiz, Cristina; Warner, Ellen; Causer, Petrina; Martel, Anne L
2016-02-01
Magnetic resonance imaging (MRI)-enabled cancer screening has been shown to be a highly sensitive method for the early detection of breast cancer. Computer-aided detection systems have the potential to improve the screening process by standardizing radiologists to a high level of diagnostic accuracy. This retrospective study was approved by the institutional review board of Sunnybrook Health Sciences Centre. This study compares the performance of a proposed method for computer-aided detection (based on the second-order spatial derivative of the relative signal intensity) with the signal enhancement ratio (SER) on MRI-based breast screening examinations. Comparison is performed using receiver operating characteristic (ROC) curve analysis as well as free-response receiver operating characteristic (FROC) curve analysis. A modified computer-aided detection system combining the proposed approach with the SER method is also presented. The proposed method provides improvements in the rates of false positive markings over the SER method in the detection of breast cancer (as assessed by FROC analysis). The modified computer-aided detection system that incorporates both the proposed method and the SER method yields ROC results equal to that produced by SER while simultaneously providing improvements over the SER method in terms of false positives per noncancerous exam. The proposed method for identifying malignancies outperforms the SER method in terms of false positives on a challenging dataset containing many small lesions and may play a useful role in breast cancer screening by MRI as part of a computer-aided detection system.
Reinke, Caroline E; Kelz, Rachel R; Pray, Lori; Williams, Noel; Bleier, Joshua; Murayama, Kenric; Morris, Jon B
2012-01-01
The Accreditation Council for Graduate Medical Education work rules have forced programs to critically appraise the overall educational value (OEV) of rotations. Successful rotations must satisfy Residency Review Committee mandates and optimize the service-to-education ratio (SER). This study was designed to examine the relationship between the OEV and SER and identify rotation characteristics (RC) associated with both. The Division of Surgery Education at the Hospital of the University of Pennsylvania administered a survey in FY2011 to all residents detailing resident perceptions regarding OEV, SER, and other RC. Responses were linked to additional rotation data. The relationship between OEV and SER was examined before and after controlling for significant RC identified in univariate analyses. Subgroup analyses by junior (CY1-2) and senior (CY3-5) resident status were performed. The survey was sent to 85 residents participating in 48 general surgery rotations, with an overall response rate of 87%. OEV was inversely proportional to SER. All RC were significant predictors of OEV in univariate models except rotation length, patient care participation and the presence of fellows. SER alone was a significant predictor of OEV (coefficient = -1.24, p < 0.001) and explained 68% of the variation in OEV. After including other RC, SER remained a significant predictor (coefficient = -1.08, p < 0.001) and the model explained 85% of the variation in OEV. In subgroup analysis, SER remained a significant predictor of OEV for junior residents (coefficient = -1.27, p = < 0.001), but not for senior residents (coefficient = -0.46, p = 0.15). The SER is inversely correlated with the OEV of general surgery rotations for the aggregate group of surgical residents, but this relationship appears to be attenuated by other factors in the senior resident group. Identification of the factors that affect junior surgical residents may provide the ability to improve the SER for junior residents and allow for significant improvements in perceived OEV for the resident body as a whole. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straight forward manner. Finally, an exact expression of error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be much easily evaluated as compared to the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
Effect of bar-code technology on the safety of medication administration.
Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K
2010-05-06
Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society
Development and implementation of a human accuracy program in patient foodservice.
Eden, S H; Wood, S M; Ptak, K M
1987-04-01
For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of product. Administrative dietitians have applied the concepts of using human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. Human error rate was used to monitor and evaluate trayline employee performance and to evaluate layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal satisfaction, for tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
Development of Standard Methods of Testing and Analyzing Fatigue Crack Growth Rate Data
1978-05-01
nitrogen cooled cryostat; high temperature tests were conducted using resistance heating tapes . An automatic controller maintained test temperatures...Cracking," Int. J. Fracture, Vol. 9, 1973, pp. 63-74. 87. P. Paris and F. Erdogan , "A Critical Analysis of Crack Propagation Laws," Trans. ASME, Ser. D: J...requirements of Sec. 7.2 and Appendix B. 200 REFERENCES 1. P. C. Paris and F. Erdogan , "A Critical Analysis of Crack Propagation Laws", Trans. ASME, Ser. D: 3
The influence of the structure and culture of medical group practices on prescription drug errors.
Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer
2005-08-01
This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships of the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and how the interaction of practice structural and cultural attributes influence error rates would add important insights into our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.
Mahesh Kumar, Koratagere Nagaraju; Ramu, Periasamy; Rajan, Subramanian; Shewade, Deepak Gopal; Balachander, Jayaraman; Adithan, Chandrasekaran
2008-11-01
Beta-blockers show interindividual and interethnic variability in their response. Such variability might be due to the polymorphic variations in the beta1 adrenergic receptor genes viz, Ser49Gly and Arg389Gly. The study evaluated the influence of Ser49Gly and Arg389Gly polymorphisms on the cardiovascular responses to metoprolol in a South Indian population. Forty-one genetically prescreened healthy male volunteers participated in the study. They were divided on the basis of genotype of each polymorphism: Ser49Ser, Ser49Gly, and Gly49Gly and Arg389Arg, Arg389Gly, and Gly389Gly. They were also grouped into combination genotypes viz, S49S R389R, S49G R389R, G49G R389R, S49S R389G, S49S G389G, and S49G R389G. They were subjected to treadmill exercise testing, and cardiovascular parameters were measured before and after metoprolol administration. Metoprolol concentration was determined by reversed phase high-performance liquid chromatography method. The diastolic blood pressure (DBP) was significantly lower in S49S/G389G group when compared to S49S/A389A group. The cardiac parameters were significantly increased in all the genotype groups during treadmill exercise test done for a period of 9 minutes. During predrug treadmill exercise at the end of third and sixth minute, Gly49Gly showed a higher increase in heart rate and volume of oxygen consumption compared to Ser49Ser. Same group showed a higher increase of volume of oxygen consumption at the end of ninth minute of exercise compared to the Ser49Ser. Systolic and diastolic blood pressures were not different between Ser49Gly polymorphisms. However, there was no statistical difference between the genotype groups of both polymorphisms at any stage of post-drug treadmill exercise. The analysis of combination of genotypes showed no significant difference during predrug and postdrug exercise testing. The increase in cardiac responses to treadmill test was influenced by Ser49Gly polymorphism. Nevertheless, the above polymorphisms did not alter the beta-blocker response during treadmill exercise in South Indian population.
Emergency department discharge prescription errors in an academic medical center
Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.
2017-01-01
This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-01-01
Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033
Foolad, Negar; Shi, Vivian Y; Prakash, Neha; Kamangar, Faranak; Sivamani, Raja K
2015-06-16
Rosacea and melasma are two common skin conditions in dermatology. Both conditions have a predilection for the centrofacial region where the sebaceous gland density is the highest. However it is not known if sebaceous function has an association with these conditions. We aimed to assess the relationship between facial glabellar wrinkle severity and facial sebum excretion rate for individuals with rosacea, melasma, both conditions, and in those with rhytides. Secondly, the purpose of this study was to utilize high resolution 3D facial modeling and measurement technology to obtain information regarding glabellar rhytid count and severity. A total of 21 subjects participated in the study. Subjects were divided into four groups based on facial features: rosacea-only, melasma-only, rosacea and melasma, rhytides-only. A high resolution facial photograph was taken followed by measurement of facial sebum excretion rate (SER). The SER was found to decline with age and with the presence of melasma. The SER negatively correlated with increasing Wrinkle Severity Rating Scale. Through the use of 3D facial modeling and skin analysis technology, we found a positive correlation between clinically based grading scores and computer generated glabellar rhytid count and severity. Continuing research with facial modeling and measurement systems will allow for development of more objective facial assessments. Future studies need to assess the role of technology in stratifying the severity and subtypes of rosacea and melasma. Furthermore, the role of sebaceous regulation may have important implications in photoaging.
Dispensing error rate after implementation of an automated pharmacy carousel system.
Oswald, Scott; Caldwell, Richard
2007-07-01
A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.
Plasmonic gold nanostar for biomedical sensing
NASA Astrophysics Data System (ADS)
Liu, Yang; Yuan, Hsiangkuo; Fales, Andrew M.; Vo-Dinh, Tuan
2014-03-01
Cancer has become one of most significant death reasons and causes approximately 7.9 million human deaths worldwide each year. The challenge to detect cancer at an early stage makes cancer-related biomarkers sensing attract more and more research interest and efforts. Surface-enhanced Raman scattering (SERS) provides a promising method for various biomarkers (DNA, RNA, protein, et al.) detection due to its high sensitivity, specificity and capability for multiple analytes detection. Raman spectroscopy is a non-destructive photon-scattering technique, which provides molecule-specific information on molecular vibrational energy levels. SERS takes advantage of plasmonic effects and can enhance Raman signal up to 1015 at "hot spots". Due to its excellent sensitivity, SERS has been capable of achieving single-molecule detection limit. Local pH environment has been identified to be a potential biomarker for cancer diagnosis since solid cancer contains highly acidic environments. A near-infrared (NIR) SERS nanoprobe based on gold nanostars for pH sensing is developed for future cancer detection. Near-infrared (NIR) light is more suitable for in vivo applications because of its low attenuation rate and tissue auto fluorescence. SERS spectrum of pH reporter under various pH environments is monitored and used for pH sensing. Furthermore, density functional theory (DFT) calculation is performed to investigate Raman spectra changes with pH at the molecular level. The study demonstrates that SERS is a sensitive tool to monitor minor molecular structural changes due to local pH environment for cancer detection.
Free-surface microfluidics for detection of airborne explosives
NASA Astrophysics Data System (ADS)
Meinhart, Carl; Piorek, Brian; Banerjee, Sanjoy; Lee, Seung Joon; Moskovits, Martin
2008-11-01
A novel microfluidic, remote-sensing, chemical detection platform has been developed for real-time sensing of airborne agents. The key enabling technology is a newly developed concept termed Free-Surface Fluidics (FSF), where one or more fluidic surfaces of a microchannel flow are confined by surface tension and exposed to the surrounding atmosphere. The result is a unique open channel flow environment that is driven by pressure through surface tension, and not subject to body forces, such as gravity. Evaporation and flow rates are controlled by microchannel geometry, surface chemistry and precisely-controlled temperature profiles. The free-surface fluidic architecture is combined with Surface-Enhanced Raman Spectroscopy (SERS) to allow for real-time profiling of atmospheric species and detection of airborne agents. The aggregation of SERS nanoparticles is controlled using microfluidics, to obtain dimer nanoparticle clusters at known streamwise positions in the microchannel. These dimers form SERS hot-spots, which amplify the Raman signal by 8 -- 10 orders of magnitude. Results indicate that explosive agents such as DNT, TNT, RDX, TATP and picric acid in the surrounding atmosphere can be readily detected by the SERS system. Due to the amplification of the SERS system, explosive molecules with concentrations of parts per trillion can be detected, even in the presence of interferent molecules having six orders of magnitude higher concentration.
Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems
NASA Astrophysics Data System (ADS)
El-Ghandour, Osama M.; Saha, Debabrata
1991-05-01
A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al., 2007 presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes.
Executive Council lists and general practitioner files
Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.
1974-01-01
An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79·2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0·8% (assignation of sex) to 68·6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
Li, Xiaozhou; Yang, Tianyue; Li, Caesar Siqi; Song, Youtao; Lou, Hong; Guan, Dagang; Jin, Lili
2018-01-01
In this paper, we discuss the use of a procedure based on polymerase chain reaction (PCR) and surface enhanced Raman spectroscopy (SERS) (PCR-SERS) to detect DNA mutations. Methods: This method was implemented by first amplifying DNA-containing target mutations, then by annealing probes, and finally by applying SERS detection. The obtained SERS spectra were from a mixture of fluorescence tags labeled to complementary sequences on the mutant DNA. Then, the SERS spectra of multiple tags were decomposed to component tag spectra by multiple linear regression (MLR). Results: The detection limit was 10-11 M with a coefficient of determination (R2) of 0.88. To demonstrate the applicability of this process on real samples, the PCR-SERS method was applied on blood plasma taken from 49 colorectal cancer patients to detect six mutations located at the BRAF, KRAS, and PIK3CA genes. The mutation rates obtained by the PCR-SERS method were in concordance with previous research. Fisher's exact test showed that only two detected mutations at BRAF (V600E) and PIK3CA (E542K) were significantly positively correlated with right-sided colon cancer. No other clinical feature such as gender, age, cancer stage, or differentiation was correlated with mutation (V600E at BRAF, G12C, G12D, G12V, G13D at KRAS, and E542K at PIK3CA). Visually, a dendrogram drawn through hierarchical clustering analysis (HCA) supported the results of Fisher's exact test. The clusters drawn by all six mutations did not conform to the distributions of cancer stages, differentiation or cancer positions. However, the cluster drawn by the two mutations of V600E and E542K showed that all samples with those mutations belonged to the right-sided colon cancer group. Conclusion: The suggested PCR-SERS method is multiplexed, flexible in probe design, easy to incorporate into existing PCR conditions, and was sensitive enough to detect mutations in blood plasma. PMID:29556349
NASA Astrophysics Data System (ADS)
Petruš, Ondrej; Oriňak, Andrej; Oriňaková, Renáta; Orságová Králová, Zuzana; Múdra, Erika; Kupková, Miriam; Kovaľ, Karol
2017-11-01
Two types of metallised nanocavities (single and hybrid) were fabricated by colloid lithography followed by electrochemical deposition of Ni and subsequently Ag layers. Introductory Ni deposition step iniciates more homogenous decoration of nanocavities with Ag nanoparticles. Silver nanocavity decoration has been so performed with lower nucleation rate and with Ag nanoparticles homogeinity increase. By this, two step Ni and Ag deposition trough polystyrene nanospheres (100, 300, 500, 700, 900 nm), the various Ag surfaces were obtained. Ni layer formation in the first step of deposition enabled more precise controlling of Ag film deposition and thus final Ag surface morphology. Prepared substrates were tested as active surfaces in SERS application. The best SERS signal enhancement was observed at 500 nm Ag nanocavities with normalised thickness Ni layer ∼0.5. Enhancement factor has been established at value 1.078 × 1010; time stability was determined within 13 weeks; charge distribution at nanocavity Ag surfaces as well as reflection spectra were calculated by FDTD method. Newly prepared nanocavity surface can be applied in SERS analysis, predominantly.
In situ analysis of dynamic laminar flow extraction using surface-enhanced Raman spectroscopy
Wang, Fei; Wang, Hua-Lin; Qiu, Yang; Chang, Yu-Long; Long, Yi-Tao
2015-01-01
In this study, we performed micro-scale dynamic laminar flow extraction and site-specific in situ chloride concentration measurements. Surface-enhanced Raman spectroscopy was utilized to investigate the diffusion process of chloride ions from an oil phase to a water phase under laminar flow. In contrast to common logic, we used SERS intensity gradients of Rhodamine 6G to quantitatively calculate the concentration of chloride ions at specific positions on a microfluidic chip. By varying the fluid flow rates, we achieved different extraction times and therefore different chloride concentrations at specific positions along the microchannel. SERS spectra from the water phase were recorded at these different positions, and the spatial distribution of the SERS signals was used to map the degree of nanoparticle aggregation. The concentration of chloride ions in the channel could therefore be obtained. We conclude that this method can be used to explore the extraction behaviour and efficiency of some ions or molecules that enhance the SERS intensity in water or oil by inducing nanoparticle aggregation. PMID:26687436
Métrich, Mélanie; Mehmeti, Fortesa; Feliciano, Helene; Martin, David; Regamey, Julien; Tozzi, Piergiorgio; Meyer, Philippe; Hullin, Roger
Maximal exercise capacity after heart transplantion (HTx) is reduced to the 50-70% level of healthy controls when assessed by cardiopulmonary exercise testing (CPET) despite of normal left ventricular function of the donor heart. This study investigates the role of donor heart β1 and β2- adrenergic receptor (AR) polymorphisms for maximal exercise capacity after orthotopic HTx. CPET measured peak VO2 as outcome parameter for maximal exercise in HTx recipients ≥9 months and ≤4 years post-transplant (n = 41; mean peak VO2: 57±15% of predicted value). Donor hearts were genotyped for polymorphisms of the β1-AR (Ser49Gly, Arg389Gly) and the β2-AR (Arg16Gly, Gln27Glu). Circumferential shortening of the left ventricle was measured using magnetic resonance based CSPAMM tagging. Peak VO2 was higher in donor hearts expressing the β1-Ser49Ser alleles when compared with β1-Gly49 carriers (60±15% vs. 47±10% of the predicted value; p = 0.015), and by trend in cardiac allografts with the β1-AR Gly389Gly vs. β1-Arg389 (61±15% vs. 54±14%, p = 0.093). Peak VO2 was highest for the haplotype Ser49Ser-Gly389, and decreased progressively for Ser49Ser-Arg389Arg > 49Gly-389Gly > 49Gly-Arg389Arg (adjusted R2 = 0.56, p = 0.003). Peak VO2 was not different for the tested β2-AR polymorphisms. Independent predictors of peak VO2 (adjusted R2 = 0.55) were β1-AR Ser49Gly SNP (p = 0.005), heart rate increase (p = 0.016), and peak systolic blood pressure (p = 0.031). Left ventricular (LV) motion kinetics as measured by cardiac MRI CSPAMM tagging at rest was not different between carriers and non-carriers of the β1-AR Gly49allele. Similar LV cardiac motion kinetics at rest in donor hearts carrying either β1-AR Gly49 or β1-Ser49Ser variant suggests exercise-induced desensitization and down-regulation of the β1-AR Gly49 variant as relevant pathomechanism for reduced peak VO2 in β1-AR Gly49 carriers.
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-05-01
Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
NASA Astrophysics Data System (ADS)
Rajesh, Y.; Sangani, L. D. Varma; Shaik, Ummar Pasha; Gaur, Anshu; Mohiddon, Md Ahamad; Krishna, M. Ghanashyam
2017-05-01
The role of dielectric surrounding over the Au nanostructure for surface plasmon resonance (SPR) behavior is investigated by scanning near field optical microscopy (SNOM). The observed optical field strengths are correlated with the surface enhanced Raman scattering (SERS) enhancement recorded for R6G molecule. Discontinuous nanostructured Au thin films are deposited by RF magnatron sputtering at very low rate on to three different dielectric substrates, ZnO, TiO2 and SiO2. These three Au/dielectric nanostructures are investigated using SNOM by illuminating it in near field and collecting in transmission far field configuration. The observed optical near field images of the three different nanostructures are discussed by taking their dielectric constant into the account. The SERS enhancements are correlated with the optical field strengths derived from the near field optical imaging.
Pérez-Lara, Ángel; Thapa, Anusa; Nyenhuis, Sarah B; Nyenhuis, David A; Halder, Partho; Tietzel, Michael; Tittmann, Kai; Cafiso, David S; Jahn, Reinhard
2016-01-01
The Ca2+-sensor synaptotagmin-1 that triggers neuronal exocytosis binds to negatively charged membrane lipids (mainly phosphatidylserine (PtdSer) and phosphoinositides (PtdIns)) but the molecular details of this process are not fully understood. Using quantitative thermodynamic, kinetic and structural methods, we show that synaptotagmin-1 (from Rattus norvegicus and expressed in Escherichia coli) binds to PtdIns(4,5)P2 via a polybasic lysine patch in the C2B domain, which may promote the priming or docking of synaptic vesicles. Ca2+ neutralizes the negative charges of the Ca2+-binding sites, resulting in the penetration of synaptotagmin-1 into the membrane, via binding of PtdSer, and an increase in the affinity of the polybasic lysine patch to phosphatidylinositol-4,5-bisphosphate (PtdIns(4,5)P2). These Ca2+-induced events decrease the dissociation rate of synaptotagmin-1 membrane binding while the association rate remains unchanged. We conclude that both membrane penetration and the increased residence time of synaptotagmin-1 at the plasma membrane are crucial for triggering exocytotic membrane fusion. DOI: http://dx.doi.org/10.7554/eLife.15886.001 PMID:27791979
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
Design Techniques for Power-Aware Combinational Logic SER Mitigation
NASA Astrophysics Data System (ADS)
Mahatme, Nihaar N.
The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. With improved performance, speed and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits due to ionizing radiation particle strikes on reverse biased semiconductor junctions. These radiation induced errors at the terrestrial-level are caused due to radiation particle strikes by (1) alpha particles emitted as decay products of packing material (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88] and more recently muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment radiation induced errors are a much bigger threat and are mainly caused by cosmic heavy-ions, protons etc. The effects of radiation exposure on circuits and measures to protect against them have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically when these particles strike semiconductor junctions of transistors that are part of feedback structures such as SRAM memory cells or flip-flops, it can lead to an inversion of the cell content. Such a failure is formally called a bit-flip or single-event upset (SEU). When such particles strike sensitive junctions part of combinational logic gates they produce transient voltage spikes or glitches called single-event transients (SETs) that could be latched by receiving flip-flops. As the circuits are clocked faster, there are more number of clocking edges which increases the likelihood of latching these transients. In older technology generations the probability of errors in flip-flops due to SETs being latched was much lower compared to direct strikes on flip-flops or SRAMs leading to SEUs. This was mainly because the operating frequencies were much lower for older technology generations. The Intel Pentium II for example was fabricated using 0.35 microm technology and operated between 200-330 MHz. With technology scaling however, operating frequencies have increased tremendously and the contribution of soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02] [Bu97]. Therefore there is a need to systematically characterize the problem of combinational logic single-event effects (SEE) and understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's Law loftily states the blessing of technology scaling to be smaller and faster transistor it fails to highlight that the power density increases exponentially with every technology generation. The power density problem was partially solved in the 1970's and 1980's by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Following this however, technology miniaturization that enabled high-speed, multicore and parallel computing has steadily increased the power density and the power consumption problem. Today minimizing the power consumption is as much critical for power hungry server farms as it for portable devices, all pervasive sensor networks and future eco-bio-sensors. Low-power consumption is now regularly part of design philosophies for various digital products with diverse applications from computing to communication to healthcare. Thus designers in today's world are left grappling with both a "power wall" as well as a "reliability wall". Unfortunately, when it comes to improving reliability through soft error mitigation, most approaches are invariably straddled with overheads in terms of area or speed and more importantly power. Thus, the cost of protecting combinational logic through the use of power hungry mitigation approaches can disrupt the power budget significantly. Therefore there is a strong need to develop techniques that can provide both power minimization as well as combinational logic soft error mitigation. This dissertation, advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft error resilient designs. Circuit as well as architectural approaches are employed to achieve this objective and the advantages of cross-layer optimization for power and soft error reliability are emphasized.
Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.
ERIC Educational Resources Information Center
Anderson, Richard C.; And Others
A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year to year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially rate of oral reading errors, and gains in reading…
Estimating genotype error rates from high-coverage next-generation sequence data.
Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil
2014-11-01
Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods. © 2014 Wall et al.; Published by Cold Spring Harbor Laboratory Press.
Zhang, Xiaoqun; Mantas, Ioannis; Alvarsson, Alexandra; Yoshitake, Takashi; Shariatgorji, Mohammadreza; Pereira, Marcela; Nilsson, Anna; Kehr, Jan; Andrén, Per E; Millan, Mark J; Chergui, Karima; Svenningsson, Per
2018-01-01
The trace amine-associated receptor 1 (TAAR1) is expressed by dopaminergic neurons, but the precise influence of trace amines upon their functional activity remains to be fully characterized. Here, we examined the regulation of tyrosine hydroxylase (TH) by tyramine and beta-phenylethylamine (β-PEA) compared to 3-iodothyronamine (T 1 AM). Immunoblotting and amperometry were performed in dorsal striatal slices from wild-type (WT) and TAAR1 knockout (KO) mice. T 1 AM increased TH phosphorylation at both Ser 19 and Ser 40 , actions that should promote functional activity of TH. Indeed, HPLC data revealed higher rates of L-dihydroxyphenylalanine (DOPA) accumulation in WT animals treated with T 1 AM after the administration of a DOPA decarboxylase inhibitor. These effects were abolished both in TAAR1 KO mice and by the TAAR1 antagonist, EPPTB. Further, they were specific inasmuch as Ser 845 phosphorylation of the post-synaptic GluA1 AMPAR subunit was unaffected. The effects of T 1 AM on TH phosphorylation at both Ser 19 (CamKII-targeted), and Ser 40 (PKA-phosphorylated) were inhibited by KN-92 and H-89, inhibitors of CamKII and PKA respectively. Conversely, there was no effect of an EPAC analog, 8-CPT-2Me-cAMP, on TH phosphorylation. In line with these data, T 1 AM increased evoked striatal dopamine release in TAAR1 WT mice, an action blunted in TAAR1 KO mice and by EPPTB. Mass spectrometry imaging revealed no endogenous T 1 AM in the brain, but detected T 1 AM in several brain areas upon systemic administration in both WT and TAAR1 KO mice. In contrast to T 1 AM, tyramine decreased the phosphorylation of Ser 40 -TH, while increasing Ser 845 -GluA1 phosphorylation, actions that were not blocked in TAAR1 KO mice. Likewise, β-PEA reduced Ser 40 -TH and tended to promote Ser 845 -GluA1 phosphorylation. The D 1 receptor antagonist SCH23390 blocked tyramine-induced Ser 845 -GluA1 phosphorylation, but had no effect on tyramine- or β-PEA-induced Ser 40 -TH phosphorylation. In conclusion, by intracellular cascades involving CaMKII and PKA, T 1 AM, but not tyramine and β-PEA, acts via TAAR1 to promote the phosphorylation and functional activity of TH in the dorsal striatum, supporting a modulatory influence on dopamine transmission.
Speech Errors across the Lifespan
ERIC Educational Resources Information Center
Vousden, Janet I.; Maylor, Elizabeth A.
2006-01-01
Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…
Computer calculated dose in paediatric prescribing.
Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C
2005-01-01
Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined as to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. The error rate in the children's emergency department was 15.7%, for outpatients was 21.5% and for discharge medication was 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6% compared with the traditional prescription error rate of 28.2%. Logistical regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were seniority and paediatric training of the person prescribing and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.
The Interaction of Statistics and Geology -- Finite Deformations.
1980-11-01
UNCLASSIFIlED TR-178-SER-2 N 7 DIZO9 f l l ff-63f ~ l f f PRNEO VN TO TTSISF61/ EOM.’..lN 11 1 .1 2I " IIIj.5IIHL4. 1. 111 1----_III MICROCOPY RESOLUTION TEST ... Ramsey (1967). These might be the result of a sequence of linear deformations or homogeneous strains. In this section we summarize the description of...problem may be found in textbooks (see e.g. Theil (1971)) on Econometrics : y=B&+f, x=&+e where the errors of measurement e and f of x and y are
Shimojima, Keiko; Higashiguchi, Takafumi; Kishimoto, Kanako; Miyatake, Satoko; Miyake, Noriko; Takanashi, Jun-ichi; Matsumoto, Naomichi; Yamamoto, Toshiyuki
2017-01-01
The mitochondrial aspartyl-tRNA synthetase 2 gene (DARS2) is responsible for leukoencephalopathy with brainstem and spinal cord involvement and lactate elevation (LBSL). A Japanese patient with LBSL showed compound heterozygous DARS2 mutations c.358_359delinsTC (p.Gly120Ser) and c.228-15C>G (splicing error). This provides further evidence that most patients with LBSL show compound heterozygous mutations in DARS2 in association with a common splicing mutation in the splicing acceptor site of intron 2. PMID:29138691
Angular rate optimal design for the rotary strapdown inertial navigation system.
Yu, Fei; Sun, Qian
2014-04-22
Due to the characteristics of high precision for a long duration, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that as one of the key technologies, the rotating angular rate seriously influences the effectiveness of the error modulating. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail based on the Laplace transform and the inverse Laplace transform in this paper. The analysis results showed that the velocity error of the RSINS depends on not only the sensor error, but also the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. One optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS.
Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.
2010-01-01
We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
Is Human Oxoguanine Glycosylase 1 Genetic Variant Successful Even on Oral Squamous Cell Carcinoma?
Aydemir, Levent; Bireller, Elif Sinem; Avci, Hakan; Boy Metin, Zeynep; Deger, Kemal; Unur, Meral; Cakmakoglu, Bedia
2017-01-01
Oral squamous cell carcinoma (OSCC) is one of the most widespread cancer types that arise from different sites of oral cavity and has a 5-year survival rate. This study is aimed at investigating the human oxoguanine glycosylase 1 (hOGG1)-Ser326Cys and APE-Asp148Glu polymorphisms of DNA repair genes in OSCC. We investigated the hOGG1-Ser326Cys and APE-Asp148Glu polymorphisms of DNA repair genes in the oral cavity. Genotyping was conducted using polymerase chain reaction-restriction fragment length polymorphism analysis based on 132 patients who were diagnosed as having OSCC and 160 healthy subjects. Individuals with the genotype hOGG1-Ser326Cys, Cys allele carriers, were found significantly more frequently in the patient group compared to the control group as increase in risk (p < 0.001). Furthermore, it was observed that there were significantly more individuals with the Ser allele in the control group (p < 0.001). Individuals with genotype APE-Asp148Glu were not statistically significant; however, they were still more in the control group and provided protection against the disease. Our findings showed that hOGG1-Ser326Cys Cys allele is statistically important and relevant with respect to the development of oral squamous cancer. In view of our results, further studies including expression levels are required in which hOGG1-Ser326Cys should be investigated as molecular biomarkers for the early prediction of squamous cell carcinoma. © 2017 S. Karger AG, Basel.
Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung
2007-03-01
The purpose of this study was to develop and evaluate an error reporting promoting program(ERPP) to systematically reduce the incidence rate of nursing errors in operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses who were in one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. Mann-Whitney U Test was used to analyze the differences pre and post incidence rates of nursing errors between the two groups. The incidence rate of nursing errors decreased significantly in the experimental group compared to the pre-test score from 28.4% to 15.7%. The incidence rate by domains, it decreased significantly in the 3 domains-"compliance of aseptic technique", "management of document", "environmental management" in the experimental group while it decreased in the control group which was applied ordinary error-reporting method. Error-reporting system can make possible to hold the errors in common and to learn from them. ERPP was effective to reduce the errors of recognition-related nursing activities. For the wake of more effective error-prevention, we will be better to apply effort of risk management along the whole health care system with this program.
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection
Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-01-01
Background The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.
Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-08-18
The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5 -resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5deg resolution is relatively small (less than 6% at 5 mm day.1) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day.1, with proportionate reductions in latent heating sampling errors.
An error criterion for determining sampling rates in closed-loop control systems
NASA Technical Reports Server (NTRS)
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O
2015-02-01
To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
Feeny, Norah C; Zoellner, Lori A; Mavissakalian, Matig R; Roy-Byrne, Peter P
2009-01-01
Both sertraline (SER) and prolonged exposure (PE) are empirically supported treatments for chronic posttraumatic stress disorder (PTSD). While efficacious, these treatments are quite different in approach, and such differences may influence both treatment choice and treatment outcome. To date, we know very little about the relative efficacy of pharmacological and psychological treatments for chronic PTSD. In Study 1, we compared rates of treatment choice (SER or PE) in 74 trauma-exposed women. In Study 2, we extended this work to an open-choice treatment trial, in which 31 female assault survivors with chronic PTSD received their choice of SER or PE for ten weeks and were followed over time. In Study 1 (82%) and Study 2 (74.2%), the majority of women chose PE. In Study 2, both SER and PE evidenced moderate to large unadjusted effect sizes, with evidence of an advantage for PE in propensity adjusted analyses at posttreatment. Women with co-occurring major depressive disorder (MDD) were more likely to choose SER than those without MDD. However, among those with MDD, the advantage of PE was particularly evident. Our results highlight the presence of clear treatment preferences for PTSD and their potential impact on outcome. This study underscores the importance of systematic study of patient preferences and encourages a rethinking of one-size fits all approaches to treatment for mental disorders. (c) 2009 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Rauert, Cassandra; Lazarov, Borislav; Harrad, Stuart; Covaci, Adrian; Stranger, Marianne
2014-01-01
The widespread use of flame retardants (FRs) in indoor products has led to their ubiquitous distribution within indoor microenvironments with many studies reporting concentrations in indoor air and dust. Little information is available however on emission of these compounds to air, particularly the measurement of specific emission rates (SERs), or the migration pathways leading to dust contamination. Such knowledge gaps hamper efforts to develop understanding of human exposure. This review summarizes published data on SERs of the following FRs released from treated products: polybrominated diphenyl ethers (PBDEs), hexabromocyclododecanes (HBCDs), tetrabromobisphenol-A (TBBPA), novel brominated flame retardants (NBFRs) and organophosphate flame retardants (PFRs), including a brief discussion of the methods used to derive these SERs. Also reviewed are published studies that utilize emission chambers for investigations/measurements of mass transfer of FRs to dust, discussing the chamber configurations and methods used for these experiments. A brief review of studies investigating correlations between concentrations detected in indoor air/dust and possible sources in the microenvironment is included along with efforts to model contamination of indoor environments. Critical analysis of the literature reveals that the major limitations with utilizing chambers to derive SERs for FRs arise due to the physicochemical properties of FRs. In particular, increased partitioning to chamber surfaces, airborne particles and dust, causes loss through “sink” effects and results in long times to reach steady state conditions inside the chamber. The limitations of chamber experiments are discussed as well as their potential for filling gaps in knowledge in this area.
Wang, Yu W; Doerksen, Josh D; Kang, Soyoung; Walsh, Daniel; Yang, Qian; Hong, Daniel; Liu, Jonathan T C
2016-10-01
There is a need for intraoperative imaging technologies to guide breast-conserving surgeries and to reduce the high rates of re-excision for patients in which residual tumor is found at the surgical margins during postoperative pathology analyses. Feasibility studies have shown that utilizing topically applied surface-enhanced Raman scattering (SERS) nanoparticles (NPs), in conjunction with the ratiometric imaging of targeted versus untargeted NPs, enables the rapid visualization of multiple cell-surface biomarkers of cancer that are overexpressed at the surfaces of freshly excised breast tissues. In order to reliably and rapidly perform multiplexed Raman-encoded molecular imaging of large numbers of biomarkers (with five or more NP flavors), an enhanced staining method has been developed in which tissue surfaces are cyclically dipped into an NP-staining solution and subjected to high-frequency mechanical vibration. This dipping and mechanical vibration (DMV) method promotes the convection of the SERS NPs at fresh tissue surfaces, which accelerates their binding to their respective biomarker targets. By utilizing a custom-developed device for automated DMV staining, this study demonstrates the ability to simultaneously image four cell-surface biomarkers of cancer at the surfaces of fresh human breast tissues with a mixture of five flavors of SERS NPs (four targeted and one untargeted control) topically applied for 5 min and imaged at a spatial resolution of 0.5 mm and a raster-scanned imaging rate of >5 cm 2 min -1 . © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Technological Advancements and Error Rates in Radiation Therapy Delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui
2011-11-15
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system atmore » Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.« less
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error ratemore » measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. The six enzymes included in the study, Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase, we find the lowest error rates with Pfu , Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.« less
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is condition of air in a certain region at a relatively short period of time, measured with various parameters such as; temperature, air preasure, wind velocity, humidity and another phenomenons in the atmosphere. In fact, extreme weather due to global warming would lead to drought, flood, hurricane and other forms of weather occasion, which directly affects social andeconomic activities. Hence, a forecasting technique is to predict weather with distinctive output, particullary mapping process based on GIS with information about current weather status in certain cordinates of each region with capability to forecast for seven days afterward. Data used in this research are retrieved in real time from the server openweathermap and BMKG. In order to obtain a low error rate and high accuracy of forecasting, the authors use Bayesian Model Averaging (BMA) method. The result shows that the BMA method has good accuracy. Forecasting error value is calculated by mean square error shows (MSE). The error value emerges at minumum temperature rated at 0.28 and maximum temperature rated at 0.15. Meanwhile, the error value of minimum humidity rates at 0.38 and the error value of maximum humidity rates at 0.04. Afterall, the forecasting error rate of wind speed is at 0.076. The lower the forecasting error rate, the more optimized the accuracy is.
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Fisher, Brad L.; Wolff, David B.
2007-01-01
This paper describes the cubic spline based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to the TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured from the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one minute rain rates are averaged to 4-7 minute or longer time scales, the errors dramatically reduce. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that the events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at 7-minute scale. The radar reflectivity-rainrate (Ze-R) distributions drawn from large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
Approximation of Bit Error Rates in Digital Communications
2007-06-01
and Technology Organisation DSTO—TN—0761 ABSTRACT This report investigates the estimation of bit error rates in digital communi- cations, motivated by...recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase
Arba-Mosquera, Samuel; Aslanides, Ioannis M.
2012-01-01
Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model, which directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature, as well as, eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay have been developed. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates basically duplicate pulse-positioning errors. Laser trigger delays to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than observed decentrations in the clinical settings. There is no single parameter that ‘alone’ minimizes the positioning error. It is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3 out of 7 model, which is well over a 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System
Yu, Fei; Sun, Qian
2014-01-01
Due to the characteristics of high precision for a long duration, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that as one of the key technologies, the rotating angular rate seriously influences the effectiveness of the error modulating. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail based on the Laplace transform and the inverse Laplace transform in this paper. The analysis results showed that the velocity error of the RSINS depends on not only the sensor error, but also the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. One optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS. PMID:24759115
Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.
Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D
2016-10-01
Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Ikegami, Yosuke; Goto, Hidemasa; Kiyono, Tohru; Enomoto, Masato; Kasahara, Kousuke; Tomono, Yasuko; Tozawa, Keiichi; Morita, Akimichi; Kohri, Kenjiro; Inagaki, Masaki
2008-12-26
We previously reported Chk1 to be phosphorylated at Ser286 and Ser301 by cyclin-dependent kinase (Cdk) 1 during mitosis [T. Shiromizu et al., Genes Cells 11 (2006) 477-485]. Here, we demonstrated that Chk1-Ser286 and -Ser301 phosphorylation also occurs in hydroxyurea (HU)-treated or ultraviolet (UV)-irradiated cells. Unlike the mitosis case, however, Chk1 was phosphorylated not only at Ser286 and Ser301 but also at Ser317 and Ser345 in the checkpoint response. Treatment with Cdk inhibitors diminished Chk1 phosphorylation at Ser286 and Ser301 but not at Ser317 and Ser345 with the latter. In vitro analyses revealed Ser286 and Ser301 on Chk1 to serve as two major phosphorylation sites for Cdk2. Immunoprecipitation analyses further demonstrated that Ser286/Ser301 and Ser317/Ser345 phosphorylation occur in the same Chk1 molecule during the checkpoint response. In addition, Ser286/Ser301 phosphorylation by Cdk2 was observed in Chk1 mutated to Ala at Ser317 and Ser345 (S317A/S345A), as well as Ser317/Ser345 phosphorylation by ATR was in S286A/S301A. Therefore, Chk1 phosphorylation in the checkpoint response is regulated not only by ATR but also by Cdk2.
Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang
2018-05-04
The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technology method called Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In fact, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including acceleration-deceleration process, and instability of the angular rate on the navigation accuracy of RSSINS is deduced and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2013 CFR
2013-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2014 CFR
2014-10-01
....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2012 CFR
2012-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2011 CFR
2011-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
Impact of an antiretroviral stewardship strategy on medication error rates.
Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E
2018-05-02
The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate ( p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error ( p < 0.001) and those with 2 or more errors ( p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention ( p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Gewandter, Jennifer S; Bambara, Robert A
2011-01-01
DNA damage, stalled replication forks, errors in mRNA splicing and availability of nutrients activate specific phosphatidylinositiol-3-kinase-like kinases (PIKKs) that in turn phosphorylate downstream targets such as p53 on serine 15. While the PIKK proteins ATM and ATR respond to specific DNA lesions, SMG1 responds to errors in mRNA splicing and when cells are exposed to genotoxic stress. Yet, whether genotoxic stress activates SMG1 through specific types of DNA lesions or RNA damage remains poorly understood. Here, we demonstrate that siRNA oligonucleotides targeting the mRNA surveillance proteins SMG1, Upf1, Upf2 or the PIKK protein ATM attenuated p53 (ser15) phosphorylation in cells damaged by high oxygen (hyperoxia), a model of persistent oxidative stress that damages nucleotides. In contrast, loss of SMG1 or ATM, but not Upf1 or Upf2 reduced p53 (ser15) phosphorylation in response to DNA double strand breaks produced by expression of the endonuclease I-PpoI. To determine whether SMG1-dependent activation of p53 was in response to oxidative mRNA damage, mRNA encoding green fluorescence protein (GFP) transcribed in vitro was oxidized by Fenton chemistry and transfected into cells. Although oxidation of GFP mRNA resulted in dose-dependent fragmentation of the mRNA and reduced expression of GFP, it did not stimulate p53 or the p53-target gene p21. These findings establish SMG1 activates p53 in response to DNA double strand breaks independent of the RNA surveillance proteins Upf1 or Upf2; however, these proteins can stimulate p53 in response to oxidative stress but not necessarily oxidized RNA. PMID:21701263
Dunsmore, Kimberly P; Devidas, Meenakshi; Linda, Stephen B; Borowitz, Michael J; Winick, Naomi; Hunger, Stephen P; Carroll, William L; Camitta, Bruce M
2012-08-01
Children's Oncology Group study AALL00P2 was designed to assess the feasibility and safety of adding nelarabine to a BFM 86-based chemotherapy regimen in children with newly diagnosed T-cell acute lymphoblastic leukemia (T-ALL). In stage one of the study, eight patients with a slow early response (SER) by prednisone poor response (PPR; ≥ 1,000 peripheral blood blasts on day 8 of prednisone prephase) received chemotherapy plus six courses of nelarabine 400 mg/m(2) once per day; four patients with SER by high minimal residual disease (MRD; ≥ 1% at day 36 of induction) received chemotherapy plus five courses of nelarabine; 16 patients with a rapid early response (RER) received chemotherapy without nelarabine. In stage two, all patients received six 5-day courses of nelarabine at 650 mg/m(2) once per day (10 SER patients [one by MRD, nine by PPR]) or 400 mg/m(2) once per day (38 RER patients; 12 SER patients [three by MRD, nine by PPR]). The only significant difference in toxicities was decreased neutropenic infections in patients treated with nelarabine (42% with v 81% without nelarabine). Five-year event-free survival (EFS) rates were 73% for 11 stage one SER patients and 67% for 22 stage two SER patients treated with nelarabine versus 69% for 16 stage one RER patients treated without nelarabine and 74% for 38 stage two RER patients treated with nelarabine. Five-year EFS for all patients receiving nelarabine (n = 70) was 73% versus 69% for those treated without nelarabine (n = 16). Addition of nelarabine to a BFM 86-based chemotherapy regimen was well tolerated and produced encouraging results in pediatric patients with T-ALL, particularly those with a SER, who have historically fared poorly.
Dunsmore, Kimberly P.; Devidas, Meenakshi; Linda, Stephen B.; Borowitz, Michael J.; Winick, Naomi; Hunger, Stephen P.; Carroll, William L.; Camitta, Bruce M.
2012-01-01
Purpose Children's Oncology Group study AALL00P2 was designed to assess the feasibility and safety of adding nelarabine to a BFM 86–based chemotherapy regimen in children with newly diagnosed T-cell acute lymphoblastic leukemia (T-ALL). Patients and Methods In stage one of the study, eight patients with a slow early response (SER) by prednisone poor response (PPR; ≥ 1,000 peripheral blood blasts on day 8 of prednisone prephase) received chemotherapy plus six courses of nelarabine 400 mg/m2 once per day; four patients with SER by high minimal residual disease (MRD; ≥ 1% at day 36 of induction) received chemotherapy plus five courses of nelarabine; 16 patients with a rapid early response (RER) received chemotherapy without nelarabine. In stage two, all patients received six 5-day courses of nelarabine at 650 mg/m2 once per day (10 SER patients [one by MRD, nine by PPR]) or 400 mg/m2 once per day (38 RER patients; 12 SER patients [three by MRD, nine by PPR]). Results The only significant difference in toxicities was decreased neutropenic infections in patients treated with nelarabine (42% with v 81% without nelarabine). Five-year event-free survival (EFS) rates were 73% for 11 stage one SER patients and 67% for 22 stage two SER patients treated with nelarabine versus 69% for 16 stage one RER patients treated without nelarabine and 74% for 38 stage two RER patients treated with nelarabine. Five-year EFS for all patients receiving nelarabine (n = 70) was 73% versus 69% for those treated without nelarabine (n = 16). Conclusion Addition of nelarabine to a BFM 86–based chemotherapy regimen was well tolerated and produced encouraging results in pediatric patients with T-ALL, particularly those with a SER, who have historically fared poorly. PMID:22734022
Kotsakis, Stathis D; Miriagou, Vivi; Tzelepi, Eva; Tzouvelekis, Leonidas S
2010-11-01
In GES-type β-lactamases, positions 104 and 170 are occupied by Glu or Lys and by Gly, Asn, or Ser, respectively. Previous studies have indicated an important role of these amino acids in the interaction with β-lactams, although their precise role, especially that of residue 104, remains uncertain. In this study, we constructed GES-1 (Glu104, Gly170), GES-2 (Glu104, Asn170), GES-5 (Glu104, Ser170), GES-6 (Lys104, Ser170), GES-7 (Lys104, Gly170), and GES-13 (Lys104, Asn170) by site-specific mutagenesis and compared their hydrolytic properties. Isogenic comparisons of β-lactam resistance levels conferred by these GES variants were also performed. Data indicated the following patterns: (i) Lys104-containing enzymes exhibited enhanced hydrolysis of oxyimino-cephalosporins and reduced efficiency against imipenem in relation to enzymes possessing Glu104, (ii) Asn170-containing enzymes showed reduced hydrolysis rates of penicillins and older cephalosporins, (iii) Ser170 enabled GES to hydrolyze cefoxitin efficiently, and (iv) Asn170 and Ser170 increased the carbapenemase character of GES enzymes but reduced their activity against ceftazidime. Molecular dynamic simulations of GES apoenzyme models, as well as construction of GES structures complexed with cefoxitin and an achiral ceftazidime-like boronic acid, provided insights into the catalytic behavior of the studied mutants. There were indications that an increased stability of the hydrogen bonding network of Glu166-Lys73-Ser70 and an altered positioning of Trp105 correlated with the substrate spectra, especially with acylation of GES by imipenem. Furthermore, likely effects of Ser170 on GES interactions with cefoxitin and of Lys104 on interactions with oxyimino-cephalosporins were revealed. Overall, the data unveiled the importance of residues 104 and 170 in the function of GES enzymes.
Marsili, V; Nardicchi, V; Lupidi, G; Brozzetti, A; Gianfranceschi, G L
1996-12-01
Small acidic phosphorylated chromatin peptides show regulatory activity on gene expression. The peptide pyroGlu-Asp-Asp-Ser-Asp-Glu-Glu-Asn, synthesized on the basis of structural and biochemical studies, shows functional properties in vitro (phosphorylation by casein kinase II, control of DNA transcription by RNA polymerase II, inhibition of proliferation and promotion of differentiation in some cell lines) very similar to those of native chromatin peptides. In this report we show that the dansylated octapeptide Dns-Glu-Asp-Asp-Ser-Asp-Glu-Glu-Asn remarkably inhibits cell growth of the HL-60 cell line. The biological effect of the peptide seems to be considerably higher than that shown by the nondansylated peptide, and it cannot be attributed to a toxic effect of the Dns group. The measurement of uptake of 3H-labelled Glu-Asp-Asp-Ser-Asp-Glu-Glu-Asn demonstrates that it is unable to pass through the HL-60 cell membrane. It is our considered opinion that the addition of hydrophobic groups to the peptide N-terminus should increase the biological activity by improving its transport through the cellular membrane.
Karthigeyan, Dhanasekaran; Siddhanta, Soumik; Kishore, Annavarapu Hari; Perumal, Sathya S R R; Ågren, Hans; Sudevan, Surabhi; Bhat, Akshay V; Balasubramanyam, Karanam; Subbegowda, Rangappa Kanchugarakoppal; Kundu, Tapas K; Narayana, Chandrabhas
2014-07-22
We demonstrate the use of surface-enhanced Raman spectroscopy (SERS) as an excellent tool for identifying the binding site of small molecules on a therapeutically important protein. As an example, we show the specific binding of the common antihypertension drug felodipine to the oncogenic Aurora A kinase protein via hydrogen bonding interactions with Tyr-212 residue to specifically inhibit its activity. Based on SERS studies, molecular docking, molecular dynamics simulation, biochemical assays, and point mutation-based validation, we demonstrate the surface-binding mode of this molecule in two similar hydrophobic pockets in the Aurora A kinase. These binding pockets comprise the same unique hydrophobic patches that may aid in distinguishing human Aurora A versus human Aurora B kinase in vivo. The application of SERS to identify the specific interactions between small molecules and therapeutically important proteins by differentiating competitive and noncompetitive inhibition demonstrates its ability as a complementary technique. We also present felodipine as a specific inhibitor for oncogenic Aurora A kinase. Felodipine retards the rate of tumor progression in a xenografted nude mice model. This study reveals a potential surface pocket that may be useful for developing small molecules by selectively targeting the Aurora family kinases.
Zhang, Yunfei; Liu, Haoran; Tang, Jiali; Li, Zhuoyun; Zhou, Xingyu; Zhang, Ren; Chen, Liang; Mao, Ying; Li, Cong
2017-05-31
A handheld Raman detector with operational convenience, high portability, and rapid acquisition rate has been applied in clinics for diagnostic purposes. However, the inherent weakness of Raman scattering and strong scattering of the turbid tissue restricts its utilization to superficial locations. To extend the applications of a handheld Raman detector to deep tissues, a gold nanostar-based surface-enhanced Raman scattering (SERS) nanoprobe with robust colloidal stability, a fingerprint-like spectrum, and extremely high sensitivity (5.0 fM) was developed. With the assistance of FPT, a multicomponent optical clearing agent (OCA) efficiently suppressing light scattering from the turbid dermal tissues, the handheld Raman detector noninvasively visualized the subcutaneous tumor xenograft with a high target-to-background ratio after intravenous injection of the gold nanostar-based SERS nanoprobe. To the best of our knowledge, this work is the first example to introduce the optical clearing technique in assisting SERS imaging in vivo. The combination of optical clearing technology and SERS is a promising strategy for the extension of the clinical applications of the handheld Raman detector from superficial tissues to subcutaneous or even deeper lesions that are usually "concealed" by the turbid dermal tissue.
Spencer, Bruce D
2012-06-01
Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
Errors in laboratory medicine: practical lessons to improve patient safety.
Howanitz, Peter J
2005-10-01
Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.
Shaghaghi, Zahra; Abedi, Seyed Mohammad; Hosseinimehr, Seyed Jalal
2018-05-15
The early diagnosis of non-small cell lung cancer (NSCLC) is important for increasing survival rate and improving quality life of patients. The aim of this study was to investigate 99m Tc-(tricine)-HYNIC-(Ser) 3 -J18 for targeting and imaging of NSCLC in A-549 xenografted nude mice. The (Ser) 3 -J18 peptide was conjugated with HYNIC and labeled with 99m Tc using tricine as a co-ligand. The radiolabeled peptide was evaluated for its radiochemical purity, stability, receptor binding and internalization in vitro. The future experiments were performed for tumor targeting and imaging in A-549 tumor-bearing mice. 99m Tc-(tricine)-HYNIC-(Ser) 3 -J18 was obtained at high labeling efficiency at room temperature and favorable stability in saline and human plasma. At the cellular level, the radiolabeled peptide specifically bond to A-549 cells with a K D 4.1 ± 1.3 nM. Biodistribution study revealed tumor to blood and tumor to muscle ratios were about 3.12 and 5.63 respectively after 2 h injection of radiolabeled peptide. These ratios were significantly decreased by co-injection of excess non-labeled peptide in mice. This radiolabeled peptide selectively targeted to NSCLC tumor and exhibited a high target uptake combined with acceptable low background activity for tumor imaging in mice. The results of this study and its comparison with another study showed that 99m Tc-(tricine)-HYNIC-(Ser) 3 -J18 is better than previously reported radiolabeled peptide as 99m Tc-(EDDA/tricine)-HYNIC-(Ser) 3 -J18 for NSCLC targeting and imaging. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Rapid Determination of Thiabendazole Pesticides in Rape by Surface Enhanced Raman Spectroscopy
Lin, Lei; Nie, Pengcheng; Qu, Fangfang; Chu, Bingquan; Xiao, Shupei
2018-01-01
Thiabendazole is widely used in sclerotium blight, downy mildew and black rot prevention and treatment in rape. Accurate monitoring of thiabendazole pesticides in plants will prevent potential adverse effects to the Environment and human health. Surface Enhanced Raman Spectroscopy (SERS) is a highly sensitive fingerprint with the advantages of simple operation, convenient portability and high detection efficiency. In this paper, a rapid determination method of thiabendazole pesticides in rape was conducted combining SERS with chemometric methods. The original SERS were pretreated and the partial least squares (PLS) was applied to establish the prediction model between SERS and thiabendazole pesticides in rape. As a result, the SERS enhancing effect based on silver Nano-substrate was better than that of gold Nano-substrate, where the detection limit of thiabendazole pesticides in rape could reach 0.1 mg/L. Moreover, 782, 1007 and 1576 cm−1 could be determined as thiabendazole pesticides Raman characteristic peaks in rape. The prediction effect of thiabendazole pesticides in rape was the best (Rp2 = 0.94, RMSEP = 3.17 mg/L) after the original spectra preprocessed with 1st-Derivative, and the linear relevance between thiabendazole pesticides concentration and Raman peak intensity at 782 cm−1 was the highest (R2 = 0.91). Furthermore, five rape samples with unknown thiabendazole pesticides concentration were used to verify the accuracy and reliability of this method. It was showed that prediction relative standard deviation was 0.70–9.85%, recovery rate was 94.71–118.92% and t value was −1.489. In conclusion, the thiabendazole pesticides in rape could be rapidly and accurately detected by SERS, which was beneficial to provide a rapid, accurate and reliable scheme for the detection of pesticides residues in agriculture products. PMID:29617288
Rapid Determination of Thiabendazole Pesticides in Rape by Surface Enhanced Raman Spectroscopy.
Lin, Lei; Dong, Tao; Nie, Pengcheng; Qu, Fangfang; He, Yong; Chu, Bingquan; Xiao, Shupei
2018-04-04
Thiabendazole is widely used in sclerotium blight, downy mildew and black rot prevention and treatment in rape. Accurate monitoring of thiabendazole pesticides in plants will prevent potential adverse effects to the Environment and human health. Surface Enhanced Raman Spectroscopy (SERS) is a highly sensitive fingerprint with the advantages of simple operation, convenient portability and high detection efficiency. In this paper, a rapid determination method of thiabendazole pesticides in rape was conducted combining SERS with chemometric methods. The original SERS were pretreated and the partial least squares (PLS) was applied to establish the prediction model between SERS and thiabendazole pesticides in rape. As a result, the SERS enhancing effect based on silver Nano-substrate was better than that of gold Nano-substrate, where the detection limit of thiabendazole pesticides in rape could reach 0.1 mg/L. Moreover, 782, 1007 and 1576 cm −1 could be determined as thiabendazole pesticides Raman characteristic peaks in rape. The prediction effect of thiabendazole pesticides in rape was the best ( R p 2 = 0.94, RMSEP = 3.17 mg/L) after the original spectra preprocessed with 1st-Derivative, and the linear relevance between thiabendazole pesticides concentration and Raman peak intensity at 782 cm −1 was the highest ( R² = 0.91). Furthermore, five rape samples with unknown thiabendazole pesticides concentration were used to verify the accuracy and reliability of this method. It was showed that prediction relative standard deviation was 0.70–9.85%, recovery rate was 94.71–118.92% and t value was −1.489. In conclusion, the thiabendazole pesticides in rape could be rapidly and accurately detected by SERS, which was beneficial to provide a rapid, accurate and reliable scheme for the detection of pesticides residues in agriculture products.
Mariadasse, Richard; Biswal, Jayashree; Jayaprakash, Prajisha; Rao, Guru Raj; Choubey, Sanjay Kumar; Rajendran, Santhosh; Jeyakanthan, Jeyaraman
2016-01-01
Transketolase is a connecting link between glycolytic and pentose phosphate pathway, which is considered as the rate-limiting step due to synthesis of large number of ATP molecule and it can be proposed as a plausible target facilitating the growth of cancerous cells suggesting its potential role in cancer. Oxythiamine, an antimetabolite has been proved to be an efficient anticancerous compound in vitro, but its structural elucidation of the inhibitory mechanism has not yet been done against the human transketolase-like 1 protein (TKTL1). The three-dimensional (3D) structure of TKTL1 protein was modeled and subjected for refinement, stability and validation. Based on the reported homologs of transketolase (TKT), the active site residues His46, Ser49, Ser52, Ser53, Ile56, Leu82, Lys84, Leu123, Ser125, Glu128, Asp154, His160, Thr216 and Lys218 were identified and considered for molecular-modeling studies. Docking studies reveal the H-bond interactions with residues Ser49 and Lys218 that could play a major role in the activity of TKTL1. Molecular dynamics (MD) simulation study was performed to reveal the comparative stability of both native and complex forms of TKTL1. MD trajectory at 30 ns, confirm the role of active site residues Ser49, Lys84, Glu128, His160 and Lys218 in suppressing the activity of TKTL1. Glu128 is observed to be the most important residue for deprotonation state of the aminopyrimidine moiety and preferred to be the site of inhibitory action. Thus, the proposed mechanism of inhibition through in silico studies would pave the way for structure-oriented drug designing against cancer.
The statistical validity of nursing home survey findings.
Woolley, Douglas C
2011-11-01
The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?
Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina
2015-01-01
This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Errors rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615
How does aging affect the types of error made in a visual short-term memory 'object-recall' task?
Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina
2014-01-01
This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Errors rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.
Structural and Functional Adaptation of Vancomycin Resistance VanT Serine Racemases.
Meziane-Cherif, Djalal; Stogios, Peter J; Evdokimova, Elena; Egorova, Olga; Savchenko, Alexei; Courvalin, Patrice
2015-08-11
Vancomycin resistance in Gram-positive bacteria results from the replacement of the D-alanyl-D-alanine target of peptidoglycan precursors with D-alanyl-D-lactate or D-alanyl-D-serine (D-Ala-D-Ser), to which vancomycin has low binding affinity. VanT is one of the proteins required for the production of D-Ala-D-Ser-terminating precursors by converting L-Ser to D-Ser. VanT is composed of two domains, an N-terminal membrane-bound domain, likely involved in L-Ser uptake, and a C-terminal cytoplasmic catalytic domain which is related to bacterial alanine racemases. To gain insight into the molecular function of VanT, the crystal structure of the catalytic domain of VanTG from VanG-type resistant Enterococcus faecalis BM4518 was determined. The structure showed significant similarity to type III pyridoxal 5'-phosphate (PLP)-dependent alanine racemases, which are essential for peptidoglycan synthesis. Comparative structural analysis between VanTG and alanine racemases as well as site-directed mutagenesis identified three specific active site positions centered around Asn696 which are responsible for the L-amino acid specificity. This analysis also suggested that VanT racemases evolved from regular alanine racemases by acquiring additional selectivity toward serine while preserving that for alanine. The 4-fold-lower relative catalytic efficiency of VanTG against L-Ser versus L-Ala implied that this enzyme relies on its membrane-bound domain for L-Ser transport to increase the overall rate of d-Ser production. These findings illustrate how vancomycin pressure selected for molecular adaptation of a housekeeping enzyme to a bifunctional enzyme to allow for peptidoglycan remodeling, a strategy increasingly observed in antibiotic-resistant bacteria. Vancomycin is one of the drugs of last resort against Gram-positive antibiotic-resistant pathogens. However, bacteria have evolved a sophisticated mechanism which remodels the drug target, the D-alanine ending precursors in cell wall synthesis, into precursors terminating with D-lactate or D-serine, to which vancomycin has less affinity. D-Ser is synthesized by VanT serine racemase, which has two unusual characteristics: (i) it is one of the few serine racemases identified in bacteria and (ii) it contains a membrane-bound domain involved in L-Ser uptake. The structure of the catalytic domain of VanTG showed high similarity to alanine racemases, and we identified three specific active site substitutions responsible for L-Ser specificity. The data provide the molecular basis for VanT evolution to a bifunctional enzyme coordinating both transport and racemization. Our findings also illustrate the evolution of the essential alanine racemase into a vancomycin resistance enzyme in response to antibiotic pressure. Copyright © 2015 Meziane-Cherif et al.
Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.
Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep
2014-01-01
Preanalytical errors, along the process from the beginning of test requests to the admissions of the specimens to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons of rejected samples, regarding to their rates in certain test groups in our laboratory. This preliminary study was designed on the rejected samples in one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed, clotted specimen, insufficient volume of specimen and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65 %. The rejection rate of coagulation group was significantly higher (2.28%) than the other test groups (P < 0.001) including insufficient volume of specimen error rate as 1.38%. Rejection rates of hemolysis, clotted specimen and insufficient volume of sample error were found to be 8%, 24% and 34%, respectively. Total request errors, particularly, for unintelligible requests were 32% of the total for inpatients. The errors were especially attributable to unintelligible requests of inappropriate test requests, improperly labelled samples for inpatients and blood drawing errors especially due to insufficient volume of specimens in a coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejecting samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-timein vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusivemore » dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on twoin vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-timein vivo point dosimetry.« less
Error rate information in attention allocation pilot models
NASA Technical Reports Server (NTRS)
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
7 CFR 275.23 - Determination of State agency program performance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r 1″ + r 2″. (ii) If FNS determines...
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-05-01
Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (F ST ) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F ST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of F ST . In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rate was adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
Mechanism of Ribonuclease III Catalytic Regulation by Serine Phosphorylation
NASA Astrophysics Data System (ADS)
Gone, Swapna; Alfonso-Prieto, Mercedes; Paudyal, Samridhdi; Nicholson, Allen W.
2016-05-01
Ribonuclease III (RNase III) is a conserved, gene-regulatory bacterial endonuclease that cleaves double-helical structures in diverse coding and noncoding RNAs. RNase III is subject to multiple levels of control, reflective of its global regulatory functions. Escherichia coli (Ec) RNase III catalytic activity is known to increase during bacteriophage T7 infection, reflecting the expression of the phage-encoded protein kinase, T7PK. However, the mechanism of catalytic enhancement is unknown. This study shows that Ec-RNase III is phosphorylated on serine in vitro by purified T7PK, and identifies the targets as Ser33 and Ser34 in the N-terminal catalytic domain. Kinetic experiments reveal a 5-fold increase in kcat and a 1.4-fold decrease in Km following phosphorylation, providing a 7.4-fold increase in catalytic efficiency. Phosphorylation does not change the rate of substrate cleavage under single-turnover conditions, indicating that phosphorylation enhances product release, which also is the rate-limiting step in the steady-state. Molecular dynamics simulations provide a mechanism for facilitated product release, in which the Ser33 phosphomonoester forms a salt bridge with the Arg95 guanidinium group, thereby weakening RNase III engagement of product. The simulations also show why glutamic acid substitution at either serine does not confer enhancement, thus underscoring the specific requirement for a phosphomonoester.
Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F
2018-01-01
Ensuring reliable administration and reporting of cognitive screening tests are fundamental in establishing good clinical practice and research. This study captured the rate and type of errors in clinical practice, using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics ( n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or had arithmetical errors. In study 2, error rates seen in the ACE-III were reduced by 85%-93% using ACEmobile. Error rates are ubiquitous in routine clinical use of cognitive screening tests and the ACE-III. ACEmobile provides a framework for supporting reduced administration, scoring, and arithmetical error during cognitive screening.
Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B
2013-09-01
In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurements errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve (parametric errors and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfying. Moreover, the quantification of uncertainty is also satisfying since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results are much contrasted depending on the site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are eventually discussed.
Tully, Mary P; Buchan, Iain E
2009-12-01
To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median: 9 (range 4-17) collecting data per day) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) less errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.
A refinement of the combination equations for evaporation
Milly, P.C.D.
1991-01-01
Most combination equations for evaporation rely on a linear expansion of the saturation vapor-pressure curve around the air temperature. Because the temperature at the surface may differ from this temperature by several degrees, and because the saturation vapor-pressure curve is nonlinear, this approximation leads to a certain degree of error in those evaporation equations. It is possible, however, to introduce higher-order polynomial approximations for the saturation vapor-pressure curve and to derive a family of explicit equations for evaporation, having any desired degree of accuracy. Under the linear approximation, the new family of equations for evaporation reduces, in particular cases, to the combination equations of H. L. Penman (Natural evaporation from open water, bare soil and grass, Proc. R. Soc. London, Ser. A193, 120-145, 1948) and of subsequent workers. Comparison of the linear and quadratic approximations leads to a simple approximate expression for the error associated with the linear case. Equations based on the conventional linear approximation consistently underestimate evaporation, sometimes by a substantial amount. ?? 1991 Kluwer Academic Publishers.
An algorithm of improving speech emotional perception for hearing aid
NASA Astrophysics Data System (ADS)
Xi, Ji; Liang, Ruiyu; Fei, Xianju
2017-07-01
In this paper, a speech emotion recognition (SER) algorithm was proposed to improve the emotional perception of hearing-impaired people. The algorithm utilizes multiple kernel technology to overcome the drawback of SVM: slow training speed. Firstly, in order to improve the adaptive performance of Gaussian Radial Basis Function (RBF), the parameter determining the nonlinear mapping was optimized on the basis of Kernel target alignment. Then, the obtained Kernel Function was used as the basis kernel of Multiple Kernel Learning (MKL) with slack variable that could solve the over-fitting problem. However, the slack variable also brings the error into the result. Therefore, a soft-margin MKL was proposed to balance the margin against the error. Moreover, the relatively iterative algorithm was used to solve the combination coefficients and hyper-plane equations. Experimental results show that the proposed algorithm can acquire an accuracy of 90% for five kinds of emotions including happiness, sadness, anger, fear and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
On a method for generating inequalities for the zeros of certain functions
NASA Astrophysics Data System (ADS)
Gatteschi, Luigi; Giordano, Carla
2007-10-01
In this paper we describe a general procedure which yields inequalities satisfied by the zeros of a given function. The method requires the knowledge of a two-term approximation of the function with bound for the error term. The method was successfully applied many years ago [L. Gatteschi, On the zeros of certain functions with application to Bessel functions, Nederl. Akad. Wetensch. Proc. Ser. 55(3)(1952), Indag. Math. 14(1952) 224-229] and more recently too [L. Gatteschi and C. Giordano, Error bounds for McMahon's asymptotic approximations of the zeros of the Bessel functions, Integral Transform Special Functions, 10(2000) 41-56], to the zeros of the Bessel functions of the first kind. Here, we present the results of the application of the method to get inequalities satisfied by the zeros of the derivative of the function . This function plays an important role in the asymptotic study of the stationary points of the solutions of certain differential equations.
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
Yu, Xiantong; He, XiaoXiao; Yang, Taiqun; Zhao, Litao; Chen, Qichen; Zhang, Sanjun; Chen, Jinquan; Xu, Jianhua
2018-01-01
Dopamine (DA) is an important neurotransmitter in the hypothalamus and pituitary gland, which can produce a direct influence on mammals' emotions in midbrain. Additionally, the level of DA is highly related with some important neurologic diseases such as schizophrenia, Parkinson, and Huntington's diseases, etc. In light of the important roles that DA plays in the disease modulation, it is of considerable significance to develop a sensitive and reproducible approach for monitoring DA. The objective of this study was to develop an efficient approach to quantitatively monitor the level of DA using Ag nanoparticle (NP) dimers and enhanced Raman spectroscopy. Ag NP dimers were synthesized for the sensitive detection of DA via surface-enhanced Raman scattering (SERS). Citrate was used as both the capping agent of NPs and sensing agent to DA, which is self-assembled on the surface of Ag NP dimers by reacting with the surface carboxyl group to form a stable amide bond. To improve accuracy and precision, the multiplicative effects model for surface-enhanced Raman spectroscopy was utilized to analyze the SERS assays. A low limits of detection (LOD) of 20 pM and a wide linear response range from 30 pM to 300 nM were obtained for DA quantitative detection. The SERS enhancement factor was theoretically valued at approximately 10 7 by discrete dipole approximation. DA was self-assembled on the citrate capped surface of Ag NPs dimers through the amide bond. The adsorption energy was estimated to be 256 KJ/mol using the Langmuir isotherm model. The density functional theory was used to simulate the spectral characteristics of SERS during the adsorption of DA on the surface of the Ag dimers. Furthermore, to improve the accuracy and precision of quantitative analysis of SERS assays with a multiplicative effects model for surface-enhanced Raman spectroscopy. A LOD of 20 pM DA-level was obtained, and the linear response ranged from 30 pM to 300 nM for quantitative DA detection. The absolute relative percentage error was 4.22% between the real and predicted DA concentrations. This detection scheme is expected to have good applications in the prevention and diagnosis of certain diseases caused by disorders in the DA level.
Yu, Xiantong; He, XiaoXiao; Yang, Taiqun; Zhao, Litao; Chen, Qichen; Zhang, Sanjun; Chen, Jinquan; Xu, Jianhua
2018-01-01
Background Dopamine (DA) is an important neurotransmitter in the hypothalamus and pituitary gland, which can produce a direct influence on mammals’ emotions in midbrain. Additionally, the level of DA is highly related with some important neurologic diseases such as schizophrenia, Parkinson, and Huntington’s diseases, etc. In light of the important roles that DA plays in the disease modulation, it is of considerable significance to develop a sensitive and reproducible approach for monitoring DA. Purpose The objective of this study was to develop an efficient approach to quantitatively monitor the level of DA using Ag nanoparticle (NP) dimers and enhanced Raman spectroscopy. Methods Ag NP dimers were synthesized for the sensitive detection of DA via surface-enhanced Raman scattering (SERS). Citrate was used as both the capping agent of NPs and sensing agent to DA, which is self-assembled on the surface of Ag NP dimers by reacting with the surface carboxyl group to form a stable amide bond. To improve accuracy and precision, the multiplicative effects model for surface-enhanced Raman spectroscopy was utilized to analyze the SERS assays. Results A low limits of detection (LOD) of 20 pM and a wide linear response range from 30 pM to 300 nM were obtained for DA quantitative detection. The SERS enhancement factor was theoretically valued at approximately 107 by discrete dipole approximation. DA was self-assembled on the citrate capped surface of Ag NPs dimers through the amide bond. The adsorption energy was estimated to be 256 KJ/mol using the Langmuir isotherm model. The density functional theory was used to simulate the spectral characteristics of SERS during the adsorption of DA on the surface of the Ag dimers. Furthermore, to improve the accuracy and precision of quantitative analysis of SERS assays with a multiplicative effects model for surface-enhanced Raman spectroscopy. Conclusion A LOD of 20 pM DA-level was obtained, and the linear response ranged from 30 pM to 300 nM for quantitative DA detection. The absolute relative percentage error was 4.22% between the real and predicted DA concentrations. This detection scheme is expected to have good applications in the prevention and diagnosis of certain diseases caused by disorders in the DA level. PMID:29713165
Lee, Mian Rong; Lee, Hiang Kwee; Yang, Yijie; Koh, Charlynn Sher Lin; Lay, Chee Leng; Lee, Yih Hong; Phang, In Yee; Ling, Xing Yi
2017-11-15
We demonstrate a one-step precise direct metal writing of well-defined and densely packed gold nanoparticle (AuNP) patterns with tunable physical and optical properties. We achieve this by using two-photon lithography on a Au precursor comprising poly(vinylpyrrolidone) (PVP) and ethylene glycol (EG), where EG promotes higher reduction rates of Au(III) salt via polyol reduction. Hence, clusters of monodisperse AuNP are generated along raster scanning of the laser, forming high-particle-density, well-defined structures. By varying the PVP concentration, we tune the AuNP size from 27.3 to 65.0 nm and the density from 172 to 965 particles/μm 2 , corresponding to a surface roughness of 12.9 to 67.1 nm, which is important for surface-based applications such as surface-enhanced Raman scattering (SERS). We find that the microstructures exhibit an SERS enhancement factor of >10 5 and demonstrate remote writing of well-defined Au microstructures within a microfluidic channel for the SERS detection of gaseous molecules. We showcase in situ SERS monitoring of gaseous 4-methylbenzenethiol and real-time detection of multiple small gaseous species with no specific affinity to Au. This one-step, laser-induced fabrication of AuNP microstructures ignites a plethora of possibilities to position desired patterns directly onto or within most surfaces for the future creation of multifunctional lab-on-a-chip devices.
The hOGG1 Ser326Cys polymorphism and male subfertility in Taiwanese patients with varicocele.
Chen, S S-S; Chiu, L-P
2018-03-26
To investigate the association between the human 8-oxoguanine DNA glycosylase 1 (hOGG1) gene Ser326Cys polymorphism and male subfertility in Taiwanese patients with varicocele, we made a prospective study. Ninety young male patients with varicocele (group 1), 50 young male patients with subclinical varicocele (group 2) and 30 normal young male patients without varicocele (group 3) were recruited in this study. The hOGG1 null homozygous genotype (Cys/Cys) and the occurrence of a 4,977-bp deletion in mitochondrial DNA and mitochondrial copy number in spermatozoa were determined by polymerase chain reaction. The 8-hydroxy-2'-deoxyguanosine (8-OHdG) content of DNA in the spermatozoa was measured using high-performance liquid chromatography, and total antioxidant capacity (TAC) of seminal plasma was detected electrochemically. The rates of male subfertility were 31.1% (28/90) in group 1 and 22% (11/50) in group 2. Of 39 subfertile men, 74.4% (29/39) had the hOGG1 Cys/Cys genotype. Patients in groups 1 and 2 with hOGG1 Cys/Cys genotype had significantly higher 8-OHdG content in sperm DNA, lower mitochondrial copy number in spermatozoa and lower TAC in seminal plasma than those with Ser/Ser or Ser/Cys genotype. Clinicians should pay more attention to patients with varicocele with the hOGG1 Cys/Cys genotype. © 2018 Blackwell Verlag GmbH.
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang
2018-01-01
The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technology method called Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In fact, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including acceleration-deceleration process, and instability of the angular rate on the navigation accuracy of RSSINS is deduced and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions. PMID:29734707
2017-01-01
Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or “facts,” are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval). PMID:28910404
Mogull, Scott A
2017-01-01
Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).
Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro
2016-01-01
There are many reports regarding various medical institutions' attempts at the prevention of dispensing errors. However, the relationship between occurrence timing of dispensing errors and subsequent danger to patients has not been studied under the situation according to the classification of drugs by efficacy. Therefore, we analyzed the relationship between position and time regarding the occurrence of dispensing errors. Furthermore, we investigated the relationship between occurrence timing of them and danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of its drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group), into three classes in terms of the occurrence timing of dispensing errors (initial phase errors, middle phase errors, final phase errors). Then, the rates of damage shifting from "dispensing errors" to "damage to patients" were compared as an index of danger between two groups and among three classes. Consequently, the rate of damage in "efficacy similarity (-) group" was significantly higher than that in "efficacy similarity (+) group". Furthermore, the rate of damage is the highest in "initial phase errors", the lowest in "final phase errors" among three classes. From the results of this study, it became clear that the earlier the timing of dispensing errors occurs, the more severe the damage to patients becomes.
ERIC Educational Resources Information Center
Birjandi, Parviz; Siyyari, Masood
2016-01-01
This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…
National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?
ERIC Educational Resources Information Center
Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.
2010-01-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…
The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency
ERIC Educational Resources Information Center
Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ
2012-01-01
This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 second and 974 third graders. Results found a significant relationship between error rate, oral reading fluency, and reading comprehension performance, and…
What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2013-01-01
This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Decrease in medical command errors with use of a "standing orders" protocol system.
Holliman, C J; Wuerz, R C; Meador, S A
1994-05-01
The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine if the on-scene time interval was increased by the standing orders system. Prospectively conducted audit of prehospital advanced life support (ALS) trip sheets was made at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start time of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols as judged by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. Two thousand one ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate was decreased from the 2.6% rate in the previous study (P < .0001 by chi 2 analysis). The on-scene time interval did not increase with the "standing orders" system.(ABSTRACT TRUNCATED AT 250 WORDS)
Quantifying Data Quality for Clinical Trials Using Electronic Data Capture
Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.
2008-01-01
Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958
NASA Technical Reports Server (NTRS)
Safren, H. G.
1987-01-01
The effect of atmospheric turbulence on the bit error rate of a space-to-ground near infrared laser communications link is investigated, for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a db.
Chk1 and Mps1 jointly regulate correction of merotelic kinetochore attachments.
Petsalaki, Eleni; Zachos, George
2013-03-01
If uncorrected, merotelic kinetochore attachments can induce mis-segregated chromosomes in anaphase. We show that checkpoint kinase 1 (Chk1) protects vertebrate cells against merotelic attachments and lagging chromosomes and is required for correction of merotelic attachments during a prolonged metaphase. Decreased Chk1 activity leads to hyper-stable kinetochore microtubules, unstable binding of MCAK, Kif2b and Mps1 to centromeres or kinetochores and reduced phosphorylation of Hec1 by Aurora-B. Phosphorylation of Aurora-B at serine 331 (Ser331) by Chk1 is high in prometaphase and decreases significantly in metaphase cells. We propose that Ser331 phosphorylation is required for optimal localization of MCAK, Kif2b and Mps1 to centromeres or kinetochores and for Hec1 phosphorylation. Furthermore, inhibition of Mps1 activity diminishes initial recruitment of MCAK and Kif2b to centromeres or kinetochores, impairs Hec1 phosphorylation and exacerbates merotelic attachments in Chk1-deficient cells. We propose that Chk1 and Mps1 jointly regulate Aurora-B, MCAK, Kif2b and Hec1 to correct merotelic attachments. These results suggest a role for Chk1 and Mps1 in error correction.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
A prospective audit of a nurse independent prescribing within critical care.
Carberry, Martin; Connelly, Sarah; Murphy, Jennifer
2013-05-01
To determine the prescribing activity of different staff groups within intensive care unit (ICU) and combined high dependency unit (HDU), namely trainee and consultant medical staff and advanced nurse practitioners in critical care (ANPCC); to determine the number and type of prescription errors; to compare error rates between prescribing groups and to raise awareness of prescribing activity within critical care. The introduction of government legislation has led to the development of non-medical prescribing roles in acute care. This has facilitated an opportunity for the ANPCC working in critical care to develop a prescribing role. The audit was performed over 7 days (Monday-Sunday), on rolling days over a 7-week period in September and October 2011 in three ICUs. All drug entries made on the ICU prescription by the three groups, trainee medical staff, ANPCCs and consultant anaesthetists, were audited once for errors. Data were collected by reviewing all drug entries for errors namely, patient data, drug dose, concentration, rate and frequency, legibility and prescriber signature. A paper data collection tool was used initially; data was later entered onto a Microsoft Access data base. A total of 1418 drug entries were audited from 77 patient prescription Cardexes. Error rates were reported as, 40 errors in 1418 prescriptions (2·8%): ANPCC errors, n = 2 in 388 prescriptions (0·6%); trainee medical staff errors, n = 33 in 984 (3·4%); consultant errors, n = 5 in 73 (6·8%). The error rates were significantly different for different prescribing groups (p < 0·01). This audit shows that prescribing error rates were low (2·8%). Having the lowest error rate, the nurse practitioners are at least as effective as other prescribing groups within this audit, in terms of errors only, in prescribing diligence. National data is required in order to benchmark independent nurse prescribing practice in critical care. These findings could be used to inform research and role development within the critical care. © 2012 The Authors. Nursing in Critical Care © 2012 British Association of Critical Care Nurses.
Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja
2016-12-21
Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.
2011-01-01
Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2010-01-01
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De
2016-01-01
The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M
2015-06-01
Prescription errors occur frequently in pediatric emergency departments (PEDs).The effect of computerized physician order entry (CPOE) with electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription errors rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient, medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in the errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, 11% of true dosing alerts for medication errors were overridden by the prescribers: 88 (11.3%) resulted in medication errors, and 684 (88.6%) were false-positive alerts. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
Performance improvement of robots using a learning control scheme
NASA Technical Reports Server (NTRS)
Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.
1987-01-01
Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J
2016-09-09
A new approach to administering the surgical safety checklist (SSC) at our institution using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by total number of specimens. Rates from the two periods were compared using a chi square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
Citation Help in Databases: The More Things Change, the More They Stay the Same
ERIC Educational Resources Information Center
Van Ullen, Mary; Kessler, Jane
2012-01-01
In 2005, the authors reviewed citation help in databases and found an error rate of 4.4 errors per citation. This article describes a follow-up study that revealed a modest improvement in the error rate to 3.4 errors per citation, still unacceptably high. The most problematic area was retrieval statements. The authors conclude that librarians…
ERIC Educational Resources Information Center
Hodgson, Catherine; Lambon Ralph, Matthew A.
2008-01-01
Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…
Physical fault tolerance of nanoelectronics.
Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N
2011-04-29
The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.
Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.
1999-01-01
Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
Phosphorylation of cMyBP-C Affects Contractile Mechanisms in a Site-specific Manner
Wang, Li; Ji, Xiang; Barefield, David; Sadayappan, Sakthivel; Kawai, Masakata
2014-01-01
Cardiac myosin binding protein-C (cMyBP-C) is a cardiac-specific, thick-filament regulatory protein that is differentially phosphorylated at Ser273, Ser282, and Ser302 by various kinases and modulates contraction. In this study, phosphorylation-site-specific effects of cMyBP-C on myocardial contractility and cross-bridge kinetics were studied by sinusoidal analysis in papillary and trabecular muscle fibers isolated from t/t (cMyBP-C-null) mice and in their counterparts in which cMyBP-C contains the ADA (Ala273-Asp282-Ala302), DAD (Asp273-Ala282-Asp302), and SAS (Ser273-Ala282-Ser302) mutations; the results were compared to those from mice expressing the wild-type (WT) transgene on the t/t background. Under standard activating conditions, DAD fibers showed significant decreases in tension (∼50%), stiffness, the fast apparent rate constant 2πc, and its magnitude C, as well as its magnitude H, but an increase in the medium rate constant 2πb, with respect to WT. The t/t fibers showed a smaller drop in stiffness and a significant decrease in 2πc that can be explained by isoform shift of myosin heavy chain. In the pCa-tension study using the 8 mM phosphate (Pi) solution, there was hardly any difference in Ca2+ sensitivity (pCa50) and cooperativity (nH) between the mutant and WT samples. However, in the solutions without Pi, DAD showed increased nH and slightly decreased pCa50. We infer from these observations that the nonphosphorylatable residue 282 combined with phosphomimetic residues Asp273 and/or Asp302 (in DAD) is detrimental to cardiomyocytes by lowering isometric tension and altering cross-bridge kinetics with decreased 2πc and increased 2πb. In contrast, a single change of residue 282 to nonphosphorylatable Ala (SAS), or to phosphomimetic Asps together with the changes of residues 273 and 302 to nonphosphorylatable Ala (ADA) causes minute changes in fiber mechanics. PMID:24606935
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pletneva, Nadya V.; Pletnev, Sergei; Pakhomov, Alexey A.
The fluorescent protein fromDendronephthyasp. (DendFP) is a member of the Kaede-like group of photoconvertible fluorescent proteins with a His62-Tyr63-Gly64 chromophore-forming sequence. Upon irradiation with UV and blue light, the fluorescence of DendFP irreversibly changes from green (506 nm) to red (578 nm). The photoconversion is accompanied by cleavage of the peptide backbone at the C α—N bond of His62 and the formation of a terminal carboxamide group at the preceding Leu61. The resulting double C α=C βbond in His62 extends the conjugation of the chromophore π system to include imidazole, providing the red fluorescence. Here, the three-dimensional structures of nativemore » green and photoconverted red forms of DendFP determined at 1.81 and 2.14 Å resolution, respectively, are reported. This is the first structure of photoconverted red DendFP to be reported to date. The structure-based mutagenesis of DendFP revealed an important role of positions 142 and 193: replacement of the original Ser142 and His193 caused a moderate red shift in the fluorescence and a considerable increase in the photoconversion rate. It was demonstrated that hydrogen bonding of the chromophore to the Gln116 and Ser105 cluster is crucial for variation of the photoconversion rate. The single replacement Gln116Asn disrupts the hydrogen bonding of Gln116 to the chromophore, resulting in a 30-fold decrease in the photoconversion rate, which was partially restored by a further Ser105Asn replacement.« less
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
Organizational safety culture and medical error reporting by Israeli nurses.
Kagan, Ilya; Barnoy, Sivia
2013-09-01
To investigate the association between patient safety culture (PSC) and the incidence and reporting rate of medical errors by Israeli nurses. Self-administered structured questionnaires were distributed to a convenience sample of 247 registered nurses enrolled in training programs at Tel Aviv University (response rate = 91%). The questionnaire's three sections examined the incidence of medication mistakes in clinical practice, the reporting rate for these errors, and the participants' views and perceptions of the safety culture in their workplace at three levels (organizational, departmental, and individual performance). Pearson correlation coefficients, t tests, and multiple regression analysis were used to analyze the data. Most nurses encountered medical errors from a daily to a weekly basis. Six percent of the sample never reported their own errors, while half reported their own errors "rarely or sometimes." The level of PSC was positively and significantly correlated with the error reporting rate. PSC, place of birth, error incidence, and not having an academic nursing degree were significant predictors of error reporting, together explaining 28% of variance. This study confirms the influence of an organizational safety climate on readiness to report errors. Senior healthcare executives and managers can make a major impact on safety culture development by creating and promoting a vision and strategy for quality and safety and fostering their employees' motivation to implement improvement programs at the departmental and individual level. A positive, carefully designed organizational safety culture can encourage error reporting by staff and so improve patient safety. © 2013 Sigma Theta Tau International.
Software for Quantifying and Simulating Microsatellite Genotyping Error
Johnson, Paul C.D.; Haydon, Daniel T.
2007-01-01
Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126
Westbrook, Johanna I; Raban, Magdalena Z; Walter, Scott R; Douglas, Heather
2018-01-09
Interruptions and multitasking have been demonstrated in experimental studies to reduce individuals' task performance. These behaviours are frequently used by clinicians in high-workload, dynamic clinical environments, yet their effects have rarely been studied. To assess the relative contributions of interruptions and multitasking by emergency physicians to prescribing errors. 36 emergency physicians were shadowed over 120 hours. All tasks, interruptions and instances of multitasking were recorded. Physicians' working memory capacity (WMC) and preference for multitasking were assessed using the Operation Span Task (OSPAN) and Inventory of Polychronic Values. Following observation, physicians were asked about their sleep in the previous 24 hours. Prescribing errors were used as a measure of task performance. We performed multivariate analysis of prescribing error rates to determine associations with interruptions and multitasking, also considering physician seniority, age, psychometric measures, workload and sleep. Physicians experienced 7.9 interruptions/hour. 28 clinicians were observed prescribing 239 medication orders which contained 208 prescribing errors. While prescribing, clinicians were interrupted 9.4 times/hour. Error rates increased significantly if physicians were interrupted (rate ratio (RR) 2.82; 95% CI 1.23 to 6.49) or multitasked (RR 1.86; 95% CI 1.35 to 2.56) while prescribing. Having below-average sleep showed a >15-fold increase in clinical error rate (RR 16.44; 95% CI 4.84 to 55.81). WMC was protective against errors; for every 10-point increase on the 75-point OSPAN, a 19% decrease in prescribing errors was observed. There was no effect of polychronicity, workload, physician gender or above-average sleep on error rates. Interruptions, multitasking and poor sleep were associated with significantly increased rates of prescribing errors among emergency physicians. WMC mitigated the negative influence of these factors to an extent. These results confirm experimental findings in other fields and raise questions about the acceptability of the high rates of multitasking and interruption in clinical environments. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers
NASA Technical Reports Server (NTRS)
Ha, Eunho; North, Gerald R.
1995-01-01
Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notionof climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error ratemore » measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. The six enzymes included in the study, Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase, we find the lowest error rates with Pfu , Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.« less
NASA Astrophysics Data System (ADS)
Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.
2017-01-01
Refractive errors are abnormalities of the refraction of light so that the shadows do not focus precisely on the retina resulting in blurred vision [1]. Refractive errors causing the patient should wear glasses or contact lenses in order eyesight returned to normal. The use of glasses or contact lenses in a person will be different from others, it is influenced by patient age, the amount of tear production, vision prescription, and astigmatic. Because the eye is one organ of the human body is very important to see, then the accuracy in determining glasses or contact lenses which will be used is required. This research aims to develop a decision support system that can produce output on the right contact lenses for refractive errors patients with a value of 100% accuracy. Iterative Dichotomize Three (ID3) classification methods will generate gain and entropy values of attributes that include code sample data, age of the patient, astigmatic, the ratio of tear production, vision prescription, and classes that will affect the outcome of the decision tree. The eye specialist test result for the training data obtained the accuracy rate of 96.7% and an error rate of 3.3%, the result test using confusion matrix obtained the accuracy rate of 96.1% and an error rate of 3.1%; for the data testing obtained accuracy rate of 100% and an error rate of 0.
NASA Astrophysics Data System (ADS)
Lin, Xueliang; Ge, Xiaosong; Xu, Zhihong; Zheng, Zuci; Huang, Wei; Hong, Quanxing; Lin, Duo
2016-10-01
The early cancer detection is of great significance to increase the patient's survival rate and reduce the risk of cancer development. Surface enhanced Raman spectroscopy (SERS) technique, a rapid, convenient, nondestructive optical detection method, can provide a characteristic "fingerprint" information of target substances, even achieving single molecule detection. Its ultra-high detection sensitivity has made it become one of the most potential biochemical detection methods. Saliva, a multi-constituent oral fluid, contains the bio-markers which is capable of reflecting the systemic health condition of human, showing promising potential as an effect medium for disease monitoring. Compared with the serum samples, the collection and processing of saliva is safer, more convenient and noninvasive. Thus, saliva test is becoming the hotspot issues of the noninvasive cancer research field. This review highlights and analyzes current application progress within the field of SERS saliva test in cancer detection. Meanwhile, the primary research results of SERS saliva for the noninvasive differentiation of nasopharyngeal cancer, normal and rhinitis obtained by our group are shown.
Kéri, Szabolcs; Juhász, Anna; Rimanóczy, Agnes; Szekeres, György; Kelemen, Oguz; Cimmer, Csongor; Szendi, István; Benedek, György; Janka, Zoltán
2005-06-01
In this study, the authors investigated the relationship between the Ser9Gly (SG) polymorphism of the dopamine D3 receptor (DRD3) and striatal habit learning in healthy controls and patients with schizophrenia. Participants were given the weather prediction task, during which probabilistic cue-response associations were learned for tarot cards and weather outcomes (rain or sunshine). In both healthy controls and patients with schizophrenia, participants with Ser9Ser (SS) genotype did not learn during the early phase of the task (1-50 trials), whereas participants with SG genotype did so. During the late phase of the task (51-100 trials), both participants with SS and SG genotype exhibited significant learning. Learning rate was normal in patients with schizophrenia. These results suggest that the DRD3 variant containing glycine is associated with more efficient striatal habit learning in healthy controls and patients with schizophrenia. (c) 2005 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Yu "Winston"; Yang, Qian; Kang, Soyoung; Wall, Matthew A.; Liu, Jonathan T. C.
2018-04-01
Surface-enhanced Raman scattering (SERS) nanoparticles (NPs) are increasingly being engineered for a variety of disease-detection and treatment applications. For example, we have previously developed a fiber-optic Raman-encoded molecular imaging (REMI) system for spectral imaging of biomarker-targeted SERS NPs topically applied on tissue surfaces to identify residual tumors at surgical margins. Although accurate tumor detection was achieved, the commercial SERS NPs used in our previous studies lacked the signal strength to enable high-speed imaging with high pixel counts (large fields of view and/or high spatial resolution), which limits their use for certain time-constrained clinical applications. As a solution, we explored the use of surface-enhanced resonant Raman scattering (SERRS) NPs to enhance imaging speeds. The SERRS NPs were synthesized de novo, and then conjugated to HER2 antibodies to achieve high binding affinity, as validated by flow cytometry. Under identical tissue-staining and imaging conditions, the targeted SERRS NPs enabled reliable identification of HER2-overexpressed tumor xenografts with 50-fold-enhanced imaging speed compared with our standard targeted SERS NPs. This enables our REMI system to image tissue surfaces at a rate of 150 cm2 per minute at a spatial resolution of 0.5 mm.
Structural and kinetic studies on the Ser101Ala variant of choline oxidase: Catalysis by compromise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finnegan, S.; Orville, A.; Yuan, H.
2010-09-15
The oxidation of choline catalyzed by choline oxidase includes two reductive half-reactions where FAD is reduced by the alcohol substrate and by an aldehyde intermediate transiently formed in the reaction. Each reductive half-reaction is followed by an oxidative half-reaction where the reduced flavin is oxidized by oxygen. Here, we have used mutagenesis to prepare the Ser101Ala mutant of choline oxidase and have investigated the impact of this mutation on the structural and kinetic properties of the enzyme. The crystallographic structure of the Ser101Ala enzyme indicates that the only differences between the mutant and wild-type enzymes are the lack of amore » hydroxyl group on residue 101 and a more planar configuration of the flavin in the mutant enzyme. Kinetics established that replacement of Ser101 with alanine yields a mutant enzyme with increased efficiencies in the oxidative half-reactions and decreased efficiencies in the reductive half-reactions. This is accompanied by a significant decrease in the overall rate of turnover with choline. Thus, this mutation has revealed the importance of a specific residue for the optimization of the overall turnover of choline oxidase, which requires fine-tuning of four consecutive half-reactions for the conversion of an alcohol to a carboxylic acid.« less
NASA Astrophysics Data System (ADS)
Zhang, Haibao; Wang, Jingjing; Wang, Hua; Tian, Xingyou
2017-09-01
In this paper, we presented the fabrication of mace-like gold hollow hierarchical micro/nanostructures (HMNs) grafted on ZnO nanorods array by using an electrochemical deposition in chloroauric acid solution on gold layer pre-coated ZnO nanorods array. Different from general electrochemical deposition process, the catalytic etching to ZnO and electrodeposition of gold are co-existed in our case, which lead to an inner hollow structure and an outer gold shell. Due to the appropriate electrodeposition conditions, the outer gold shell was built of many wimble-like nanoparticles, and the hierarchical micro/nanostructures were thus formed. In addition, because of the deposition rate is decreased gradually away from the top of ZnO nanorods, the final structures show mace-like appearance. The surface-enhanced Raman scattering (SERS) effect of the as-prepared gold hollow HMNs was further studied by using rhodamine 6G as probe molecule. It is demonstrated that these structures show ultrahigh SERS activity, and the detecting low limit of R6G solution can be to 10-10 M on single mace-like gold HMNs, which is quite important for their potential application in SERS-based surface analysis and sensors.
Fujii, Toshihiro; Sakata, Asuka; Nishimura, Satoshi; Eto, Koji; Nagata, Shigekazu
2015-10-13
Phosphatidylserine (PtdSer) exposure on the surface of activated platelets requires the action of a phospholipid scramblase(s), and serves as a scaffold for the assembly of the tenase and prothrombinase complexes involved in blood coagulation. Here, we found that the activation of mouse platelets with thrombin/collagen or Ca(2+) ionophore at 20 °C induces PtdSer exposure without compromising plasma membrane integrity. Among five transmembrane protein 16 (TMEM16) members that support Ca(2+)-dependent phospholipid scrambling, TMEM16F was the only one that showed high expression in mouse platelets. Platelets from platelet-specific TMEM16F-deficient mice exhibited defects in activation-induced PtdSer exposure and microparticle shedding, although α-granule and dense granule release remained intact. The rate of tissue factor-induced thrombin generation by TMEM16F-deficient platelets was severely reduced, whereas thrombin-induced clot retraction was unaffected. The imaging of laser-induced thrombus formation in whole animals showed that PtdSer exposure on aggregated platelets was TMEM16F-dependent in vivo. The phenotypes of the platelet-specific TMEM16F-null mice resemble those of patients with Scott syndrome, a mild bleeding disorder, indicating that these mice may provide a useful model for human Scott syndrome.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-11-01
Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-01-01
Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806
Hansen, Heidi; Ben-David, Merav; McDonald, David B
2008-03-01
In noninvasive genetic sampling, when genotyping error rates are high and recapture rates are low, misidentification of individuals can lead to overestimation of population size. Thus, estimating genotyping errors is imperative. Nonetheless, conducting multiple polymerase chain reactions (PCRs) at multiple loci is time-consuming and costly. To address the controversy regarding the minimum number of PCRs required for obtaining a consensus genotype, we compared consumer-style the performance of two genotyping protocols (multiple-tubes and 'comparative method') in respect to genotyping success and error rates. Our results from 48 faecal samples of river otters (Lontra canadensis) collected in Wyoming in 2003, and from blood samples of five captive river otters amplified with four different primers, suggest that use of the comparative genotyping protocol can minimize the number of PCRs per locus. For all but five samples at one locus, the same consensus genotypes were reached with fewer PCRs and with reduced error rates with this protocol compared to the multiple-tubes method. This finding is reassuring because genotyping errors can occur at relatively high rates even in tissues such as blood and hair. In addition, we found that loci that amplify readily and yield consensus genotypes, may still exhibit high error rates (7-32%) and that amplification with different primers resulted in different types and rates of error. Thus, assigning a genotype based on a single PCR for several loci could result in misidentification of individuals. We recommend that programs designed to statistically assign consensus genotypes should be modified to allow the different treatment of heterozygotes and homozygotes intrinsic to the comparative method. © 2007 The Authors.
National suicide rates a century after Durkheim: do we know enough to estimate error?
Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W
2010-06-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.
Does Mckuer's Law Hold for Heart Rate Control via Biofeedback Display?
NASA Technical Reports Server (NTRS)
Courter, B. J.; Jex, H. R.
1984-01-01
Some persons can control their pulse rate with the aid of a biofeedback display. If the biofeedback display is modified to show the error between a command pulse-rate and the measured rate, a compensatory (error correcting) heart rate tracking control loop can be created. The dynamic response characteristics of this control loop when subjected to step and quasi-random disturbances were measured. The control loop includes a beat-to-beat cardiotachmeter differenced with a forcing function from a quasi-random input generator; the resulting error pulse-rate is displayed as feedback. The subject acts to null the displayed pulse-rate error, thereby closing a compensatory control loop. McRuer's Law should hold for this case. A few subjects already skilled in voluntary pulse-rate control were tested for heart-rate control response. Control-law properties are derived, such as: crossover frequency, stability margins, and closed-loop bandwidth. These are evaluated for a range of forcing functions and for step as well as random disturbances.
Online automatic tuning and control for fed-batch cultivation
van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.
2007-01-01
Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
Total Dose Effects on Error Rates in Linear Bipolar Systems
NASA Technical Reports Server (NTRS)
Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent
2007-01-01
The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a bit-error rate is analyzed. In this scheme, the inner code C sub 1 is an (n sub 1, m sub 1l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C sub 2 is a linear block code with symbols from the Galois field GF (2 sup l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m sub 1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates say 10 to the -1 to 10 to the -2 power.
Järnström, H; Saarela, K; Kalliokoski, P; Pasanen, A-L
2008-04-01
Emission rates of volatile organic compounds (VOCs) and ammonia measured from six PVC materials and four adhesives in the laboratory were compared to the emission rates measured on site from complete structures. Significantly higher specific emission rates (SERs) were generally measured from the complete structures than from individual materials. There were large differences between different PVC materials in their permeability for VOCs originating from the underlying structure. Glycol ethers and esters from adhesives used in the installation contributed to the emissions from the PVC covered structure. Emissions of 2-ethylhexanol and TXIB (2,2,4-trimethyl-1,3-pentanediol diisobutyrate) were common. High ammonia SERs were measured from single adhesives but their contribution to the emissions from the complete structure did not appear as obvious as for VOCs. The results indicate that three factors affected the VOC emissions from the PVC flooring on a structure: 1) the permeability of the PVC product for VOCs, 2) the VOC emission from the adhesive used, and 3) the VOC emission from the backside of the PVC product.
Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I
Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell,et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model derived predictions regarding the direction of change for error type proportions. The current findings argued for an alternative concept of the role of activation and decay in influencing types of serial-order sound errors. Rather than a slow activation decay rate (Dell, 1986), the results of the current study were more compatible with an alternative explanation of rapid activation decay or slow build-up of residual activation.
Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Wolff, David B.
2009-01-01
Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) spaced-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various times scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile spaced-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
Dong, Ying; Wang, Xiaohua; Ye, Xiaofeng; Wang, Guanhua; Li, Yan; Wang, Ningju; Yang, Yinxue; Chen, Zhiqiang; Yang, Wenjun
2015-10-01
Human p21 gene is characterized by a polymorphism at codon 31 leading to a Serine-to- Arginine (S/R), two different alleles of p21 Ser31Arg (rs 1801270) polymorphism have been shown to differ significantly in their transcriptional efficiency. More and more investigations are now being carried out to examine a possible link between the p21 Ser31Arg polymorphism and cancer. However, the results were inconclusive. Therefore, we carried out a systematic review and meta-analysis to examine whether this polymorphism is associated with gastrointestinal tract tumor in Asian. Seven studies (n = 2690), comprising 967 cases and 1723 controls in Asian population, were included in our study. The meta-analysis showed significant association between Ser-allele or Ser/Ser genotype and the susceptibility to gastrointestinal tract tumor in overall studies (Ser-allele vs. Arg-allele: OR = 1.17, 95% CI: 1.04-1.31; Ser/Ser vs. Arg/Arg: OR = 1.38, 95% CI: 1.09-1.75; Ser/Ser vs. Arg/Ser: OR = 1.27, 95% CI: 1.05-1.53; Ser/Ser vs. Arg/Ser + Arg/Arg: OR = 1.29, 95% CI: 1.07-1.54). Despite the limitations, the results of the present meta-analysis suggested that, in the p21 Ser31Arg polymorphism, Ser-allele and Ser/Ser genotype might be risk factors for gastrointestinal tract tumor in Asian populations. © The Author(s) 2014.
Zhong, D Y; Chu, H Y; Wang, M L; Ma, L; Shi, D N; Zhang, Z D
2012-09-26
The functional polymorphism Ser326Cys (rs1052133) in the human 8-oxoguanine DNA glycosylase (hOGG1) gene has been implicated in bladder cancer risk. However, reports of this association between the Ser326Cys polymorphism and bladder cancer risk are conflicting. In order to help clarify this relationship, we made a meta-analysis of seven case-control studies, summing 2521 cases and 2408 controls. We used odds ratios (ORs) with 95% confidence intervals (95%CIs) to assess the strength of the association. Overall, no significant association between the hOGG1 Ser326Cys polymorphism and bladder cancer risk was found for Cys/Cys vs Ser/Ser (OR = 1.10, 95%CI = 0.74-1.65), Ser/Cys vs Ser/Ser (OR = 1.07, 95%CI = 0.81-1.42), Cys/Cys + Ser/Cys vs Ser/Ser (OR = 1.08, 95%CI = 0.87-1.33), and Cys/Cys vs Ser/Cys + Ser/Ser (OR = 1.04, 95%CI = 0.65-1.69). Even when stratified by ethnicity, no significant association was observed. We concluded that the hOGG1 Ser326Cys polymorphism does not contribute to susceptibility to bladder cancer.
Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C
2017-02-15
Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
Yang, Haiyan; Duan, Guangcai; Zhu, Jingyuan; Zhang, Weidong; Xi, Yuanlin; Fan, Qingtang
2013-08-01
A total of 293 Shigella isolates were isolated from patients with diarrhoea in four villages of Henan, China. This study investigated the prevalence of the plasmid-mediated quinolone resistance (PMQR) genes qnrA, qnrB, qnrS, qepA and aac(6')-Ib-cr and compared the polymorphic quinolone resistance-determining regions (QRDRs) of gyrA, gyrB, parC and parE. Of the isolates, 292 were found to be resistant to nalidixic acid and pipemidic acid, whereas 77 were resistant to ciprofloxacin (resistance rate of 26.3%). Resistance of the Shigella isolates to ciprofloxacin significantly increased from 2001 to 2008 (P<0.05). A mutation in gyrA was present in 277 (94.5%) of the isolates and a mutation in parC was present in 19 (6.5%) of the isolates. Moreover, 168 (57.3%) of the isolates contained only the gyrA (Ser83Leu) mutation. In addition, 107 isolates had two gyrA point mutations (Ser83Leu and either Asp87Gly, Asp87Asn or Asp113Tyr) and 13 isolates had two gyrA point mutations (Ser83Leu and Asp87Gly or Gly214Ala) and one parC mutation (Ser80Ile). In addition, qepA and aac(6')-Ib-cr were present in 6 (2.05%) and 19 (6.48%) of the isolates, respectively. All but one of the PMQR-positive isolates with a ciprofloxacin minimum inhibitory concentration in the range 4-32μg/mL had a mutation in the QRDR. It is known that PMQR-positive Shigella isolates are common in China. This study found that there was a significant increase in mutation rates of the QRDR and the resistant rates to ciprofloxacin. Other mechanisms may be present in the isolates that also contribute to their resistance to ciprofloxacin. Copyright © 2013 Elsevier B.V. and the International Society of Chemotherapy. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbee, D; McCarthy, A; Galavis, P
Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# tomore » check errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policy and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The number of potential failure modes the PlanCheck script is currently capable of checking for in the following categories: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database was 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in reduction in error rates and improved efficiency. Future work includes: initiating full FMEA for planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.« less
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
Sultana, Shemaila; Solotchi, Mihai; Ramachandran, Aparna; Patel, Smita S
2017-11-03
Single-subunit RNA polymerases (RNAPs) are present in phage T7 and in mitochondria of all eukaryotes. This RNAP class plays important roles in biotechnology and cellular energy production, but we know little about its fidelity and error rates. Herein, we report the error rates of three single-subunit RNAPs measured from the catalytic efficiencies of correct and all possible incorrect nucleotides. The average error rates of T7 RNAP (2 × 10 -6 ), yeast mitochondrial Rpo41 (6 × 10 -6 ), and human mitochondrial POLRMT (RNA polymerase mitochondrial) (2 × 10 -5 ) indicate high accuracy/fidelity of RNA synthesis resembling those of replicative DNA polymerases. All three RNAPs exhibit a distinctly high propensity for GTP misincorporation opposite dT, predicting frequent A→G errors in RNA with rates of ∼10 -4 The A→C, G→A, A→U, C→U, G→U, U→C, and U→G errors mostly due to pyrimidine-purine mismatches were relatively frequent (10 -5 -10 -6 ), whereas C→G, U→A, G→C, and C→A errors from purine-purine and pyrimidine-pyrimidine mismatches were rare (10 -7 -10 -10 ). POLRMT also shows a high C→A error rate on 8-oxo-dG templates (∼10 -4 ). Strikingly, POLRMT shows a high mutagenic bypass rate, which is exacerbated by TEFM (transcription elongation factor mitochondrial). The lifetime of POLRMT on terminally mismatched elongation substrate is increased in the presence of TEFM, which allows POLRMT to efficiently bypass the error and continue with transcription. This investigation of nucleotide selectivity on normal and oxidatively damaged DNA by three single-subunit RNAPs provides the basic information to understand the error rates in mitochondria and, in the case of T7 RNAP, to assess the quality of in vitro transcribed RNAs. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
Tomada, I; Negrão, R; Almeida, H; Neves, D
2014-04-01
Long-term consumption of high-fat diets negatively interferes with metabolic status and promotes endothelial dysfunction and inflammation. In the cavernous tissue, these outcomes become conspicuous in the elderly and strongly affect penile erection, a vascular process highly dependent on local nitric oxide bioavailability. Although epidemiological data links erectile dysfunction to nutritional patterns, the underlying molecular mechanisms remain unclear. Therefore, we investigated the effects of long-term high-fat diet on endothelial nitric oxide synthase (eNOS)-Sirtuin-1 axis and Akt/eNOS phosphorylation in the cavernous tissue of Sprague-Dawley rats, and compared with energy-restricted animals. We demonstrated that high-fat diet intake led to a noteworthy decrease in eNOS phosphorylation at Ser1177 residue through the Akt pathway, which seems to be compensated by upregulation of phosphorylation at Ser615, but without an increment in nitric oxide production. These results are accompanied by an increase of systemic inflammatory markers and upregulation of the inducible NOS and of the deacetylase Sirtuin-1 in the cavernous tissue to levels apparently detrimental to cells and to metabolic homeostasis. Conversely, in long-term energy-restricted animals, the rate of phosphorylation of eNOS at Ser1177 diminished, but the activation of the enzyme increased through phosphorylation of eNOS at Ser615, resulting in an enhancement in nitric oxide bioavailability. Taken together, our results demonstrate that long-term nutritional conditions override the influence of age on the eNOS expression and activation in rat cavernous tissue.
Hu, Yaxi; Lu, Xiaonan
2016-05-01
An innovative "one-step" sensor conjugating molecularly imprinted polymers and surface enhanced Raman spectroscopic-active substrate (MIPs-SERS) was investigated for simultaneous extraction and determination of melamine in tap water and milk. This sensor was fabricated by integrating silver nanoparticles (AgNPs) with MIPs synthesized by bulk polymerization of melamine (template), methacrylic acid (functional monomer), ethylene glycol dimethacrylate (cross-linking agent), and 2,2'-azobisisobutyronitrile (initiator). Static and kinetic adsorption tests validated the specific affinity of MIPs-AgNPs to melamine and the rapid adsorption equilibration rate. Principal component analysis segregated SERS spectral features of tap water and milk samples with different melamine concentrations. Partial least squares regression models correlated melamine concentrations in tap water and skim milk with SERS spectral features. The limit of detection (LOD) and limit of quantification (LOQ) of melamine in tap water were determined as 0.0019 and 0.0064 mmol/L, while the LOD and LOQ were 0.0165 and 0.055 mmol/L for the determination of melamine in skim milk. However, this sensor is not ideal to quantify melamine in tap water and skim milk. By conjugating MIPs with SERS-active substrate (that is, AgNPs), reproducibility of SERS spectral features was increased, resulting in more accurate detection. The time required to determine melamine in tap water and milk were 6 and 25 min, respectively. The low LOD, LOQ, and rapid detection confirm the potential of applying this sensor for accurate and high-throughput detection of melamine in tap water and milk. © 2016 Institute of Food Technologists®
Belostotsky, Ruth; Ben-Shalom, Efrat; Rinat, Choni; Becker-Cohen, Rachel; Feinstein, Sofia; Zeligson, Sharon; Segel, Reeval; Elpeleg, Orly; Nassar, Suheir; Frishberg, Yaacov
2011-02-11
An uncharacterized multisystemic mitochondrial cytopathy was diagnosed in two infants from consanguineous Palestinian kindred living in a single village. The most significant clinical findings were tubulopathy (hyperuricemia, metabolic alkalosis), pulmonary hypertension, and progressive renal failure in infancy (HUPRA syndrome). Analysis of the consanguineous pedigree suggested that the causative mutation is in the nuclear DNA. By using genome-wide SNP homozygosity analysis, we identified a homozygous identity-by-descent region on chromosome 19 and detected the pathogenic mutation c.1169A>G (p.Asp390Gly) in SARS2, encoding the mitochondrial seryl-tRNA synthetase. The same homozygous mutation was later identified in a third infant with HUPRA syndrome. The carrier rate of this mutation among inhabitants of this Palestinian isolate was found to be 1:15. The mature enzyme catalyzes the ligation of serine to two mitochondrial tRNA isoacceptors: tRNA(Ser)(AGY) and tRNA(Ser)(UCN). Analysis of amino acylation of the two target tRNAs, extracted from immortalized peripheral lymphocytes derived from two patients, revealed that the p.Asp390Gly mutation significantly impacts on the acylation of tRNA(Ser)(AGY) but probably not that of tRNA(Ser)(UCN). Marked decrease in the expression of the nonacylated transcript and the complete absence of the acylated tRNA(Ser)(AGY) suggest that this mutation leads to significant loss of function and that the uncharged transcripts undergo degradation. Copyright © 2011 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Gilmartin-Thomas, Julia Fiona-Maree; Smith, Felicity; Wolfe, Rory; Jani, Yogini
2017-07-01
No published study has been specifically designed to compare medication administration errors between original medication packaging and multi-compartment compliance aids in care homes, using direct observation. Compare the effect of original medication packaging and multi-compartment compliance aids on medication administration accuracy. Prospective observational. Ten Greater London care homes. Nurses and carers administering medications. Between October 2014 and June 2015, a pharmacist researcher directly observed solid, orally administered medications in tablet or capsule form at ten purposively sampled care homes (five only used original medication packaging and five used both multi-compartment compliance aids and original medication packaging). The medication administration error rate was calculated as the number of observed doses administered (or omitted) in error according to medication administration records, compared to the opportunities for error (total number of observed doses plus omitted doses). Over 108.4h, 41 different staff (35 nurses, 6 carers) were observed to administer medications to 823 residents during 90 medication administration rounds. A total of 2452 medication doses were observed (1385 from original medication packaging, 1067 from multi-compartment compliance aids). One hundred and seventy eight medication administration errors were identified from 2493 opportunities for error (7.1% overall medication administration error rate). A greater medication administration error rate was seen for original medication packaging than multi-compartment compliance aids (9.3% and 3.1% respectively, risk ratio (RR)=3.9, 95% confidence interval (CI) 2.4 to 6.1, p<0.001). Similar differences existed when comparing medication administration error rates between original medication packaging (from original medication packaging-only care homes) and multi-compartment compliance aids (RR=2.3, 95%CI 1.1 to 4.9, p=0.03), and between original medication packaging and multi-compartment compliance aids within care homes that used a combination of both medication administration systems (RR=4.3, 95%CI 2.7 to 6.8, p<0.001). A significant difference in error rate was not observed between use of a single or combination medication administration system (p=0.44). The significant difference in, and high overall, medication administration error rate between original medication packaging and multi-compartment compliance aids supports the use of the latter in care homes, as well as local investigation of tablet and capsule impact on medication administration errors and staff training to prevent errors occurring. As a significant difference in error rate was not observed between use of a single or combination medication administration system, common practice of using both multi-compartment compliance aids (for most medications) and original packaging (for medications with stability issues) is supported. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calabrese, Edward J., E-mail: edwardc@schoolph.uma
This paper reveals that nearly 25 years after the used Russell's dose-rate data to support the adoption of the linear-no-threshold (LNT) dose response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, which was a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. •more » The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.« less
Fanning, Laura; Jones, Nick; Manias, Elizabeth
2016-04-01
The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.
Teamwork and clinical error reporting among nurses in Korean hospitals.
Hwang, Jee-In; Ahn, Jeonghoon
2015-03-01
To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean score of teamwork was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence intervals [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety. Copyright © 2015. Published by Elsevier B.V.
Grassie, Michael E; Sutherland, Cindy; Ulke-Lemée, Annegret; Chappellaz, Mona; Kiss, Enikö; Walsh, Michael P; MacDonald, Justin A
2012-10-19
Ca(2+) sensitization of smooth muscle contraction depends upon the activities of protein kinases, including Rho-associated kinase, that phosphorylate the myosin phosphatase targeting subunit (MYPT1) at Thr(697) and/or Thr(855) (rat sequence numbering) to inhibit phosphatase activity and increase contractile force. Both Thr residues are preceded by the sequence RRS, and it has been suggested that phosphorylation at Ser(696) prevents phosphorylation at Thr(697). However, the effects of Ser(854) and dual Ser(696)-Thr(697) and Ser(854)-Thr(855) phosphorylations on myosin phosphatase activity and contraction are unknown. We characterized a suite of MYPT1 proteins and phosphospecific antibodies for specificity toward monophosphorylation events (Ser(696), Thr(697), Ser(854), and Thr(855)), Ser phosphorylation events (Ser(696)/Ser(854)) and dual Ser/Thr phosphorylation events (Ser(696)-Thr(697) and Ser(854)-Thr(855)). Dual phosphorylation at Ser(696)-Thr(697) and Ser(854)-Thr(855) by cyclic nucleotide-dependent protein kinases had no effect on myosin phosphatase activity, whereas phosphorylation at Thr(697) and Thr(855) by Rho-associated kinase inhibited phosphatase activity and prevented phosphorylation by cAMP-dependent protein kinase at the neighboring Ser residues. Forskolin induced phosphorylation at Ser(696), Thr(697), Ser(854), and Thr(855) in rat caudal artery, whereas U46619 induced Thr(697) and Thr(855) phosphorylation and prevented the Ser phosphorylation induced by forskolin. Furthermore, pretreatment with forskolin prevented U46619-induced Thr phosphorylations. We conclude that cross-talk between cyclic nucleotide and RhoA signaling pathways dictates the phosphorylation status of the Ser(696)-Thr(697) and Ser(854)-Thr(855) inhibitory regions of MYPT1 in situ, thereby regulating the activity of myosin phosphatase and contraction.
Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions
ERIC Educational Resources Information Center
Yormaz, Seha; Sünbül, Önder
2017-01-01
This study aims to determine the Type I error rates and power of S[subscript 1] , S[subscript 2] indices and kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how copying groups are created in order to calculate how kappa statistics affect Type I error rates and power. In this study,…
Vairy, Stephanie; Corny, Jennifer; Jamoulle, Olivier; Levy, Arielle; Lebel, Denis; Carceller, Ana
2017-12-01
A high rate of prescription errors exists in pediatric teaching hospitals, especially during initial training. To determine the effectiveness of a two-hour lecture by a pharmacist on rates of prescription errors and quality of prescriptions. A two-hour lecture led by a pharmacist was provided to 11 junior pediatric residents (PGY-1) as part of a one-month immersion program. A control group included 15 residents without the intervention. We reviewed charts to analyze the first 50 prescriptions of each resident. Data were collected from 1300 prescriptions involving 451 patients, 550 in the intervention group and 750 in the control group. The rate of prescription errors in the intervention group was 9.6% compared to 11.3% in the control group (p=0.32), affecting 106 patients. Statistically significant differences between both groups were prescriptions with unwritten doses (p=0.01) and errors involving overdosing (p=0.04). We identified many errors as well as issues surrounding quality of prescriptions. We found a 10.6% prescription error rate. This two-hour lecture seems insufficient to reduce prescription errors among junior pediatric residents. This study highlights the most frequent types of errors and prescription quality issues that should be targeted by future educational interventions.
Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark
2012-11-01
To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.
Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O
2016-11-01
Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite benefit, such software is not without limitations, and transcription errors have been widely reported. Evaluate the frequency and nature of non-clinical transcription error using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time was collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0-5 sentences containing 0.09. These findings highlight the limitations of VR dictation software. While most error was deemed insignificant, there were occurrences of error with potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates and this should be taken into account by the reporting radiologist.
Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.
2015-12-01
Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.
Reducing the Familiarity of Conjunction Lures with Pictures
ERIC Educational Resources Information Center
Lloyd, Marianne E.
2013-01-01
Four experiments were conducted to test whether conjunction errors were reduced after pictorial encoding and whether the semantic overlap between study and conjunction items would impact error rates. Across 4 experiments, compound words studied with a single-picture had lower conjunction error rates during a recognition test than those words…
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...
Certification of ICI 1012 optical data storage tape
NASA Technical Reports Server (NTRS)
Howell, J. M.
1993-01-01
ICI has developed a unique and novel method of certifying a Terabyte optical tape. The tape quality is guaranteed as a statistical upper limit on the probability of uncorrectable errors. This is called the Corrected Byte Error Rate or CBER. We developed this probabilistic method because of two reasons why error rate cannot be measured directly. Firstly, written data is indelible, so one cannot employ write/read tests such as used for magnetic tape. Secondly, the anticipated error rates need impractically large samples to measure accurately. For example, a rate of 1E-12 implies only one byte in error per tape. The archivability of ICI 1012 Data Storage Tape in general is well characterized and understood. Nevertheless, customers expect performance guarantees to be supported by test results on individual tapes. In particular, they need assurance that data is retrievable after decades in archive. This paper describes the mathematical basis, measurement apparatus and applicability of the certification method.
The dependence of crowding on flanker complexity and target-flanker similarity
Bernard, Jean-Baptiste; Chung, Susana T.L.
2013-01-01
We examined the effects of the spatial complexity of flankers and target-flanker similarity on the performance of identifying crowded letters. On each trial, observers identified the middle character of random strings of three characters (“trigrams”) briefly presented at 10° below fixation. We tested the 26 lowercase letters of the Times-Roman and Courier fonts, a set of 79 characters (letters and non-letters) of the Times-Roman font, and the uppercase letters of two highly complex ornamental fonts, Edwardian and Aristocrat. Spatial complexity of characters was quantified by the length of the morphological skeleton of each character, and target-flanker similarity was defined based on a psychometric similarity matrix. Our results showed that (1) letter identification error rate increases with flanker complexity up to a certain value, beyond which error rate becomes independent of flanker complexity; (2) the increase of error rate is slower for high-complexity target letters; (3) error rate increases with target-flanker similarity; and (4) mislocation error rate increases with target-flanker similarity. These findings, combined with the current understanding of the faulty feature integration account of crowding, provide some constraints of how the feature integration process could cause perceptual errors. PMID:21730225
Total energy based flight control system
NASA Technical Reports Server (NTRS)
Lambregts, Antonius A. (Inventor)
1985-01-01
An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle, longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate distribution error and the elevator position command is developed (26) as a function of the energy distribution error. For any vertical flight path and speed mode the outerloop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A
2017-03-01
Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21 %. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) surgeon who also performed the definitive operation (operating surgeon group); and (2) referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference in at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon compared to referring endoscopist demonstrated statistically significant lower intraoperative localization error rate (1.2 vs. 9.0 %, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0 %, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8 %, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95 % CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95 % CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate. Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.
Au@Ag core/shell cuboids and dumbbells: Optical properties and SERS response
NASA Astrophysics Data System (ADS)
Khlebtsov, Boris N.; Liu, Zhonghui; Ye, Jian; Khlebtsov, Nikolai G.
2015-12-01
Recent studies have conclusively shown that the plasmonic properties of Au nanorods can be finely controlled by Ag coating. Here, we investigate the effect of asymmetric silver overgrowth of Au nanorods on their extinction and surface-enhanced Raman scattering (SERS) properties for colloids and self-assembled monolayers. Au@Ag core/shell cuboids and dumbbells were fabricated through a seed-mediated anisotropic growth process, in which AgCl was reduced by use of Au nanorods with narrow size and shape distribution as seeds. Upon tailoring the reaction rate, monodisperse cuboids and dumbbells were synthesized and further transformed into water-soluble powders of PEGylated nanoparticles. The extinction spectra of AuNRs were in excellent agreement with T-matrix simulations based on size and shape distributions of randomly oriented particles. The multimodal plasmonic properties of the Au@Ag cuboids and dumbbells were investigated by comparing the experimental extinction spectra with finite-difference time-domain (FDTD) simulations. The SERS efficiencies of the Au@Ag cuboids and dumbbells were compared in two options: (1) individual SERS enhancers in colloids and (2) self-assembled monolayers formed on a silicon wafer by drop casting of nanopowder solutions mixed with a drop of Raman reporters. By using 1,4-aminothiophenol Raman reporter molecules, the analytical SERS enhancement factor (AEF) of the colloidal dumbbells was determined to be 5.1×106, which is an order of magnitude higher than the AEF=4.0×105 for the cuboids. This difference can be explained by better fitting of the dumbbell plasmon resonance to the excitation laser wavelength. In contrast to the colloidal measurements, the AEF=5×107 of self-assembled cuboid monolayers was almost twofold higher than that for dumbbell monolayers, as determined with rhodamine 6G Raman reporters. According to TEM data and electromagnetic simulations, the better SERS response of the self-assembled cuboids is due to uniform packing and more efficient generation of electromagnetic hot spots, as compared to the dumbbell monolayers.
Mathes, Tim; Klaßen, Pauline; Pieper, Dawid
2017-11-28
Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, lead to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any addressmore » computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.« less
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, lead to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGEmore » is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.« less
Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel
2014-01-01
Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
Online Error Reporting for Managing Quality Control Within Radiology.
Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L
2016-06-01
Information technology systems within health care, such as picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists. Instead of direct communication of quality discrepancies found for an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from use of an online error reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month period study, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25 %. Within the category exam protocol, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44 %]), while imaging protocol errors were the highest subtype error for computed tomography modality (35 errors [18 %]). Positioning and incorrect accession had the highest errors in the exam technique and exam validation error category, respectively, for nearly all of the modalities. An error rate less than 1 % could signify a system with a very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error reporting system could also affect the reporting rate.
Rout, Saroj; Sonkusale, Sameer
2016-06-27
The ever increasing demand for bandwidth in wireless communication systems will inevitably lead to the extension of operating frequencies toward the terahertz (THz) band known as the 'THz gap'. Towards closing this gap, we present a multi-level amplitude shift keying (ASK) terahertz wireless communication system using terahertz spatial light modulators (SLM) instead of traditional voltage mode modulation, achieving higher spectral efficiency for high speed communication. The fundamental principle behind this higher efficiency is the conversion of a noisy voltage domain signal to a noise-free binary spatial pattern for effective amplitude modulation of a free-space THz carrier wave. Spatial modulation is achieved using an an active metamaterial array embedded with pseudomorphic high-electron mobility (pHEMT) designed in a consumer-grade galium-arsenide (GaAs) integrated circuit process which enables electronic control of its THz transmissivity. Each array is assembled as individually controllable tiles for transmissive terahertz spatial modulation. Using the experimental data from our metamaterial based modulator, we show that a four-level ASK digital communication system has two orders of magnitude improvement in symbol error rate (SER) for a degradation of 20 dB in transmit signal-to-noise ratio (SNR) using spatial light modulation compared to voltage controlled modulation.
High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link
NASA Technical Reports Server (NTRS)
Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli
2016-01-01
We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range rate tests were conducted in two configurations. In the communication configuration with 622 data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10 (exp -15) with 10 second averaging time. Ranging and range-rate as a function of Bit Error Rate of the communication link is reported. They are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10 (exp -15) with 10 second averaging time. We identified the major noise sources in the current system as the transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance for both operating modes.
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
A long-term follow-up evaluation of electronic health record prescribing safety
Abramson, Erika L; Malhotra, Sameer; Osorio, S Nena; Edwards, Alison; Cheriff, Adam; Cole, Curtis; Kaushal, Rainu
2013-01-01
Objective To be eligible for incentives through the Electronic Health Record (EHR) Incentive Program, many providers using older or locally developed EHRs will be transitioning to new, commercial EHRs. We previously evaluated prescribing errors made by providers in the first year following transition from a locally developed EHR with minimal prescribing clinical decision support (CDS) to a commercial EHR with robust CDS. Following system refinements, we conducted this study to assess the rates and types of errors 2 years after transition and determine the evolution of errors. Materials and methods We conducted a mixed methods cross-sectional case study of 16 physicians at an academic-affiliated ambulatory clinic from April to June 2010. We utilized standardized prescription and chart review to identify errors. Fourteen providers also participated in interviews. Results We analyzed 1905 prescriptions. The overall prescribing error rate was 3.8 per 100 prescriptions (95% CI 2.8 to 5.1). Error rates were significantly lower 2 years after transition (p<0.001 compared to pre-implementation, 12 weeks and 1 year after transition). Rates of near misses remained unchanged. Providers positively appreciated most system refinements, particularly reduced alert firing. Discussion Our study suggests that over time and with system refinements, use of a commercial EHR with advanced CDS can lead to low prescribing error rates, although more serious errors may require targeted interventions to eliminate them. Reducing alert firing frequency appears particularly important. Our results provide support for federal efforts promoting meaningful use of EHRs. Conclusions Ongoing error monitoring can allow CDS to be optimally tailored and help achieve maximal safety benefits. Clinical Trials Registration ClinicalTrials.gov, Identifier: NCT00603070. PMID:23578816
Parkin is activated by PINK1-dependent phosphorylation of ubiquitin at Ser65
Kazlauskaite, Agne; Kondapalli, Chandana; Gourlay, Robert; Campbell, David G.; Ritorto, Maria Stella; Hofmann, Kay; Alessi, Dario R.; Knebel, Axel; Trost, Matthias; Muqit, Miratul M. K.
2014-01-01
We have previously reported that the Parkinson's disease-associated kinase PINK1 (PTEN-induced putative kinase 1) is activated by mitochondrial depolarization and stimulates the Parkin E3 ligase by phosphorylating Ser65 within its Ubl (ubiquitin-like) domain. Using phosphoproteomic analysis, we identified a novel ubiquitin phosphopeptide phosphorylated at Ser65 that was enriched 14-fold in HEK (human embryonic kidney)-293 cells overexpressing wild-type PINK1 stimulated with the mitochondrial uncoupling agent CCCP (carbonyl cyanide m-chlorophenylhydrazone), to activate PINK1, compared with cells expressing kinase-inactive PINK1. Ser65 in ubiquitin lies in a similar motif to Ser65 in the Ubl domain of Parkin. Remarkably, PINK1 directly phosphorylates Ser65 of ubiquitin in vitro. We undertook a series of experiments that provide striking evidence that Ser65-phosphorylated ubiquitin (ubiquitinPhospho−Ser65) functions as a critical activator of Parkin. First, we demonstrate that a fragment of Parkin lacking the Ubl domain encompassing Ser65 (ΔUbl-Parkin) is robustly activated by ubiquitinPhospho−Ser65, but not by non-phosphorylated ubiquitin. Secondly, we find that the isolated Parkin Ubl domain phosphorylated at Ser65 (UblPhospho−Ser65) can also activate ΔUbl-Parkin similarly to ubiquitinPhospho−Ser65. Thirdly, we establish that ubiquitinPhospho−Ser65, but not non-phosphorylated ubiquitin or UblPhospho−Ser65, activates full-length wild-type Parkin as well as the non-phosphorylatable S65A Parkin mutant. Fourthly, we provide evidence that optimal activation of full-length Parkin E3 ligase is dependent on PINK1-mediated phosphorylation of both Parkin at Ser65 and ubiquitin at Ser65, since only mutation of both proteins at Ser65 completely abolishes Parkin activation. In conclusion, the findings of the present study reveal that PINK1 controls Parkin E3 ligase activity not only by phosphorylating Parkin at Ser65, but also by phosphorylating ubiquitin at Ser65. We propose that phosphorylation of Parkin at Ser65 serves to prime the E3 ligase enzyme for activation by ubiquitinPhospho−Ser65, suggesting that small molecules that mimic ubiquitinPhospho−Ser65 could hold promise as novel therapies for Parkinson's disease. PMID:24660806
Prevalence and cost of hospital medical errors in the general and elderly United States populations.
Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S
2013-12-01
The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.
The effectiveness of risk management program on pediatric nurses' medication error.
Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat
2013-09-01
Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings about damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months. Nurses of the control hospital did the hospital routine schedule. A pre- and post-test was performed to measure the frequency of the medication error events. SPSS software, t-test, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower (P < 0.001) and the error-reporting rate was higher (P < 0.007) compared to before the intervention and also in comparison to the nurses of the control hospital. Based on the results of this study and taking into account the high-risk nature of the medical environment, applying the quality-control programs such as risk management can effectively prevent the occurrence of the hospital undesirable events. Nursing mangers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.
Automatic Strain-Rate Controller,
1976-12-01
D—AO37 9~e2 ROME AIR DEVELOPMENT CENTER GRIFFISS AFB N 1’ FIG 13/ 6AUTOMATIC STRAIN—RATE CONTROLLER, (U) DEC 76 R L HUNTSINGER. J A ADAMSK I...goes to zero. CONTROLLER, Leeds and Northrup Series 80 CAT with proportional band , rate , reset, and approach controls . Input from deviation output...8) through ( 16) . (8) Move the set-point slowl y up to 3 or 4. (9) If the recorder po inter hunts , adjust the func t ion controls on tine Ser
Rate, causes and reporting of medication errors in Jordan: nurses' perspectives.
Mrayyan, Majd T; Shishani, Kawkab; Al-Faouri, Ibrahim
2007-09-01
The aim of the study was to describe Jordanian nurses' perceptions about various issues related to medication errors. This is the first nursing study about medication errors in Jordan. This was a descriptive study. A convenient sample of 799 nurses from 24 hospitals was obtained. Descriptive and inferential statistics were used for data analysis. Over the course of their nursing career, the average number of recalled committed medication errors per nurse was 2.2. Using incident reports, the rate of medication errors reported to nurse managers was 42.1%. Medication errors occurred mainly when medication labels/packaging were of poor quality or damaged. Nurses failed to report medication errors because they were afraid that they might be subjected to disciplinary actions or even lose their jobs. In the stepwise regression model, gender was the only predictor of medication errors in Jordan. Strategies to reduce or eliminate medication errors are required.
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
Error analysis of high-rate GNSS precise point positioning for seismic wave measurement
NASA Astrophysics Data System (ADS)
Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan
2017-06-01
High-rate GNSS precise point positioning (PPP) has been playing a more and more important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has been reported recently with experiments to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component to measure seismic motion, which is several times better than the conventional kinematic PPP practice. To fully understand the mechanism of mystified excellent performance of high-rate PPP within a short period of time, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations within a short period of time. The theoretical analysis has clearly indicated that the high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and simulated results are fully consistent with and thus have unambiguously confirmed the reported high precision of high-rate PPP, which has been further affirmed here by the real data experiments, indicating that high-rate PPP can indeed achieve the millimeter level of precision in the horizontal components and the sub-centimeter level of precision in the vertical component to measure motion within a short period of time. The simulation results have clearly shown that the random noise of carrier phases and higher order ionospheric errors are two major factors to affect the precision of high-rate PPP within a short period of time. The experiments with real data have also indicated that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components, if the geometry of satellites is rather poor with a large DOP value.
Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael
2002-01-01
Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214
Bakker, Marjan; Wicherts, Jelte M
2014-09-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Durand, Casey P
2013-01-01
Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
An Evaluation of Commercial Pedometers for Monitoring Slow Walking Speed Populations.
Beevi, Femina H A; Miranda, Jorge; Pedersen, Christian F; Wagner, Stefan
2016-05-01
Pedometers are considered desirable devices for monitoring physical activity. Two population groups of interest include patients having undergone surgery in the lower extremities or who are otherwise weakened through disease, medical treatment, or surgery procedures, as well as the slow walking senior population. For these population groups, pedometers must be able to perform reliably and accurately at slow walking speeds. The objectives of this study were to evaluate the step count accuracy of three commercially available pedometers, the Yamax (Tokyo, Japan) Digi-Walker(®) SW-200 (YM), the Omron (Kyoto, Japan) HJ-720 (OM), and the Fitbit (San Francisco, CA) Zip (FB), at slow walking speeds, specifically at 1, 2, and 3 km/h, and to raise awareness of the necessity of focusing research on step-counting devices and algorithms for slow walking populations. Fourteen participants 29.93 ±4.93 years of age were requested to walk on a treadmill at the three specified speeds, in four trials of 100 steps each. The devices were worn by the participants on the waist belt. The pedometer counts were recorded, and the error percentage was calculated. The error rate of all three evaluated pedometers decreased with the increase of speed: at 1 km/h the error rates varied from 87.11% (YM) to 95.98% (FB), at 2 km/h the error rates varied from 17.27% (FB) to 46.46% (YM), and at 3 km/h the error rates varied from 22.46% (YM) to a slight overcount of 0.70% (FB). It was observed that all the evaluated devices have high error rates at 1 km/h and mixed error rates at 2 km/h, and at 3 km/h the error rates are the smallest of the three assessed speeds, with the OM and the FB having a slight overcount. These results show that research on pedometers' software and hardware should focus more on accurate step detection at slow walking speeds.
Operando plasmon-enhanced Raman spectroscopy in silicon anodes for Li-ion battery
NASA Astrophysics Data System (ADS)
Miroshnikov, Yana; Zitoun, David
2017-11-01
Silicon, an attractive candidate for high-energy lithium-ion batteries (LIBs), displays an alloying mechanism with lithium and presents several unique characteristics which make it an interesting scientific topic and also a technological challenge. In situ local probe measurements have been recently developed to understand the lithiation process and propose an effective remedy to the failure mechanisms. One of the most specific techniques, which is able to follow the phase changes in poorly crystallized electrode materials, makes use of Raman spectroscopy within the battery, i.e., in operando mode. Such an approach has been successful but is still limited by the rather signal-to-noise ratio of the spectroscopy. Herein, the operando Raman signal from the silicon anodes is enhanced by plasmonic nanoparticles following the known surface-enhanced Raman spectroscopy (SERS). Coinage metals (Ag and Au) display a surface plasmon resonance in the visible and allow the SERS effect to take place. We have found that the as-prepared materials reach high specific capacities over 1000 mAh/g with stability over more than 1000 cycles at 1C rate and can be suitable to perform as anodes in LIB. Moreover, the incorporation of coinage metals enables SERS to take place specifically on the surface of silicon. Consequently, by using a specially designed Raman cell, it is possible to follow the processes in a silicon-coinage metal-based battery trough operando SERS measurements.
Lu, Gang; Sun, Haipeng; She, Pengxiang; Youn, Ji-Youn; Warburton, Sarah; Ping, Peipei; Vondriska, Thomas M; Cai, Hua; Lynch, Christopher J; Wang, Yibin
2009-06-01
The branched-chain amino acids (BCAA) are essential amino acids required for protein homeostasis, energy balance, and nutrient signaling. In individuals with deficiencies in BCAA, these amino acids can be preserved through inhibition of the branched-chain-alpha-ketoacid dehydrogenase (BCKD) complex, the rate-limiting step in their metabolism. BCKD is inhibited by phosphorylation of its E1alpha subunit at Ser293, which is catalyzed by BCKD kinase. During BCAA excess, phosphorylated Ser293 (pSer293) becomes dephosphorylated through the concerted inhibition of BCKD kinase and the activity of an unknown intramitochondrial phosphatase. Using unbiased, proteomic approaches, we have found that a mitochondrial-targeted phosphatase, PP2Cm, specifically binds the BCKD complex and induces dephosphorylation of Ser293 in the presence of BCKD substrates. Loss of PP2Cm completely abolished substrate-induced E1alpha dephosphorylation both in vitro and in vivo. PP2Cm-deficient mice exhibited BCAA catabolic defects and a metabolic phenotype similar to the intermittent or intermediate types of human maple syrup urine disease (MSUD), a hereditary disorder caused by defects in BCKD activity. These results indicate that PP2Cm is the endogenous BCKD phosphatase required for nutrient-mediated regulation of BCKD activity and suggest that defects in PP2Cm may be responsible for a subset of human MSUD.
SERS and DFT study of water on metal cathodes of silver, gold and platinum nanoparticles.
Li, Jian-Feng; Huang, Yi-Fan; Duan, Sai; Pang, Ran; Wu, De-Yin; Ren, Bin; Xu, Xin; Tian, Zhong-Qun
2010-03-14
The observed surface-enhanced Raman scattering (SERS) spectra of water adsorbed on metal film electrodes of silver, gold, and platinum nanoparticles were used to infer interfacial water structures on the basis of the change of the electrochemical vibrational Stark tuning rates and the relative Raman intensity of the stretching and bending modes. To explain the increase of the relative Raman intensity ratio of the bending and stretching vibrations at the very negative potential region, density functional theory calculations provide the conceptual model. The specific enhancement effect for the bending mode was closely associated with the water adsorption structure in a hydrogen bonded configuration through its H-end binding to surface sites with large polarizability due to strong cathodic polarization. The present results allow us to propose that interfacial water molecules exist on these metal cathodes with different hydrogen bonding interactions, i.e., the HO-HH-Pt dihydrogen bond for platinum and the HO-HAg(Au) for silver and gold. This dihydrogen bonding configuration on platinum is further supported from observation of the Pt-H stretching band. Furthermore, the influences of the pH effect on SERS intensity and vibrational Stark effect on the gold electrode indicate that the O-H stretching SERS signals are enhanced in the alkaline solutions because of the hydrated hydroxide surface species adsorbed on the gold cathode.
NASA Astrophysics Data System (ADS)
Mikac, L.; Jurkin, T.; Štefanić, G.; Ivanda, Mile; Gotić, Marijan
2017-09-01
The silver nanoparticles (AgNPs) were synthesized upon γ-irradiation of AgNO3 precursor suspensions in the presence of diethylaminoethyl-dextran hydrochloride (DEAE-dextran) cationic polymer as a stabilizer. The dose rate of γ-irradiation was 32 kGy h-1, and absorbed doses were 30 and 60 kGy. The γ-irradiation of the precursor suspension at acidic or neutral pH conditions produced predominantly the silver(I) chloride (AgCl) particles, because of the poor solubility of AgCl already present in the precursor suspension. The origin of AgCl in the precursor suspension was due to the presence of chloride ions in DEAE-dextran hydrochloride polymer. The addition of ammonia to the precursor suspension dissolved the AgCl precipitate, and the γ-irradiation of such colourless suspension at alkali pH produced a stable aqueous suspension with rather uniform spherical AgNPs of approximately 30 nm in size. The size of AgNPs was controlled by varying the AgNO3/DEAE-dextran concentration in the suspensions. The surface-enhanced Raman scattering (SERS) activities of synthesized AgNPs were examined using organic molecules rhodamine 6G, pyridine and 4-mercaptobenzoic acid (4-MBA). The NaBH4 was used as SERS aggregation agent. The SERS results have shown that in the presence of synthesized AgNPs, it was possible to detect low concentration of tested compounds.
Impact of Measurement Error on Synchrophasor Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include themore » possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.« less
Curilem Gatica, Cristian; Almagià Flores, Atilio; Rodríguez Rodríguez, Fernando; Yuing Farias, Tuillang; Berral de la Rosa, Francisco; Martínez Salazar, Cristian; Jorquera Aguilera, Carlos; Bahamondes Ávila, Carlos; Soís Urra, Patricio; Cristi Montero, Carlos; Bruneau Chávez, José; Pinto Aguilante, Juan; Niedmann Brunet, Luis
2016-06-30
El índice de masa corporal (IMC) otorga uno de los índices más usados para determinar el estado nutricional de la población a nivel mundial, donde a pesar de existir recomendaciones claras y definidas para su interpretación como el sexo, edad, raza, entre otros, normalmente se estandariza su clasificación, independiente de las variables, aumentando el error en el resultado y en la clasificación del estado nutricional.El uso de la composición corporal a través de la antropometría entrega mayor información que el IMC, siendo la masa grasa y la masa muscular los principales resultados útiles.Este artículo presenta una revisión de las ecuaciones existentes y propone aquellas más simples y con menor error de estimación para ser usadas como una herramienta que reemplace o complemente al IMC, favoreciendo una mejor comprensión e interpretación del estado nutricional y nivelde actividad física en niños y adolescentes.
Claspin Promotes Normal Replication Fork Rates in Human Cells
Helleday, Thomas; Caldecott, Keith W.
2008-01-01
The S phase-specific adaptor protein Claspin mediates the checkpoint response to replication stress by facilitating phosphorylation of Chk1 by ataxia-telangiectasia and Rad3-related (ATR). Evidence suggests that these components of the ATR pathway also play a critical role during physiological S phase. Chk1 is required for high rates of global replication fork progression, and Claspin interacts with the replication machinery and might therefore monitor normal DNA replication. Here, we have used DNA fiber labeling to investigate, for the first time, whether human Claspin is required for high rates of replication fork progression during normal S phase. We report that Claspin-depleted HeLa and HCT116 cells display levels of replication fork slowing similar to those observed in Chk1-depleted cells. This was also true in primary human 1BR3 fibroblasts, albeit to a lesser extent, suggesting that Claspin is a universal requirement for high replication fork rates in human cells. Interestingly, Claspin-depleted cells retained significant levels of Chk1 phosphorylation at both Ser317 and Ser345, raising the possibility that Claspin function during normal fork progression may extend beyond facilitating phosphorylation of either individual residue. Consistent with this possibility, depletion of Chk1 and Claspin together doubled the percentage of very slow forks, compared with depletion of either protein alone. PMID:18353973
Refractive errors in medical students in Singapore.
Woo, W W; Lim, K A; Yang, H; Lim, X Y; Liew, F; Lee, Y S; Saw, S M
2004-10-01
Refractive errors are becoming more of a problem in many societies, with prevalence rates of myopia in many Asian urban countries reaching epidemic proportions. This study aims to determine the prevalence rates of various refractive errors in Singapore medical students. 157 second year medical students (aged 19-23 years) in Singapore were examined. Refractive error measurements were determined using a stand-alone autorefractor. Additional demographical data was obtained via questionnaires filled in by the students. The prevalence rate of myopia in Singapore medical students was 89.8 percent (Spherical equivalence (SE) at least -0.50 D). Hyperopia was present in 1.3 percent (SE more than +0.50 D) of the participants and the overall astigmatism prevalence rate was 82.2 percent (Cylinder at least 0.50 D). Prevalence rates of myopia and astigmatism in second year Singapore medical students are one of the highest in the world.
Association of hOGG1 Ser326Cys polymorphism with gastric cancer risk: a meta-analysis.
Niu, Yanyang; Li, Fang; Tang, Bo; Shi, Yan; Yu, Peiwu
2012-06-01
Studies investigating the association between human 8-oxoguanine glycosylase 1(hOGG1) Ser326Cys polymorphism and gastric cancer (GC) risk have reported conflicting results. We performed a meta-analysis of published case-control studies to better compare results between studies. 11 eligible studies with 2,180 GC cases and 3,985 controls were selected. There were 5 studies involving Caucasians and 5 studies involving Asians. The combined result based on all studies did not show significant difference in any genetics models. Ser/Cys + Cys/Cys versus Ser/Ser (OR = 0.91, 95% CI 0.81-1.03), Cys/Cys versus Ser/Cys + Ser/Ser (OR = 1.07, 95% CI 0.80-1.44), Ser/Cys versus Ser/Ser (OR = 0.91, 95% CI 0.80-1.03), Sys/Cys versus Ser/Cys (OR = 1.10, 95% CI 0.83-1.47), Cys/Cys versus Ser/Ser (OR = 0.99, 95% CI 0.74-1.34), Cys versus Ser (OR = 1.01, 95% CI 0.88-1.17).When stratifying for ethnicity, there was still no significant association found between hOGG1 Ser326Cys polymorphism and GC risk. Funnel plot and Egger’s test showed some evidence of publication bias on the basis of all studies. Two studies were the main reason because their samples were too small. However, the result of sensitivity analysis suggested that the influence of these two studies and one mixed population study on the pooled OR was weak. Our result could explain the association between hOGG1 Ser326Cys polymorphism and GC risk. In conclusion, we did not found the evidence that the Cys allele at codon 326 of hOGG1 could increase GC risk in our analysis.
Social deviance activates the brain's error-monitoring system.
Kim, Bo-Rin; Liss, Alison; Rao, Monica; Singer, Zachary; Compton, Rebecca J
2012-03-01
Social psychologists have long noted the tendency for human behavior to conform to social group norms. This study examined whether feedback indicating that participants had deviated from group norms would elicit a neural signal previously shown to be elicited by errors and monetary losses. While electroencephalograms were recorded, participants (N = 30) rated the attractiveness of 120 faces and received feedback giving the purported average rating made by a group of peers. The feedback was manipulated so that group ratings either were the same as a participant's rating or deviated by 1, 2, or 3 points. Feedback indicating deviance from the group norm elicited a feedback-related negativity, a brainwave signal known to be elicited by objective performance errors and losses. The results imply that the brain treats deviance from social norms as an error.
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In themore » absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.« less
Automatic learning rate adjustment for self-supervising autonomous robot control
NASA Technical Reports Server (NTRS)
Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.
1992-01-01
Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local liner maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate until the system stabilizes at a minimum error and learning rate. This abolishes the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure such as a broken joint, or an environmental change such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and the fault tolerance of the system.
Alvarez-Martin, Pablo; O'Connell Motherway, Mary; Turroni, Francesca; Foroni, Elena; Ventura, Marco; van Sinderen, Douwe
2012-10-01
This work reports on the identification and molecular characterization of a two-component regulatory system (2CRS), encoded by serRK, which is believed to control the expression of the ser(2003) locus in Bifidobacterium breve UCC2003. The ser(2003) locus consists of two genes, Bbr_1319 (sagA) and Bbr_1320 (serU), which are predicted to encode a hypothetical membrane-associated protein and a serpin-like protein, respectively. The response regulator SerR was shown to bind to the promoter region of ser(2003), and the probable recognition sequence of SerR was determined by a combinatorial approach of in vitro site-directed mutagenesis coupled to transcriptional fusion and electrophoretic mobility shift assays (EMSAs). The importance of the serRK 2CRS in the response of B. breve to protease-mediated induction was confirmed by generating a B. breve serR insertion mutant, which was shown to exhibit altered ser(2003) transcriptional induction patterns compared to the parent strain, UCC2003. Interestingly, the analysis of a B. breve serU mutant revealed that the SerRK signaling pathway appears to include a SerU-dependent autoregulatory loop.
Alvarez-Martin, Pablo; O'Connell Motherway, Mary; Turroni, Francesca; Foroni, Elena; Ventura, Marco
2012-01-01
This work reports on the identification and molecular characterization of a two-component regulatory system (2CRS), encoded by serRK, which is believed to control the expression of the ser2003 locus in Bifidobacterium breve UCC2003. The ser2003 locus consists of two genes, Bbr_1319 (sagA) and Bbr_1320 (serU), which are predicted to encode a hypothetical membrane-associated protein and a serpin-like protein, respectively. The response regulator SerR was shown to bind to the promoter region of ser2003, and the probable recognition sequence of SerR was determined by a combinatorial approach of in vitro site-directed mutagenesis coupled to transcriptional fusion and electrophoretic mobility shift assays (EMSAs). The importance of the serRK 2CRS in the response of B. breve to protease-mediated induction was confirmed by generating a B. breve serR insertion mutant, which was shown to exhibit altered ser2003 transcriptional induction patterns compared to the parent strain, UCC2003. Interestingly, the analysis of a B. breve serU mutant revealed that the SerRK signaling pathway appears to include a SerU-dependent autoregulatory loop. PMID:22843530
An Automated Method to Generate e-Learning Quizzes from Online Language Learner Writing
ERIC Educational Resources Information Center
Flanagan, Brendan; Yin, Chengjiu; Hirokawa, Sachio; Hashimoto, Kiyota; Tabata, Yoshiyuki
2013-01-01
In this paper, the entries of Lang-8, which is a Social Networking Site (SNS) site for learning and practicing foreign languages, were analyzed and found to contain similar rates of errors for most error categories reported in previous research. These similarly rated errors were then processed using an algorithm to determine corrections suggested…
Code of Federal Regulations, 2012 CFR
2012-10-01
..., financial records, and automated data systems; (ii) The data are free from computational errors and are... records, financial records, and automated data systems; (ii) The data are free from computational errors... records, and automated data systems; (ii) The data are free from computational errors and are internally...
NASA Astrophysics Data System (ADS)
Han, Sungyub; Locke, Andrea K.; Oaks, Luke A.; Cheng, Yi-Shing Lisa; Coté, Gerard L.
2018-02-01
It is estimated that the number of new cases of oral cancers worldwide is 529,000 and more than 300,000 deaths each year. The five-year survival rate remains about 50%, and the low survival rate is believed to be due to delayed detection. The primary detection method is through a comprehensive clinical examination by a dentist followed by a biopsy of suspicious lesions. Systematic review and meta-analysis have revealed that clinical examination alone may not be sufficient to cause the clinician to perform a biopsy or refer for biopsy for early detection of OSCC. Therefore, a non-invasive, point-of-Care (POC) detection with high sensitivity and specificity for early detection would be urgently needed, and using salivary biomarkers would be an ideal technology for it. S100 calcium binding protein P (S100P) mRNA presenting in saliva is a potential biomarker for detection of oral cancer. Further, surface enhanced Raman spectroscopy (SERS) has been shown to be a promising POC diagnostic technique. In this research, a SERS-based assay using oligonucleotide strains was developed for the sensitive and rapid detection of S100P. Gold nanoparticles (AuNPs) as a SERS substrate were used for the conjugation with one of two unique 24 base pair oligonucleotides, referred to as left and right DNA probes. A Raman reporter molecule, malachite green isothiocyanate (MGITC), was bound to left-probe-conjugated AuNPs. UV-vis spectroscopy was employed to monitor the conjugation of DNA probes to AuNPs. The hybridization of S100P target to DNA-conjugated AuNPs in sandwich-assay format was confirmed by Raman spectroscopy and shown to yield and R2 of 0.917 across the range of 0-200 nM and a limit of detection of 3 nM.
Comparative architecture of silks, fibrous proteins and their encoding genes in insects and spiders.
Craig, Catherine L; Riekel, Christian
2002-12-01
The known silk fibroins and fibrous glues are thought to be encoded by members of the same gene family. All silk fibroins sequenced to date contain regions of long-range order (crystalline regions) and/or short-range order (non-crystalline regions). All of the sequenced fibroin silks (Flag or silk from flagelliform gland in spiders; Fhc or heavy chain fibroin silks produced by Lepidoptera larvae) are made up of hierarchically organized, repetitive arrays of amino acids. Fhc fibroin genes are characterized by a similar molecular genetic architecture of two exons and one intron, but the organization and size of these units differs. The Flag, Ser (sericin gene) and BR (Balbiani ring genes; both fibrous proteins) genes are made up of multiple exons and introns. Sequences coding for crystalline and non-crystalline protein domains are integrated in the repetitive regions of Fhc and MA exons, but not in the protein glues Ser1 and BR-1. Genetic 'hot-spots' promote recombination errors in Fhc, MA, and Flag. Codon bias, structural constraint, point mutations, and shortened coding arrays may be alternative means of stabilizing precursor mRNA transcripts. Differential regulation of gene expression and selective splicing of the mRNA transcript may allow rapid adaptation of silk functional properties to different physical environments.
d-Glyceric aciduria does not cause nonketotic hyperglycinemia: A historic co-occurrence.
Swanson, Michael A; Garcia, Stephanie M; Spector, Elaine; Kronquist, Kathryn; Creadon-Swindell, Geralyn; Walter, Melanie; Christensen, Ernst; Van Hove, Johan L K; Sass, Jörn Oliver
2017-06-01
Historically, d-glyceric aciduria was thought to cause an uncharacterized blockage to the glycine cleavage enzyme system (GCS) causing nonketotic hyperglycinemia (NKH) as a secondary phenomenon. This inference was reached based on the clinical and biochemical results from the first d-glyceric aciduria patient reported in 1974. Along with elevated glyceric acid excretion, this patient exhibited severe neurological symptoms of myoclonic epilepsy and absent development, and had elevated glycine levels and decreased glycine cleavage system enzyme activity. Mutations in the GLYCTK gene (encoding d-glycerate kinase) causing glyceric aciduria were previously noted. Since glycine changes were not observed in almost all of the subsequently reported cases of d-glyceric aciduria, this theory of NKH as a secondary syndrome of d-glyceric aciduria was revisited in this work. We showed that this historic patient harbored a homozygous missense mutation in AMT c.350C>T, p.Ser117Leu, and enzymatic assay of the expressed mutation confirmed the pathogeneity of the p.Ser117Leu mutation. We conclude that the original d-glyceric aciduria patient also had classic NKH and that this co-occurrence of two inborn errors of metabolism explains the original presentation. We conclude that no evidence remains that d-glyceric aciduria would cause NKH. Copyright © 2017 Elsevier Inc. All rights reserved.
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10−9 at the expense of a rate of read losses just in the order of 10−6. PMID:26492348
DNA Barcoding through Quaternary LDPC Codes.
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10(-9) at the expense of a rate of read losses just in the order of 10(-6).
Human operator response to error-likely situations in complex engineering systems
NASA Technical Reports Server (NTRS)
Morris, Nancy M.; Rouse, William B.
1988-01-01
The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' response to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
Mapping DNA polymerase errors by single-molecule sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David F.; Lu, Jenny; Chang, Seungwoo
Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replicationmore » product is tagged with a unique nucleotide sequence before amplification. Here, this allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.« less
Mapping DNA polymerase errors by single-molecule sequencing
Lee, David F.; Lu, Jenny; Chang, Seungwoo; ...
2016-05-16
Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replicationmore » product is tagged with a unique nucleotide sequence before amplification. Here, this allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.« less
Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.
Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean
2017-06-01
Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were due to lack of supervision or of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.
The assessment of cognitive errors using an observer-rated method.
Drapeau, Martin
2014-01-01
Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732
Advanced surface-enhanced Raman gene probe systems and methods thereof
Vo-Dinh, Tuan
2001-01-01
The subject invention is a series of methods and systems for using the Surface-Enhanced Raman (SER)-labeled Gene Probe for hybridization, detection and identification of SER-labeled hybridized target oligonucleotide material comprising the steps of immobilizing SER-labeled hybridized target oligonucleotide material on a support means, wherein the SER-labeled hybridized target oligonucleotide material comprise a SER label attached either to a target oligonucleotide of unknown sequence or to a gene probe of known sequence complementary to the target oligonucleotide sequence, the SER label is unique for the target oligonucleotide strands of a particular sequence wherein the SER-labeled oligonucleotide is hybridized to its complementary oligonucleotide strand, then the support means having the SER-labeled hybridized target oligonucleotide material adsorbed thereon is SERS activated with a SERS activating means, then the support means is analyzed.
Diagnostic pitfalls in sporadic transthyretin familial amyloid polyneuropathy (TTR-FAP).
Planté-Bordeneuve, V; Ferreira, A; Lalu, T; Zaros, C; Lacroix, C; Adams, D; Said, G
2007-08-14
Transthyretin familial amyloid polyneuropathies (TTR-FAPs) are autosomal dominant neuropathies of fatal outcome within 10 years after inaugural symptoms. Late diagnosis in patients who present as nonfamilial cases delays adequate management and genetic counseling. Clinical data of the 90 patients who presented as nonfamilial cases of the 300 patients of our cohort of patients with TTR-FAP were reviewed. They were 21 women and 69 men with a mean age at onset of 61 (extremes: 38 to 78 years) and 17 different mutations of the TTR gene including Val30Met (38 cases), Ser77Tyr (16 cases), Ile107Val (15 cases), and Ser77Phe (5 cases). Initial manifestations included mainly limb paresthesias (49 patients) or pain (17 patients). Walking difficulty and weakness (five patients) and cardiac or gastrointestinal manifestations (five patients), were less common at onset. Mean interval to diagnosis was 4 years (range 1 to 10 years); 18 cases were mistaken for chronic inflammatory demyelinating polyneuropathy, which was the most common diagnostic error. At referral a length-dependent sensory loss affected the lower limbs in 2, all four limbs in 20, and four limbs and anterior trunk in 77 patients. All sensations were affected in 60 patients (67%), while small fiber dysfunction predominated in the others. Severe dysautonomia affected 80 patients (90%), with postural hypotension in 52, gastrointestinal dysfunction in 50, impotence in 58 of 69 men, and sphincter disturbance in 31. Twelve patients required a cardiac pacemaker. Nerve biopsy was diagnostic in 54 of 65 patients and salivary gland biopsy in 20 of 30. Decreased nerve conduction velocity, increased CSF protein, negative biopsy findings, and false immunolabeling of amyloid deposits were the main causes of diagnostic errors. We conclude that DNA testing, which is the most reliable test for TTR-FAP, should be performed in patients with a progressive length-dependent small fiber polyneuropathy of unknown origin, especially when associated with autonomic dysfunction.
Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.
Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda
2017-09-01
We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.
Decoy-state quantum key distribution with more than three types of photon intensity pulses
NASA Astrophysics Data System (ADS)
Chau, H. F.
2018-04-01
The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events plus the error rate of single-photon events are needed to give a good enough lower bound of the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate up to about 1 % relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased bases selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds of the above detection and error rates is numerically stable, although these bounds are related to the inversion of a high condition number matrix.
Using endemic road features to create self-explaining roads and reduce vehicle speeds.
Charlton, Samuel G; Mackie, Hamish W; Baas, Peter H; Hay, Karen; Menezes, Miguel; Dixon, Claire
2010-11-01
This paper describes a project undertaken to establish a self-explaining roads (SER) design programme on existing streets in an urban area. The methodology focussed on developing a process to identify functional road categories and designs based on endemic road characteristics taken from functional exemplars in the study area. The study area was divided into two sections, one to receive SER treatments designed to maximise visual differences between road categories, and a matched control area to remain untreated for purposes of comparison. The SER design for local roads included increased landscaping and community islands to limit forward visibility, and removal of road markings to create a visually distinct road environment. In comparison, roads categorised as collectors received increased delineation, addition of cycle lanes, and improved amenity for pedestrians. Speed data collected 3 months after implementation showed a significant reduction in vehicle speeds on local roads and increased homogeneity of speeds on both local and collector roads. The objective speed data, combined with residents' speed choice ratings, indicated that the project was successful in creating two discriminably different road categories. 2010 Elsevier Ltd. All rights reserved.
Realizing Serine/Threonine Ligation: Scope and Limitations and Mechanistic Implication Thereof
NASA Astrophysics Data System (ADS)
Wong, Clarence; Li, Tianlu; Lam, Hiu Yung; Zhang, Yinfeng; LI, Xuechen
2014-05-01
Serine/Threonine ligation (STL) has emerged as an alternative tool for protein chemical synthesis, bioconjugations as well as macrocyclization of peptides of various sizes. Owning to the high abundance of Ser/Thr residues in natural peptides and proteins, STL is expected to find a wide range of applications in chemical biology research. Herein, we have fully investigated the compatibility of the serine/threonine ligation strategy for X-Ser/Thr ligation sites, where X is any of the 20 naturally occurring amino acids. Our studies have shown that 17 amino acids are suitable for ligation, while Asp, Glu, and Lys are not compatible. Among the working 17 C-terminal amino acids, the retarded reaction resulted from the bulky β-branched amino acid (Thr, Val and Ile) is not seen under the current ligation condition. We have also investigated the chemoselectivity involving the amino group of the internal lysine which may compete with the N-terminal Ser/Thr for reaction with the C-terminal salicylaldehyde (SAL) ester aldehyde group. The result suggested that the free internal amino group does not adversely slow down the ligation rate.
Families as Partners in Hospital Error and Adverse Event Surveillance
Khan, Alisa; Coffey, Maitreya; Litterer, Katherine P.; Baird, Jennifer D.; Furtak, Stephannie L.; Garcia, Briana M.; Ashland, Michele A.; Calaman, Sharon; Kuzma, Nicholas C.; O’Toole, Jennifer K.; Patel, Aarti; Rosenbluth, Glenn; Destino, Lauren A.; Everhart, Jennifer L.; Good, Brian P.; Hepps, Jennifer H.; Dalal, Anuj K.; Lipsitz, Stuart R.; Yoon, Catherine S.; Zigmont, Katherine R.; Srivastava, Rajendu; Starmer, Amy J.; Sectish, Theodore C.; Spector, Nancy D.; West, Daniel C.; Landrigan, Christopher P.
2017-01-01
IMPORTANCE Medical errors and adverse events (AEs) are common among hospitalized children. While clinician reports are the foundation of operational hospital safety surveillance and a key component of multifaceted research surveillance, patient and family reports are not routinely gathered. We hypothesized that a novel family-reporting mechanism would improve incident detection. OBJECTIVE To compare error and AE rates (1) gathered systematically with vs without family reporting, (2) reported by families vs clinicians, and (3) reported by families vs hospital incident reports. DESIGN, SETTING, AND PARTICIPANTS We conducted a prospective cohort study including the parents/caregivers of 989 hospitalized patients 17 years and younger (total 3902 patient-days) and their clinicians from December 2014 to July 2015 in 4 US pediatric centers. Clinician abstractors identified potential errors and AEs by reviewing medical records, hospital incident reports, and clinician reports as well as weekly and discharge Family Safety Interviews (FSIs). Two physicians reviewed and independently categorized all incidents, rating severity and preventability (agreement, 68%–90%; κ, 0.50–0.68). Discordant categorizations were reconciled. Rates were generated using Poisson regression estimated via generalized estimating equations to account for repeated measures on the same patient. MAIN OUTCOMES AND MEASURES Error and AE rates. RESULTS Overall, 746 parents/caregivers consented for the study. Of these, 717 completed FSIs. Their median (interquartile range) age was 32.5 (26–40) years; 380 (53.0%) were nonwhite, 566 (78.9%) were female, 603 (84.1%) were English speaking, and 380 (53.0%) had attended college. Of 717 parents/caregivers completing FSIs, 185 (25.8%) reported a total of 255 incidents, which were classified as 132 safety concerns (51.8%), 102 nonsafety-related quality concerns (40.0%), and 21 other concerns (8.2%). These included 22 preventable AEs (8.6%), 17 nonharmful medical errors (6.7%), and 11 nonpreventable AEs (4.3%) on the study unit. In total, 179 errors and 113 AEs were identified from all sources. Family reports included 8 otherwise unidentified AEs, including 7 preventable AEs. Error rates with family reporting (45.9 per 1000 patient-days) were 1.2-fold (95%CI, 1.1–1.2) higher than rates without family reporting (39.7 per 1000 patient-days). Adverse event rates with family reporting (28.7 per 1000 patient-days) were 1.1-fold (95%CI, 1.0–1.2; P=.006) higher than rates without (26.1 per 1000 patient-days). Families and clinicians reported similar rates of errors (10.0 vs 12.8 per 1000 patient-days; relative rate, 0.8; 95%CI, .5–1.2) and AEs (8.5 vs 6.2 per 1000 patient-days; relative rate, 1.4; 95%CI, 0.8–2.2). Family-reported error rates were 5.0-fold (95%CI, 1.9–13.0) higher and AE rates 2.9-fold (95% CI, 1.2–6.7) higher than hospital incident report rates. CONCLUSIONS AND RELEVANCE Families provide unique information about hospital safety and should be included in hospital safety surveillance in order to facilitate better design and assessment of interventions to improve safety. PMID:28241211
Star tracker error analysis: Roll-to-pitch nonorthogonality
NASA Technical Reports Server (NTRS)
Corson, R. W.
1979-01-01
An error analysis is described on an anomaly isolated in the star tracker software line of sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases which implied that either one or both of the star tracker measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of error.
Based on time and spatial-resolved SERS mapping strategies for detection of pesticides.
Ma, Bingbing; Li, Pan; Yang, Liangbao; Liu, Jinhuai
2015-08-15
For the sensitive and convenient detection of pesticides, several sensing methods and materials have been widely explored. However, it is still a challenge to obtain sensitive, simple detection techniques for pesticides. Here, the simple and sensitive Time-resolved SERS mapping (T-SERS) and Spatial-resolved SERS mapping (S-SERS) are presented for detection of pesticides by using Au@Ag NPs as SERS substrate. The Time-resolved SERS mapping (T-SERS) is based on state translation nanoparticles from the wet state to the dry state to realize SERS measurements. During the SERS measurement, adhesive force drives the particles closer together and then average interparticle gap becomes smaller. Following, air then begins to intersperse into the liquid network and the particles are held together by adhesive forces at the solid-liquid-air interface. In the late stage of water evaporation, all particles are uniformly distributed. Thus, so called hotspots matrix that can hold hotspots between every two adjacent particles in efficient space with minimal polydispersity of particle size are achieved, accompanying the red-shift of surface plasmon peak and appearance of an optimal SPR resonated sharply with excitation wavelength. Here, we found that the T-SERS method exhibits the detection limits of 1-2 orders of magnitude higher than that of S-SERS. On the other hand, the T-SERS is very simple method with high detection sensitivity, better reproducibility (RSD=10.8%) and is beneficial to construction of a calibration curve in comparison with that of Spatial-resolved SERS mapping (S-SERS). Most importantly, as a result of its remarkable sensitivity, T-SERS mapping strategies have been applied to detection of several pesticides and the detect limit can down to 1nM for paraoxon, 0.5nM for sumithion. In short, T-SERS mapping measurement promises to open a market for SERS practical detection with prominent advantages. Copyright © 2015. Published by Elsevier B.V.
Failure analysis and modeling of a VAXcluster system
NASA Technical Reports Server (NTRS)
Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.
1990-01-01
This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources. This is despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors, but also failures occur in bursts. Approximately 40 percent of all failures occur in bursts and involved multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
Error monitoring issues for common channel signaling
NASA Astrophysics Data System (ADS)
Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.
1994-04-01
Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance ofthe SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion on their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D
2008-07-01
English speech acquisition by typically developing 3- to 4-year-old children with monolingual English was compared to English speech acquisition by typically developing 3- to 4-year-old children with bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time for phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences of error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points for all 3 groups was similar, suggesting that all will reach an adult-like system in English with exposure and practice.
Antidepressant and antipsychotic medication errors reported to United States poison control centers.
Kamboj, Alisha; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A
2018-05-08
To investigate unintentional therapeutic medication errors associated with antidepressant and antipsychotic medications in the United States and expand current knowledge on the types of errors commonly associated with these medications. A retrospective analysis of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications was conducted using data from the National Poison Data System. From 2000 to 2012, poison control centers received 207 670 calls reporting unintentional therapeutic errors associated with antidepressant or antipsychotic medications that occurred outside of a health care facility, averaging 15 975 errors annually. The rate of antidepressant-related errors increased by 50.6% from 2000 to 2004, decreased by 6.5% from 2004 to 2006, and then increased 13.0% from 2006 to 2012. The rate of errors related to antipsychotic medications increased by 99.7% from 2000 to 2004 and then increased by 8.8% from 2004 to 2012. Overall, 70.1% of reported errors occurred among adults, and 59.3% were among females. The medications most frequently associated with errors were selective serotonin reuptake inhibitors (30.3%), atypical antipsychotics (24.1%), and other types of antidepressants (21.5%). Most medication errors took place when an individual inadvertently took or was given a medication twice (41.0%), inadvertently took someone else's medication (15.6%), or took the wrong medication (15.6%). This study provides a comprehensive overview of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications. The frequency and rate of these errors increased significantly from 2000 to 2012. Given that use of these medications is increasing in the US, this study provides important information about the epidemiology of the associated medication errors. Copyright © 2018 John Wiley & Sons, Ltd.
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment.
Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment. PMID:22247672
Prediction of pilot reserve attention capacity during air-to-air target tracking
NASA Technical Reports Server (NTRS)
Onstott, E. D.; Faulkner, W. H.
1977-01-01
Reserve attention capacity of a pilot was calculated using a pilot model that allocates exclusive model attention according to the ranking of task urgency functions whose variables are tracking error and error rate. The modeled task consisted of tracking a maneuvering target aircraft both vertically and horizontally, and when possible, performing a diverting side task which was simulated by the precise positioning of an electrical stylus and modeled as a task of constant urgency in the attention allocation algorithm. The urgency of the single loop vertical task is simply the magnitude of the vertical tracking error, while the multiloop horizontal task requires a nonlinear urgency measure of error and error rate terms. Comparison of model results with flight simulation data verified the computed model statistics of tracking error of both axes, lateral and longitudinal stick amplitude and rate, and side task episodes. Full data for the simulation tracking statistics as well as the explicit equations and structure of the urgency function multiaxis pilot model are presented.
The Effects of Non-Normality on Type III Error for Comparing Independent Means
ERIC Educational Resources Information Center
Mendes, Mehmet
2007-01-01
The major objective of this study was to investigate the effects of non-normality on Type III error rates for ANOVA F its three commonly recommended parametric counterparts namely Welch, Brown-Forsythe, and Alexander-Govern test. Therefore these tests were compared in terms of Type III error rates across the variety of population distributions,…
NASA Astrophysics Data System (ADS)
Bezan, Scott; Shirani, Shahram
2006-12-01
To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
Errors in fluid therapy in medical wards.
Mousavi, Maryam; Khalili, Hossein; Dashti-Khavidaki, Simin
2012-04-01
Intravenous fluid therapy remains an essential part of patients' care during hospitalization. There are only few studies that focused on fluid therapy in the hospitalized patients, and there is not any consensus statement about fluid therapy in patients who are hospitalized in medical wards. The aim of the present study was to assess intravenous fluid therapy status and related errors in the patients during the course of hospitalization in the infectious diseases wards of a referral teaching hospital. This study was conducted in the infectious diseases wards of Imam Khomeini Complex Hospital, Tehran, Iran. During a retrospective study, data related to intravenous fluid therapy were collected by two clinical pharmacists of infectious diseases from 2008 to 2010. Intravenous fluid therapy information including indication, type, volume and rate of fluid administration was recorded for each patient. An internal protocol for intravenous fluid therapy was designed based on literature review and available recommendations. The data related to patients' fluid therapy were compared with this protocol. The fluid therapy was considered appropriate if it was compatible with the protocol regarding indication of intravenous fluid therapy, type, electrolyte content and rate of fluid administration. Any mistake in the selection of fluid type, content, volume and rate of administration was considered as intravenous fluid therapy errors. Five hundred and ninety-six of medication errors were detected during the study period in the patients. Overall rate of fluid therapy errors was 1.3 numbers per patient during hospitalization. Errors in the rate of fluid administration (29.8%), incorrect fluid volume calculation (26.5%) and incorrect type of fluid selection (24.6%) were the most common types of errors. The patients' male sex, old age, baseline renal diseases, diabetes co-morbidity, and hospitalization due to endocarditis, HIV infection and sepsis are predisposing factors for the occurrence of fluid therapy errors in the patients. Our result showed that intravenous fluid therapy errors occurred commonly in the hospitalized patients especially in the medical wards. Improvement in knowledge and attention of health-care workers about these errors are essential for preventing of medication errors in aspect of fluid therapy.
Bayes Error Rate Estimation Using Classifier Ensembles
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2003-01-01
The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating th is rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes; unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, a recombined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensem ble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Problem benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
Wu, Zhijin; Liu, Dongmei; Sui, Yunxia
2008-02-01
The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effect or edge effect and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normal distribution of the noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations so true positives can be found through a dose-response curve. Using the activity status defined by dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated based on normal assumption do not agree with actual error rates, for the tails of noise distribution deviate from normal distribution. However, false discovery rate based on empirically estimated null distribution is very close to observed false discovery proportion.
Outpatient Prescribing Errors and the Impact of Computerized Prescribing
Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W
2005-01-01
Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Anthony, E-mail: anthony.arnold@sesiahs.health.nsw.gov.a; Delaney, Geoff P.; Cassapi, Lynette
Purpose: Radiotherapy is a common treatment for cancer patients. Although incidence of error is low, errors can be severe or affect significant numbers of patients. In addition, errors will often not manifest until long periods after treatment. This study describes the development of an incident reporting tool that allows categorical analysis and time trend reporting, covering first 3 years of use. Methods and Materials: A radiotherapy-specific incident analysis system was established. Staff members were encouraged to report actual errors and near-miss events detected at prescription, simulation, planning, or treatment phases of radiotherapy delivery. Trend reporting was reviewed monthly. Results: Reportsmore » were analyzed for the first 3 years of operation (May 2004-2007). A total of 688 reports was received during the study period. The actual error rate was 0.2% per treatment episode. During the study period, the actual error rates reduced significantly from 1% per year to 0.3% per year (p < 0.001), as did the total event report rates (p < 0.0001). There were 3.5 times as many near misses reported compared with actual errors. Conclusions: This system has allowed real-time analysis of events within a radiation oncology department to a reduced error rate through focus on learning and prevention from the near-miss reports. Plans are underway to develop this reporting tool for Australia and New Zealand.« less
Syndromic surveillance for health information system failures: a feasibility study.
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-05-01
To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
Huo, Xiang; Hu, Zhibin; Zhai, Xiangjun; Wang, Yan; Wang, Shui; Wang, Xuechen; Qin, Jianwei; Chen, Wenseng; Jin, Guangfu; Liu, Jiyong; Gao, Jun; Wei, Qingyi; Wang, Xinru; Shen, Hongbing
2007-05-01
The BRCA1 Associated RING Domain (BARD1) gene has been identified as a high penetrance gene for breast cancer, whose germline and somatic mutations were reported in both non-BRCA1/2 hereditary site-specific and sporadic breast cancer cases. BARD1 plays a crucial role in tumor repression, along with its heterodimeric partner BRCA1. In the current study, we tested the hypothesis that common non-synonymous polymorphisms in BARD1 are associated with breast cancer susceptibility in a case-control study of 507 patients with incident breast cancer and 539 frequency-matched cancer-free controls in Chinese women. We genotyped all three common (minor allele frequency (MAF)>0.10) non-synonymous polymorphisms (Pro24Ser, Arg378Ser, and Val507Met) in BARD1. We found that the BARD1 Pro24Ser variant genotypes (24Pro/Ser and 24Ser/Ser) and Arg378Ser variant homozygote 378Ser/Ser were associated with a significantly decreased breast cancer risk, compared with their wild-type homozygotes, respectively. Furthermore, a significant locus-locus interaction was evident between Pro24Ser and Arg378Ser (P(int )= 0.032). Among the 378Ser variant allele carriers, the 24Pro/Pro wild-type homozygote was associated with a significantly increased breast cancer risk (adjusted OR=1.81, 95% CI=1.11-2.95), but the subjects having 24Pro/Ser or Ser/Ser variant genotypes had a significantly decreased risk (adjusted OR=0.74, 95% CI=0.56-0.99). In stratified analysis, this locus-locus interaction was more evident among subjects without family cancer history, those with positive estrogen receptor (ER) and individuals with negative progesterone receptor (PR). These findings indicate that the potentially functional polymorphisms Pro24Ser and Arg378Ser in BARD1 may jointly contribute to the susceptibility of breast cancer.
Vogel, Erin A.; Billups, Sarah J.; Herner, Sheryl J.
2016-01-01
Summary Objective The purpose of this study was to compare the effectiveness of an outpatient renal dose adjustment alert via a computerized provider order entry (CPOE) clinical decision support system (CDSS) versus a CDSS with alerts made to dispensing pharmacists. Methods This was a retrospective analysis of patients with renal impairment and 30 medications that are contraindicated or require dose-adjustment in such patients. The primary outcome was the rate of renal dosing errors for study medications that were dispensed between August and December 2013, when a pharmacist-based CDSS was in place, versus August through December 2014, when a prescriber-based CDSS was in place. A dosing error was defined as a prescription for one of the study medications dispensed to a patient where the medication was contraindicated or improperly dosed based on the patient’s renal function. The denominator was all prescriptions for the study medications dispensed during each respective study period. Results During the pharmacist- and prescriber-based CDSS study periods, 49,054 and 50,678 prescriptions, respectively, were dispensed for one of the included medications. Of these, 878 (1.8%) and 758 (1.5%) prescriptions were dispensed to patients with renal impairment in the respective study periods. Patients in each group were similar with respect to age, sex, and renal function stage. Overall, the five-month error rate was 0.38%. Error rates were similar between the two groups: 0.36% and 0.40% in the pharmacist- and prescriber-based CDSS, respectively (p=0.523). The medication with the highest error rate was dofetilide (0.51% overall) while the medications with the lowest error rate were dabigatran, fondaparinux, and spironolactone (0.00% overall). Conclusions Prescriber- and pharmacist-based CDSS provided comparable, low rates of potential medication errors. Future studies should be undertaken to examine patient benefits of the prescriber-based CDSS. PMID:27466041
Surface enhanced Raman gene probe and methods thereof
Vo-Dinh, T.
1998-09-29
The subject invention disclosed herein is a new gene probe biosensor and methods based on surface enhanced Raman scattering (SERS) label detection. The SER gene probe biosensor comprises a support means, a SER gene probe having at least one oligonucleotide strand labeled with at least one SERS label, and a SERS active substrate disposed on the support means and having at least one of the SER gene probes adsorbed thereon. Biotargets such as bacterial and viral DNA, RNA and PNA are detected using a SER gene probe via hybridization to oligonucleotide strands complementary to the SER gene probe. The support means supporting the SERS active substrate includes a fiberoptic probe, an array of fiberoptic probes for performance of multiple assays and a waveguide microsensor array with charge-coupled devices or photodiode arrays. 18 figs.
Surface enhanced Raman gene probe and methods thereof
Vo-Dinh, Tuan
1998-01-01
The subject invention disclosed herein is a new gene probe biosensor and methods thereof based on surface enhanced Raman scattering (SERS) label detection. The SER gene probe biosensor comprises a support means, a SER gene probe having at least one oligonucleotide strand labeled with at least one SERS label, and a SERS active substrate disposed on the support means and having at least one of the SER gene probes adsorbed thereon. Biotargets such as bacterial and viral DNA, RNA and PNA are detected using a SER gene probe via hybridization to oligonucleotide strands complementary to the SER gene probe. The support means supporting the SERS active substrate includes a fiberoptic probe, an array of fiberoptic probes for performance of multiple assays and a waveguide microsensor array with charge-coupled devices or photodiode arrays.
Surface enhanced Raman gene probe and methods thereof
Vo-Dinh, T.
1998-07-21
The subject invention disclosed is a new gene probe biosensor and methods based on surface enhanced Raman scattering (SERS) label detection. The SER gene probe biosensor comprises a support means, a SER gene probe having at least one oligonucleotide strand labeled with at least one SERS label, and a SERS active substrate disposed on the support means and having at least one of the SER gene probes adsorbed. Biotargets such as bacterial and viral DNA, RNA and PNA are detected using a SER gene probe via hybridization to oligonucleotide strands complementary to the SER gene probe. The support means supporting the SERS active substrate includes a fiberoptic probe, an array of fiberoptic probes for performance of multiple assays and a waveguide microsensor array with charge-coupled devices or photodiode arrays. 18 figs.
Publication bias was not a good reason to discourage trials with low power.
Borm, George F; den Heijer, Martin; Zielhuis, Gerhard A
2009-01-01
The objective was to investigate whether it is justified to discourage trials with less than 80% power. Trials with low power are unlikely to produce conclusive results, but their findings can be used by pooling then in a meta-analysis. However, such an analysis may be biased, because trials with low power are likely to have a nonsignificant result and are less likely to be published than trials with a statistically significant outcome. We simulated several series of studies with varying degrees of publication bias and then calculated the "real" one-sided type I error and the bias of meta-analyses with a "nominal" error rate (significance level) of 2.5%. In single trials, in which heterogeneity was set at zero, low, and high, the error rates were 2.3%, 4.7%, and 16.5%, respectively. In multiple trials with 80%-90% power and a publication rate of 90% when the results were nonsignificant, the error rates could be as high as 5.1%. When the power was 50% and the publication rate of non-significant results was 60%, the error rates did not exceed 5.3%, whereas the bias was at most 15% of the difference used in the power calculation. The impact of publication bias does not warrant the exclusion of trials with 50% power.
Flexible and mechanical strain resistant large area SERS active substrates
NASA Astrophysics Data System (ADS)
Singh, J. P.; Chu, Hsiaoyun; Abell, Justin; Tripp, Ralph A.; Zhao, Yiping
2012-05-01
We report a cost effective and facile way to synthesize flexible, uniform, and large area surface enhanced Raman scattering (SERS) substrates using an oblique angle deposition (OAD) technique. The flexible SERS substrates consist of 1 μm long, tilted silver nanocolumnar films deposited on flexible polydimethylsiloxane (PDMS) and polyethylene terephthalate (PET) sheets using OAD. The SERS enhancement activity of these flexible substrates was determined using 10-5 M trans-1,2-bis(4-pyridyl) ethylene (BPE) Raman probe molecules. The in situ SERS measurements on these flexible substrates under mechanical (tensile/bending) strain conditions were performed. Our results show that flexible SERS substrates can withstand a tensile strain (ε) value as high as 30% without losing SERS performance, whereas the similar bending strain decreases the SERS performance by about 13%. A cyclic tensile loading test on flexible PDMS SERS substrates at a pre-specified tensile strain (ε) value of 10% shows that the SERS intensity remains almost constant for more than 100 cycles. These disposable and flexible SERS substrates can be integrated with biological substances and offer a novel and practical method to facilitate biosensing applications.
Medication Errors in Vietnamese Hospitals: Prevalence, Potential Outcome and Associated Factors
Nguyen, Huong-Thao; Nguyen, Tuan-Dung; van den Heuvel, Edwin R.; Haaijer-Ruskamp, Flora M.; Taxis, Katja
2015-01-01
Background Evidence from developed countries showed that medication errors are common and harmful. Little is known about medication errors in resource-restricted settings, including Vietnam. Objectives To determine the prevalence and potential clinical outcome of medication preparation and administration errors, and to identify factors associated with errors. Methods This was a prospective study conducted on six wards in two urban public hospitals in Vietnam. Data of preparation and administration errors of oral and intravenous medications was collected by direct observation, 12 hours per day on 7 consecutive days, on each ward. Multivariable logistic regression was applied to identify factors contributing to errors. Results In total, 2060 out of 5271 doses had at least one error. The error rate was 39.1% (95% confidence interval 37.8%- 40.4%). Experts judged potential clinical outcomes as minor, moderate, and severe in 72 (1.4%), 1806 (34.2%) and 182 (3.5%) doses. Factors associated with errors were drug characteristics (administration route, complexity of preparation, drug class; all p values < 0.001), and administration time (drug round, p = 0.023; day of the week, p = 0.024). Several interactions between these factors were also significant. Nurse experience was not significant. Higher error rates were observed for intravenous medications involving complex preparation procedures and for anti-infective drugs. Slightly lower medication error rates were observed during afternoon rounds compared to other rounds. Conclusions Potentially clinically relevant errors occurred in more than a third of all medications in this large study conducted in a resource-restricted setting. Educational interventions, focusing on intravenous medications with complex preparation procedure, particularly antibiotics, are likely to improve patient safety. PMID:26383873
Salhi, Hussam E.; Hassel, Nathan C.; Siddiqui, Jalal K.; Brundage, Elizabeth A.; Ziolo, Mark T.; Janssen, Paul M. L.; Davis, Jonathan P.; Biesiadecki, Brandon J.
2016-01-01
Troponin I (TnI) is a major regulator of cardiac muscle contraction and relaxation. During physiological and pathological stress, TnI is differentially phosphorylated at multiple residues through different signaling pathways to match cardiac function to demand. The combination of these TnI phosphorylations can exhibit an expected or unexpected functional integration, whereby the function of two phosphorylations are different than that predicted from the combined function of each individual phosphorylation alone. We have shown that TnI Ser-23/24 and Ser-150 phosphorylation exhibit functional integration and are simultaneously increased in response to cardiac stress. In the current study, we investigated the functional integration of TnI Ser-23/24 and Ser-150 to alter cardiac contraction. We hypothesized that Ser-23/24 and Ser-150 phosphorylation each utilize distinct molecular mechanisms to alter the TnI binding affinity within the thin filament. Mathematical modeling predicts that Ser-23/24 and Ser-150 phosphorylation affect different TnI affinities within the thin filament to distinctly alter the Ca2+-binding properties of troponin. Protein binding experiments validate this assertion by demonstrating pseudo-phosphorylated Ser-150 decreases the affinity of isolated TnI for actin, whereas Ser-23/24 pseudo-phosphorylation is not different from unphosphorylated. Thus, our data supports that TnI Ser-23/24 affects TnI-TnC binding, while Ser-150 phosphorylation alters TnI-actin binding. By measuring force development in troponin-exchanged skinned myocytes, we demonstrate that the Ca2+ sensitivity of force is directly related to the amount of phosphate present on TnI. Furthermore, we demonstrate that Ser-150 pseudo-phosphorylation blunts Ser-23/24-mediated decreased Ca2+-sensitive force development whether on the same or different TnI molecule. Therefore, TnI phosphorylations can integrate across troponins along the myofilament. These data demonstrate that TnI Ser-23/24 and Ser-150 phosphorylation regulates muscle contraction in part by modulating different TnI interactions in the thin filament and it is the combination of these differential mechanisms that provides understanding of their functional integration. PMID:28018230
[Validation of a method for notifying and monitoring medication errors in pediatrics].
Guerrero-Aznar, M D; Jiménez-Mesa, E; Cotrina-Luque, J; Villalba-Moreno, A; Cumplido-Corbacho, R; Fernández-Fernández, L
2014-12-01
To analyze the impact of a multidisciplinary and decentralized safety committee in the pediatric management unit, and the joint implementation of a computing network application for reporting medication errors, monitoring the follow-up of the errors, and an analysis of the improvements introduced. An observational, descriptive, cross-sectional, pre-post intervention study was performed. An analysis was made of medication errors reported to the central safety committee in the twelve months prior to introduction, and those reported to the decentralized safety committee in the management unit in the nine months after implementation, using the computer application, and the strategies generated by the analysis of reported errors. Number of reported errors/10,000 days of stay, number of reported errors with harm per 10,000 days of stay, types of error, categories based on severity, stage of the process, and groups involved in the notification of medication errors. Reported medication errors increased 4.6 -fold, from 7.6 notifications of medication errors per 10,000 days of stay in the pre-intervention period to 36 in the post-intervention, rate ratio 0.21 (95% CI; 0.11-0.39) (P<.001). The medication errors with harm or requiring monitoring reported per 10,000 days of stay, was virtually unchanged from one period to the other ratio rate 0,77 (95% IC; 0,31-1,91) (P>.05). The notification of potential errors or errors without harm per 10,000 days of stay increased 17.4-fold (rate ratio 0.005., 95% CI; 0.001-0.026, P<.001). The increase in medication errors notified in the post-intervention period is a reflection of an increase in the motivation of health professionals to report errors through this new method. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, O; Kalet, A; Smith, W
2016-06-15
Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included inmore » the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in record and verify system (38%, [18–61%]) and incorrect isocenter localization in planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, others errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.« less
Single Event Rates for Devices Sensitive to Particle Energy
NASA Technical Reports Server (NTRS)
Edmonds, L. D.; Scheick, L. Z.; Banker, M. W.
2012-01-01
Single event rates (SER) can include contributions from low-energy particles such that the linear energy transfer (LET) is not constant. Previous work found that the environmental description that is most relevant to the low-energy contribution to the rate is a "stopping rate per unit volume" even when the physical mechanisms for a single-event effect do not require an ion to stop in some device region. Stopping rate tables are presented for four heavy-ion environments that are commonly used to assess device suitability for space applications. A conservative rate estimate utilizing limited test data is derived, and the example of SEGR rate in a power MOSFET is presented.
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
Classification of echolocation clicks from odontocetes in the Southern California Bight.
Roch, Marie A; Klinck, Holger; Baumann-Pickering, Simone; Mellinger, David K; Qui, Simon; Soldevilla, Melissa S; Hildebrand, John A
2011-01-01
This study presents a system for classifying echolocation clicks of six species of odontocetes in the Southern California Bight: Visually confirmed bottlenose dolphins, short- and long-beaked common dolphins, Pacific white-sided dolphins, Risso's dolphins, and presumed Cuvier's beaked whales. Echolocation clicks are represented by cepstral feature vectors that are classified by Gaussian mixture models. A randomized cross-validation experiment is designed to provide conditions similar to those found in a field-deployed system. To prevent matched conditions from inappropriately lowering the error rate, echolocation clicks associated with a single sighting are never split across the training and test data. Sightings are randomly permuted before assignment to folds in the experiment. This allows different combinations of the training and test data to be used while keeping data from each sighting entirely in the training or test set. The system achieves a mean error rate of 22% across 100 randomized three-fold cross-validation experiments. Four of the six species had mean error rates lower than the overall mean, with the presumed Cuvier's beaked whale clicks showing the best performance (<2% error rate). Long-beaked common and bottlenose dolphins proved the most difficult to classify, with mean error rates of 53% and 68%, respectively.
Comparison of disagreement and error rates for three types of interdepartmental consultations.
Renshaw, Andrew A; Gould, Edwin W
2005-12-01
Previous studies have documented a relatively high rate of disagreement for interdepartmental consultations, but follow-up is limited. We reviewed the results of 3 types of interdepartmental consultations in our hospital during a 2-year period, including 328 incoming, 928 pathologist-generated outgoing, and 227 patient- or clinician-generated outgoing consults. The disagreement rate was significantly higher for incoming consults (10.7%) than for outgoing pathologist-generated consults (5.9%) (P = .06). Disagreement rates for outgoing patient- or clinician-generated consults were not significantly different from either other type (7.9%). Additional consultation, biopsy, or testing follow-up was available for 19 (54%) of 35, 14 (25%) of 55, and 6 (33%) of 18 incoming, outgoing pathologist-generated, and outgoing patient- or clinician-generated consults with disagreements, respectively; the percentage of errors varied widely (15/19 [79%], 8/14 [57%], and 2/6 [33%], respectively), but differences were not significant (P >.05 for each). Review of the individual errors revealed specific diagnostic areas in which improvement in performance might be made. Disagreement rates for interdepartmental consultation ranged from 5.9% to 10.7%, but only 33% to 79% represented errors. Additional consultation, tissue, and testing results can aid in distinguishing disagreements from errors.
Error-rate prediction for programmable circuits: methodology, tools and studied cases
NASA Astrophysics Data System (ADS)
Velazco, Raoul
2013-05-01
This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact or energetic particles present in the environment the circuits operate. For a chosen application, the error-rate is predicted by combining the results obtained from radiation ground testing and the results of fault injection campaigns performed off-beam during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates, without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied a complex processor, the Power PC 7448 executing a program issued from a real space application and a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a scientific satellite of NASA. The accuracy of predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measures issued from radiation ground testing performed at the cyclotron Cyclone cyclotron of HIF (Heavy Ion Facility) of Louvain-la-Neuve (Belgium).
Vibrational fingerprinting of bacterial pathogens by surface enhanced Raman scattering (SERS)
NASA Astrophysics Data System (ADS)
Premasiri, W. Ranjith; Moir, D. T.; Ziegler, Lawrence D.
2005-05-01
The surface enhanced Raman scattering (SERS) spectra of vegetative whole-cell bacteria were obtained using in-situ grown gold nanoparticle cluster-covered silicon dioxide substrates excited at 785 nm. SERS spectra of Gram-negative bacteria; E. coli and S. typhimurium, and Gram-positive bacteria; B. subtilis, B. cereus, B. thuringeinsis and B. anthracis Sterne, have been observed. Raman enhancement factors of ~104-105 per cell are found for both Gram positive and Gram negative bacteria on this novel SERS substrate. The bacterial SERS spectra are species specific and exhibit greater species differentiation and reduced spectral congestion than their corresponding non-SERS (bulk) Raman spectra. Fluorescence observed in the 785 nm excited bulk Raman emission of Bacillus species is not apparent in the corresponding SERS spectra. The surface enhancement effect allows the observation of Raman spectra at the single cell level excited by low incident laser powers (< 3 mW) and short data acquisition times (~20 sec.). Comparison with previous SERS studies suggests that these SERS vibrational signatures are sensitively dependent on the specific morphology and nature of the SERS active substrate. Exposure to biological environments, such as human blood serum, has an observable effect on the bacterial SERS spectra. However, reproducible, species specific SERS vibrational fingerprints are still obtained. The potential of SERS for detection and identification of bacteria with species specificity on these gold nanoparticle coated substrates is demonstrated by these results.
Kawabata, Yutaka; Murata, Kousaku; Kawai, Shigeyuki
2015-12-25
Human mitochondrial NAD kinase is a crucial enzyme responsible for the synthesis of mitochondrial NADP(+). Despite its significance, little is known about the regulation of this enzyme in the mitochondria. Several putative and known phosphorylation sites within the protein have been found using phosphoproteomics, and here, we examined the effect of phosphomimetic mutations at six of these sites. The enzymatic activity was downregulated by a substitution of an Asp residue at Ser-289 and Ser-376, but not a substitution of Ala, suggesting that the phosphorylation of these residues downregulates the enzyme. Moreover, the activity was completely inhibited by substituting Ser-188 with an Asp, Glu, or in particular Ala, which highlights two possibilities: first, that Ser-188 is critical for catalytic activity, and second, that phosphorylation of Ser-188 inhibits the activity. Ser-188, Ser-289, and Ser-376 were found to be highly conserved in the primary structures of mitochondrial NAD kinase homologs in higher animals. Moreover, Ser-188 has been frequently detected in human and mouse phosphorylation site studies, whereas Ser-289 and Ser-376 have not. Taken together, this indicates that Ser-188 (and perhaps the other residues) is an important phosphorylation site that can downregulate the NAD kinase activity of this critical enzyme. Copyright © 2015 Elsevier Inc. All rights reserved.
Mosier-Boss, Pamela A.
2017-01-01
Surface enhanced Raman spectroscopy (SERS) has been widely used for chemical detection. Moreover, the inherent richness of the spectral data has made SERS attractive for use in detecting biological materials, including bacteria. This review discusses methods that have been used to obtain SERS spectra of bacteria. The kinds of SERS substrates employed to obtain SERS spectra are discussed as well as how bacteria interact with silver and gold nanoparticles. The roll of capping agents on Ag/Au NPs in obtaining SERS spectra is examined as well as the interpretation of the spectral data. PMID:29137201
Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.
Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M
2006-10-01
Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.
Continuous quantum error correction for non-Markovian decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089
2007-08-15
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximatelymore » follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.« less
Olson, Stephen M; Hussaini, Mohammad; Lewis, James S
2011-05-01
Frozen section analysis is an essential tool for assessing margins intra-operatively to assure complete resection. Many institutions evaluate surgical defect edge tissue provided by the surgeon after the main lesion has been removed. With the increasing use of transoral laser microsurgery, this method is becoming even more prevalent. We sought to evaluate error rates at our large academic institution and to see if sampling errors could be reduced by the simple method change of taking an additional third section on these specimens. All head and neck tumor resection cases from January 2005 through August 2008 with margins evaluated by frozen section were identified by database search. These cases were analyzed by cutting two levels during frozen section and a third permanent section later. All resection cases from August 2008 through July 2009 were identified as well. These were analyzed by cutting three levels during frozen section (the third a 'much deeper' level) and a fourth permanent section later. Error rates for both of these periods were determined. Errors were separated into sampling and interpretation types. There were 4976 total frozen section specimens from 848 patients. The overall error rate was 2.4% for all frozen sections where just two levels were evaluated and was 2.5% when three levels were evaluated (P=0.67). The sampling error rate was 1.6% for two-level sectioning and 1.2% for three-level sectioning (P=0.42). However, when considering only the frozen section cases where tumor was ultimately identified (either at the time of frozen section or on permanent sections) the sampling error rate for two-level sectioning was 15.3 versus 7.4% for three-level sectioning. This difference was statistically significant (P=0.006). Cutting a single additional 'deeper' level at the time of frozen section identifies more tumor-bearing specimens and may reduce the number of sampling errors.
Global Vertical Rates from VLBl
NASA Technical Reports Server (NTRS)
Ma, Chopo; MacMillan, D.; Petrov, L.
2003-01-01
The analysis of global VLBI observations provides vertical rates for 50 sites with formal errors less than 2 mm/yr and median formal error of 0.4 mm/yr. These sites are largely in Europe and North America with a few others in east Asia, Australia, South America and South Africa. The time interval of observations is up to 20 years. The error of the velocity reference frame is less than 0.5 mm/yr, but results from several sites with observations from more than one antenna suggest that the estimated vertical rates may have temporal variations or non-geophysical components. Comparisons with GPS rates and corresponding site position time series will be discussed.
Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN
NASA Astrophysics Data System (ADS)
Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.
2016-12-01
In this study, PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over Urmia Lake basin in northwest of Iran. Different analytical approaches and indexes are used to examine PERSIANN precision in detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and FA estimation biases while continues decomposition of systematic and random error components are also analyzed seasonally and categorically. New interpretation of estimation accuracy named "reliability on PERSIANN estimations" is introduced while the changing manners of existing categorical/statistical measures and error components are also seasonally analyzed over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over Urmia lake basin as a semi-arid region in the middle-east, including the followings: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant as a sign for the existence of systematic error. - Low level of reliability is observed on PERSIANN estimations at different categories which are mostly associated with high level of FA error. However, it is observed that as the rate of precipitation increase, the ability and precision of PERSIANN in rainfall detection also increases. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall rather than to have bias due to rainfall uncertainties. The level of systematic error also considerably increases in heavier rainfalls. It is also important to note that PERSIANN error characteristics at each season varies due to the condition and rainfall patterns of that season which shows the necessity of seasonally different approach for the calibration of this product. Overall, we believe that different error component's analysis performed in this study, can substantially help any further local studies for post-calibration and bias reduction of PERSIANN estimations.
Feedback on prescribing errors to junior doctors: exploring views, problems and preferred methods.
Bertels, Jeroen; Almoudaris, Alex M; Cortoos, Pieter-Jan; Jacklin, Ann; Franklin, Bryony Dean
2013-06-01
Prescribing errors are common in hospital inpatients. However, the literature suggests that doctors are often unaware of their errors as they are not always informed of them. It has been suggested that providing more feedback to prescribers may reduce subsequent error rates. Only few studies have investigated the views of prescribers towards receiving such feedback, or the views of hospital pharmacists as potential feedback providers. Our aim was to explore the views of junior doctors and hospital pharmacists regarding feedback on individual doctors' prescribing errors. Objectives were to determine how feedback was currently provided and any associated problems, to explore views on other approaches to feedback, and to make recommendations for designing suitable feedback systems. A large London NHS hospital trust. To explore views on current and possible feedback mechanisms, self-administered questionnaires were given to all junior doctors and pharmacists, combining both 5-point Likert scale statements and open-ended questions. Agreement scores for statements regarding perceived prescribing error rates, opinions on feedback, barriers to feedback, and preferences for future practice. Response rates were 49% (37/75) for junior doctors and 57% (57/100) for pharmacists. In general, doctors did not feel threatened by feedback on their prescribing errors. They felt that feedback currently provided was constructive but often irregular and insufficient. Most pharmacists provided feedback in various ways; however some did not or were inconsistent. They were willing to provide more feedback, but did not feel it was always effective or feasible due to barriers such as communication problems and time constraints. Both professional groups preferred individual feedback with additional regular generic feedback on common or serious errors. Feedback on prescribing errors was valued and acceptable to both professional groups. From the results, several suggested methods of providing feedback on prescribing errors emerged. Addressing barriers such as the identification of individual prescribers would facilitate feedback in practice. Research investigating whether or not feedback reduces the subsequent error rate is now needed.
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix comprises visual masking by luminance and contrast technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
[Diagnostic Errors in Medicine].
Buser, Claudia; Bankova, Andriyana
2015-12-09
The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.
On the robustness of bucket brigade quantum RAM
NASA Astrophysics Data System (ADS)
Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa
2015-12-01
We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett.100 160501). Due to a result of Regev and Schiff (ICALP ’08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o({2}-n/2) (where N={2}n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion Harrow et al (2009 Phys. Rev. Lett.103 150502) or quantum machine learning Rebentrost et al (2014 Phys. Rev. Lett.113 130503) that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of ‘active’ gates, since all components have to be actively error corrected.
Error-Related Psychophysiology and Negative Affect
ERIC Educational Resources Information Center
Hajcak, G.; McDonald, N.; Simons, R.F.
2004-01-01
The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced instrumental and troposphere correlated errors will increasingly become more important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
Optimal Hotspots of Dynamic Surfaced-Enhanced Raman Spectroscopy for Drugs Quantitative Detection.
Yan, Xiunan; Li, Pan; Zhou, Binbin; Tang, Xianghu; Li, Xiaoyun; Weng, Shizhuang; Yang, Liangbao; Liu, Jinhuai
2017-05-02
Surface-enhanced Raman spectroscopy (SERS) as a powerful qualitative analysis method has been widely applied in many fields. However, SERS for quantitative analysis still suffers from several challenges partially because of the absence of stable and credible analytical strategy. Here, we demonstrate that the optimal hotspots created from dynamic surfaced-enhanced Raman spectroscopy (D-SERS) can be used for quantitative SERS measurements. In situ small-angle X-ray scattering was carried out to in situ real-time monitor the formation of the optimal hotspots, where the optimal hotspots with the most efficient hotspots were generated during the monodisperse Au-sol evaporating process. Importantly, the natural evaporation of Au-sol avoids the nanoparticles instability of salt-induced, and formation of ordered three-dimensional hotspots allows SERS detection with excellent reproducibility. Considering SERS signal variability in the D-SERS process, 4-mercaptopyridine (4-mpy) acted as internal standard to validly correct and improve stability as well as reduce fluctuation of signals. The strongest SERS spectra at the optimal hotspots of D-SERS have been extracted to statistics analysis. By using the SERS signal of 4-mpy as a stable internal calibration standard, the relative SERS intensity of target molecules demonstrated a linear response versus the negative logarithm of concentrations at the point of strongest SERS signals, which illustrates the great potential for quantitative analysis. The public drugs 3,4-methylenedioxymethamphetamine and α-methyltryptamine hydrochloride obtained precise analysis with internal standard D-SERS strategy. As a consequence, one has reason to believe our approach is promising to challenge quantitative problems in conventional SERS analysis.
Confidence Intervals for Error Rates Observed in Coded Communications Systems
NASA Astrophysics Data System (ADS)
Hamkins, J.
2015-05-01
We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Shihai; Lo, Chien-Chi; Li, Po-E
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method, based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read and compares this to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the truemore » positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.« less
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
Feng, Shihai; Lo, Chien-Chi; Li, Po-E; ...
2016-02-29
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method, based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read and compares this to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the truemore » positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.« less
Refractive errors in children and adolescents in Bucaramanga (Colombia).
Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly
2017-01-01
The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.
Weaver, Amy L; Stutzman, Sonja E; Supnet, Charlene; Olson, DaiWai M
2016-03-01
The emergency department (ED) is demanding and high risk. The impact of sleep quantity has been hypothesized to impact patient care. This study investigated the hypothesis that fatigue and impaired mentation, due to sleep disturbance and shortened overall sleeping hours, would lead to increased nursing errors. This is a prospective observational study of 30 ED nurses using self-administered survey and sleep architecture measured by wrist actigraphy as predictors of self-reported error rates. An actigraphy device was worn prior to working a 12-hour shift and nurses completed the Pittsburgh Sleep Quality Index (PSQI). Error rates were reported on a visual analog scale at the end of a 12-hour shift. The PSQI responses indicated that 73.3% of subjects had poor sleep quality. Lower sleep quality measured by actigraphy (hours asleep/hours in bed) was associated with higher self-perceived minor errors. Sleep quantity (total hours slept) was not associated with minor, moderate, nor severe errors. Our study found that ED nurses' sleep quality, immediately prior to a working 12-hour shift, is more predictive of error than sleep quantity. These results present evidence that a "good night's sleep" prior to working a nursing shift in the ED is beneficial for reducing minor errors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Advancing the research agenda for diagnostic error reduction.
Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep
2013-10-01
Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.
Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate
NASA Astrophysics Data System (ADS)
Chau, H. F.
2002-12-01
A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5-0.1(5)≈27.6%, thereby making it the most error resistant scheme known to date.
Comparison of a Virtual Older Driver Assessment with an On-Road Driving Test.
Eramudugolla, Ranmalee; Price, Jasmine; Chopra, Sidhant; Li, Xiaolan; Anstey, Kaarin J
2016-12-01
To design a low-cost simulator-based driving assessment for older adults and to compare its validity with that of an on-road driving assessment and other measures of older driver risk. Cross-sectional observational study. Canberra, Australia. Older adult drivers (N = 47; aged 65-88, mean age 75.2). Error rate on a simulated drive with environment and scoring procedure matched to those of an on-road test. Other measures included participant age, simulator sickness severity, neuropsychological measures, and driver screening measures. Outcome variables included occupational therapist (OT)-rated on-road errors, on-road safety rating, and safety category. Participants' error rate on the simulated drive was significantly correlated with their OT-rated driving safety (correlation coefficient (r) = -0.398, P = .006), even after adjustment for age and simulator sickness (P = .009). The simulator error rate was a significant predictor of categorization as unsafe on the road (P = .02, sensitivity 69.2%, specificity 100%), with 13 (27%) drivers assessed as unsafe. Simulator error was also associated with other older driver safety screening measures such as useful field of view (r = 0.341, P = .02), DriveSafe (r = -0.455, P < .01), and visual motion sensitivity (r = 0.368, P = .01) but was not associated with memory (delayed word recall) or global cognition (Mini-Mental State Examination). Drivers made twice as many errors on the simulated assessment as during the on-road assessment (P < .001), with significant differences in the rate and type of errors between the two mediums. A low-cost simulator-based assessment is valid as a screening instrument for identifying at-risk older drivers but not as an alternative to on-road evaluation when accurate data on competence or pattern of impairment is required for licensing decisions and training programs. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.
Report of the 1988 2-D Intercomparison Workshop, chapter 3
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar
1989-01-01
Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer tranport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Suchs errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially-high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.
Allan, Darcey M.; Lonigan, Christopher J.
2014-01-01
Although both the Continuous Performance Test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (Mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An ADHD-rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across four temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to one type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. PMID:25419645
Allan, Darcey M; Lonigan, Christopher J
2015-06-01
Although both the continuous performance test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An attention deficit/hyperactivity disorder (ADHD) rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across 4 temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to 1 type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. (c) 2015 APA, all rights reserved).
Cochran, Gary L; Barrett, Ryan S; Horn, Susan D
2016-08-01
The role of pharmacist transcription, onsite pharmacist dispensing, use of automated dispensing cabinets (ADCs), nurse-nurse double checks, or barcode-assisted medication administration (BCMA) in reducing medication error rates in critical access hospitals (CAHs) was evaluated. Investigators used the practice-based evidence methodology to identify predictors of medication errors in 12 Nebraska CAHs. Detailed information about each medication administered was recorded through direct observation. Errors were identified by comparing the observed medication administered with the physician's order. Chi-square analysis and Fisher's exact test were used to measure differences between groups of medication-dispensing procedures. Nurses observed 6497 medications being administered to 1374 patients. The overall error rate was 1.2%. The transcription error rates for orders transcribed by an onsite pharmacist were slightly lower than for orders transcribed by a telepharmacy service (0.10% and 0.33%, respectively). Fewer dispensing errors occurred when medications were dispensed by an onsite pharmacist versus any other method of medication acquisition (0.10% versus 0.44%, p = 0.0085). The rates of dispensing errors for medications that were retrieved from a single-cell ADC (0.19%), a multicell ADC (0.45%), or a drug closet or general supply (0.77%) did not differ significantly. BCMA was associated with a higher proportion of dispensing and administration errors intercepted before reaching the patient (66.7%) compared with either manual double checks (10%) or no BCMA or double check (30.4%) of the medication before administration (p = 0.0167). Onsite pharmacist dispensing and BCMA were associated with fewer medication errors and are important components of a medication safety strategy in CAHs. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Type I and Type II error concerns in fMRI research: re-balancing the scale
Cunningham, William A.
2009-01-01
Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
Accuracy assessment of high-rate GPS measurements for seismology
NASA Astrophysics Data System (ADS)
Elosegui, P.; Davis, J. L.; Ekström, G.
2007-12-01
Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05~mm, with maximum position errors of 0.1~mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5~mm, with maximum position errors of 10~mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10~m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2--3 as the GPS constellation changes throughout the day, with an average value of 3.5~mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400~km baseline of 9~mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.
The incidence and severity of errors in pharmacist-written discharge medication orders.
Onatade, Raliat; Sawieres, Sara; Veck, Alexandra; Smith, Lindsay; Gore, Shivani; Al-Azeib, Sumiah
2017-08-01
Background Errors in discharge prescriptions are problematic. When hospital pharmacists write discharge prescriptions improvements are seen in the quality and efficiency of discharge. There is limited information on the incidence of errors in pharmacists' medication orders. Objective To investigate the extent and clinical significance of errors in pharmacist-written discharge medication orders. Setting 1000-bed teaching hospital in London, UK. Method Pharmacists in this London hospital routinely write discharge medication orders as part of the clinical pharmacy service. Convenient days, based on researcher availability, between October 2013 and January 2014 were selected. Pre-registration pharmacists reviewed all discharge medication orders written by pharmacists on these days and identified discrepancies between the medication history, inpatient chart, patient records and discharge summary. A senior clinical pharmacist confirmed the presence of an error. Each error was assigned a potential clinical significance rating (based on the NCCMERP scale) by a physician and an independent senior clinical pharmacist, working separately. Main outcome measure Incidence of errors in pharmacist-written discharge medication orders. Results 509 prescriptions, written by 51 pharmacists, containing 4258 discharge medication orders were assessed (8.4 orders per prescription). Ten prescriptions (2%), contained a total of ten erroneous orders (order error rate-0.2%). The pharmacist considered that one error had the potential to cause temporary harm (0.02% of all orders). The physician did not rate any of the errors with the potential to cause harm. Conclusion The incidence of errors in pharmacists' discharge medication orders was low. The quality, safety and policy implications of pharmacists routinely writing discharge medication orders should be further explored.
Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates
Bartroff, Jay; Song, Jinlin
2014-01-01
This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
An experiment in software reliability: Additional analyses using data from automated replications
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Lauterbach, Linda A.
1988-01-01
A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected is used to confirm the results of two Boeing studies reported in NASA-CR-165836 Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378 Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs; an assumption which underlies some of the current models of software reliability. data raises new questions concerning the phenomenon of interacting faults.
Kim, Myoungsoo
2010-04-01
The purpose of this study was to examine the impact of strategies to promote reporting of errors on nurses' attitude to reporting errors, organizational culture related to patient safety, intention to report and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, X(2)-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083, p<.001). There was no significant difference in some categories for organizational culture and intention to report. The study findings indicate that strategies that promote reporting of errors play an important role in producing positive attitudes to reporting errors and improving behavior of reporting. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not amore » well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.« less
The Relationship among Correct and Error Oral Reading Rates and Comprehension.
ERIC Educational Resources Information Center
Roberts, Michael; Smith, Deborah Deutsch
1980-01-01
Eight learning disabled boys (10 to 12 years old) who were seriously deficient in both their oral reading and comprehension performances participated in the study which investigated, through an applied behavior analysis model, the interrelationships of three reading variables--correct oral reading rates, error oral reading rates, and percentage of…
Tucakov, Anna Katharina; Yavuz, Sabine; Schürmann, Eva-Maria; Mischler, Manjula; Klingebeil, Anne; Meyers, Gregor
2018-01-01
The classical swine fever virus (CSFV) represents one of the most important pathogens of swine. The CSFV glycoprotein E rns is an essential structural protein and an important virulence factor. The latter is dependent on the RNase activity of this envelope protein and, most likely, its secretion from the infected cell. A further important feature with regard to its function as a virulence factor is the formation of disulfide-linked E rns homodimers that are found in virus-infected cells and virions. Mutant CSFV lacking cysteine (Cys) 171, the residue responsible for intermolecular disulfide bond formation, were found to be attenuated in pigs (Tews BA, Schürmann EM, Meyers G. J Virol 2009;83:4823-4834). In the course of an animal experiment with such a dimerization-negative CSFV mutant, viruses were reisolated from pigs that contained a mutation of serine (Ser) 209 to Cys. This mutation restored the ability to form disulphide-linked E rns homodimers. In transient expression studies E rns mutants carrying the S209C change were found to form homodimers with about wt efficiency. Also the secretion level of the mutated proteins was equivalent to that of wt E rns . Virus mutants containing the Cys171Ser/Ser209Cys configuration exhibited wt growth rates and increased virulence when compared with the Cys171Ser mutant. These results provide further support for the connection between CSFV virulence and E rns dimerization.
NASA Technical Reports Server (NTRS)
Clinton, N. J. (Principal Investigator)
1980-01-01
Labeling errors made in the large area crop inventory experiment transition year estimates by Earth Observation Division image analysts are identified and quantified. The analysis was made from a subset of blind sites in six U.S. Great Plains states (Oklahoma, Kansas, Montana, Minnesota, North and South Dakota). The image interpretation basically was well done, resulting in a total omission error rate of 24 percent and a commission error rate of 4 percent. The largest amount of error was caused by factors beyond the control of the analysts who were following the interpretation procedures. The odd signatures, the largest error cause group, occurred mostly in areas of moisture abnormality. Multicrop labeling was tabulated showing the distribution of labeling for all crops.
Selective Formation of Ser-His Dipeptide via Phosphorus Activation
NASA Astrophysics Data System (ADS)
Shu, Wanyun; Yu, Yongfei; Chen, Su; Yan, Xia; Liu, Yan; Zhao, Yufen
2018-04-01
The Ser-His dipeptide is the shortest active peptide. This dipeptide not only hydrolyzes proteins and DNA but also catalyzes the formation of peptides and phosphodiester bonds. As a potential candidate for the prototype of modern hydrolase, Ser-His has attracted increasing attention. To explore if Ser-His could be obtained efficiently in the prebiotic condition, we investigated the reactions of N-DIPP-Ser with His or other amino acids in an aqueous system. We observed that N-DIPP-Ser incubated with His can form Ser-His more efficiently than with other amino acids. A synergistic effect involving the two side chains of Ser and His is presumed to be the critical factor for the selectivity of this specific peptide formation.
Assessing Telomere Length Using Surface Enhanced Raman Scattering
NASA Astrophysics Data System (ADS)
Zong, Shenfei; Wang, Zhuyuan; Chen, Hui; Cui, Yiping
2014-11-01
Telomere length can provide valuable insight into telomeres and telomerase related diseases, including cancer. Here, we present a brand-new optical telomere length measurement protocol using surface enhanced Raman scattering (SERS). In this protocol, two single strand DNA are used as SERS probes. They are labeled with two different Raman molecules and can specifically hybridize with telomeres and centromere, respectively. First, genome DNA is extracted from cells. Then the telomere and centromere SERS probes are added into the genome DNA. After hybridization with genome DNA, excess SERS probes are removed by magnetic capturing nanoparticles. Finally, the genome DNA with SERS probes attached is dropped onto a SERS substrate and subjected to SERS measurement. Longer telomeres result in more attached telomere probes, thus a stronger SERS signal. Consequently, SERS signal can be used as an indicator of telomere length. Centromere is used as the inner control. By calibrating the SERS intensity of telomere probe with that of the centromere probe, SERS based telomere measurement is realized. This protocol does not require polymerase chain reaction (PCR) or electrophoresis procedures, which greatly simplifies the detection process. We anticipate that this easy-operation and cost-effective protocol is a fine alternative for the assessment of telomere length.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
Quantitative evaluation of patient-specific quality assurance using online dosimetry system
NASA Astrophysics Data System (ADS)
Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk
2018-01-01
In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) error-detection capability evaluation by deliberately introduced machine error. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). A dosimetric verification comparison of physical dosimetry system (Delt4PT) and online dosimetry system (MFX), gamma passing rates of the two dosimetry systems showed very good agreement with treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. For the error-detection comparison of Delta4PT and MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion with the magnitude of error exceeding 2 mm and 1.5 ◦, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of PTV due to error magnitude showed good agreement between calculated TPS and measured MFX within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d≤20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL˜exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
Global distortion of GPS networks associated with satellite antenna model errors
NASA Astrophysics Data System (ADS)
Cardellach, E.; Elósegui, P.; Davis, J. L.
2007-07-01
Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by ˜1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm yr-1 level, which will impact high-precision crustal deformation studies.
Global Distortion of GPS Networks Associated with Satellite Antenna Model Errors
NASA Technical Reports Server (NTRS)
Cardellach, E.; Elosequi, P.; Davis, J. L.
2007-01-01
Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by approx.1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PC0 errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.
Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran
2016-11-01
Medication errors, including infusion prescription errors are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators could minimise these errors. To evaluate the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom. We devised an electronic infusion calculator that calculates the appropriate concentration, rate and dose for the selected medication based on the recorded weight and age of the child and then prints into a valid prescription chart. Electronic infusion calculator was implemented from April 2015 in Paediatric Critical Care Unit. A prospective study, five months before and five months after implementation of electronic infusion calculator, was conducted. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed prior to electronic infusion calculator implementation and 119 electronic infusion calculator prescriptions were reviewed after electronic infusion calculator implementation. Handwritten prescriptions had higher error rate (32.6%) as compared to electronic infusion calculator prescriptions (<1%) with a p < 0.001. Electronic infusion calculator prescriptions had no errors on dose, volume and rate calculation as compared to handwritten prescriptions, hence warranting very few pharmacy interventions. Use of electronic infusion calculator for infusion prescription significantly reduced the total number of infusion prescribing errors in Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
Analysis of the “naming game” with learning errors in communications
NASA Astrophysics Data System (ADS)
Lou, Yang; Chen, Guanrong
2015-07-01
Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-07-16
Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.
Optimizing the learning rate for adaptive estimation of neural encoding models
2018-01-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
Estimating error rates for firearm evidence identifications in forensic science
Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan
2018-01-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
Syndromic surveillance for health information system failures: a feasibility study
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-01-01
Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193
Estimating error rates for firearm evidence identifications in forensic science.
Song, John; Vorburger, Theodore V; Chu, Wei; Yen, James; Soons, Johannes A; Ott, Daniel B; Zhang, Nien Fan
2018-03-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. Published by Elsevier B.V.
Data quality in a DRG-based information system.
Colin, C; Ecochard, R; Delahaye, F; Landrivon, G; Messy, P; Morgon, E; Matillon, Y
1994-09-01
The aim of this study initiated in May 1990 was to evaluate the quality of the medical data collected from the main hospital of the "Hospices Civils de Lyon", Edouard Herriot Hospital. We studied a random sample of 593 discharge abstracts from 12 wards of the hospital. Quality control was performed by checking multi-hospitalized patients' personal data, checking that each discharge abstract was exhaustive, examining the quality of abstracting, studying diagnoses and medical procedures coding, and checking data entry. Assessment of personal data showed a 4.4% error rate. It was mainly accounted for by spelling mistakes in surnames and first names, and mistakes in dates of birth. The quality of a discharge abstract was estimated according to the two purposes of the medical information system: description of hospital morbidity per patient and Diagnosis Related Group's case mix. Error rates in discharge abstracts were expressed in two ways: an overall rate for errors of concordance between Discharge Abstracts and Medical Records, and a specific rate for errors modifying classification in Diagnosis Related Groups (DRG). For abstracting medical information, these error rates were 11.5% (SE +/- 2.2) and 7.5% (SE +/- 1.9) respectively. For coding diagnoses and procedures, they were 11.4% (SE +/- 1.5) and 1.3% (SE +/- 0.5) respectively. For data entry on the computerized data base, the error rate was 2% (SE +/- 0.5) and 0.2% (SE +/- 0.05). Quality control must be performed regularly because it demonstrates the degree of participation from health care teams and the coherence of the database.(ABSTRACT TRUNCATED AT 250 WORDS)
Non-health care facility anticonvulsant medication errors in the United States.
DeDonato, Emily A; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A
2018-06-01
This study provides an epidemiological description of non-health care facility medication errors involving anticonvulsant drugs. A retrospective analysis of National Poison Data System data was conducted on non-health care facility medication errors involving anticonvulsant drugs reported to US Poison Control Centers from 2000 through 2012. During the study period, 108,446 non-health care facility medication errors involving anticonvulsant pharmaceuticals were reported to US Poison Control Centers, averaging 8342 exposures annually. The annual frequency and rate of errors increased significantly over the study period, by 96.6 and 76.7%, respectively. The rate of exposures resulting in health care facility use increased by 83.3% and the rate of exposures resulting in serious medical outcomes increased by 62.3%. In 2012, newer anticonvulsants, including felbamate, gabapentin, lamotrigine, levetiracetam, other anticonvulsants (excluding barbiturates), other types of gamma aminobutyric acid, oxcarbazepine, topiramate, and zonisamide, accounted for 67.1% of all exposures. The rate of non-health care facility anticonvulsant medication errors reported to Poison Control Centers increased during 2000-2012, resulting in more frequent health care facility use and serious medical outcomes. Newer anticonvulsants, although often considered safer and more easily tolerated, were responsible for much of this trend and should still be administered with caution.
Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R
2014-01-01
We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
Rajagopal, Senthilkumar; Burton, Brittney K; Fields, Blanche L; El, India O; Kamatchi, Ganesan L
2017-05-01
Protein kinase C (PKC) isozymes modulate voltage-gated calcium (Ca v ) currents through Ca v 2.2 and Ca v 2.3 channels by targeting serine/threonine (Ser/Thr) phosphorylation sites of Ca v α 1 subunits. Stimulatory (Thr-422, Ser-2108 and Ser-2132) and inhibitory (Ser-425) sites were identified in the Ca v 2.2α 1 subunits to PKCs βII and ε. In the current study, we investigated if the homologous sites of Ca v 2.3α 1 subunits (stimulatory: Thr-365, Ser-1995 and Ser-2011; inhibitory: Ser-369) behaved in similar manner. Several Ala and Asp mutants were constructed in Ca v 2.3α 1 subunits in such a way that the Ser/Thr sites can be examined in isolation. These mutants or WT Ca v 2.3α 1 along with auxiliary β 1b and α 2 /δ subunits were expressed in Xenopus oocytes and the effects of PKCs βII and ε studied on the barium current (I Ba ). Among these sites, stimulatory Thr-365 and Ser-1995 and inhibitory Ser-369 behaved similar to their homologs in Ca v 2.2α 1 subunits. Furthermore PKCs produced neither stimulation nor inhibition when stimulatory Thr-365 or Ser-1995 and inhibitory Ser-369 were present together. However, the PKCs potentiated the I Ba when two stimulatory sites, Thr-365 and Ser-1995 were present together, thus overcoming the inhibitory effect of Ser-369. Taken together net PKC effect may be the difference between the responses of the stimulatory and inhibitory sites. Copyright © 2017 Elsevier Inc. All rights reserved.
Collier, Mary E. W.; Ettelaie, Camille
2011-01-01
The mechanisms that regulate the incorporation and release of tissue factors (TFs) into cell-derived microparticles are as yet unidentified. In this study, we have explored the regulation of TF release into microparticles by the phosphorylation of serine residues within the cytoplasmic domain of TF. Wild-type and mutant forms of TF, containing alanine and aspartate substitutions at Ser253 and Ser258, were overexpressed in coronary artery and dermal microvascular endothelial cells and microparticle release stimulated with PAR2 agonist peptide (PAR2-AP). The release of TF antigen and activity was then monitored. In addition, the phosphorylation state of the two serine residues within the released microparticles and the cells was monitored for 150 min. The release of wild-type TF as procoagulant microparticles peaked at 90 min and declined thereafter in both cell types. The TF within these microparticles was phosphorylated at Ser253 but not at Ser258. Aspartate substitution of Ser253 resulted in rapid release of TF antigen but not activity, whereas TF release was reduced and delayed by alanine substitution of Ser253 or aspartate substitution of Ser258. Alanine substitution of Ser258 prolonged the release of TF following PAR2-AP activation. The release of TF was concurrent with phosphorylation of Ser253 and was followed by dephosphorylation at 120 min and phosphorylation of Ser258. We propose a sequential mechanism in which the phosphorylation of Ser253 through PAR2 activation results in the incorporation of TF into microparticles, simultaneously inducing Ser258 phosphorylation. Phosphorylation of Ser258 in turn promotes the dephosphorylation of Ser253 and suppresses the release of TF. PMID:21310953
Liang, Xujun; Guo, Chuling; Liao, Changjun; Liu, Shasha; Wick, Lukas Y; Peng, Dan; Yi, Xiaoyun; Lu, Guining; Yin, Hua; Lin, Zhang; Dang, Zhi
2017-06-01
Surfactant-enhanced remediation (SER) is considered as a promising and efficient remediation approach. This review summarizes and discusses main drivers on the application of SER in removing polycyclic aromatic hydrocarbons (PAHs) from contaminated soil and water. The effect of PAH-PAH interactions on SER efficiency is, for the first time, illustrated in an SER review. Interactions between mixed PAHs could enhance, decrease, or have no impact on surfactants' solubilization power towards PAHs, thus affecting the optimal usage of surfactants for SER. Although SER can transfer PAHs from soil/non-aqueous phase liquids to the aqueous phase, the harmful impact of PAHs still exists. To decrease the level of PAHs in SER solutions, a series of SER-based integrated cleanup technologies have been developed including surfactant-enhanced bioremediation (SEBR), surfactant-enhanced phytoremediation (SEPR) and SER-advanced oxidation processes (SER-AOPs). In this review, the general considerations and corresponding applications of the integrated cleanup technologies are summarized and discussed. Compared with SER-AOPs, SEBR and SEPR need less operation cost, yet require more treatment time. To successfully achieve the field application of surfactant-based technologies, massive production of the cost-effective green surfactants (i.e. biosurfactants) and comprehensive evaluation of the drivers and the global cost of SER-based cleanup technologies need to be performed in the future. Copyright © 2017. Published by Elsevier Ltd.
Chivers, Claire E.; Koner, Apurba L.; Lowe, Edward D.; Howarth, Mark
2011-01-01
The interaction between SA (streptavidin) and biotin is one of the strongest non-covalent interactions in Nature. SA is a widely used tool and a paradigm for protein–ligand interactions. We previously developed a SA mutant, termed Tr (traptavidin), possessing a 10-fold lower off-rate for biotin, with increased mechanical and thermal stability. In the present study, we determined the crystal structures of apo-Tr and biotin–Tr at 1.5 Å resolution. In apo-SA the loop (L3/4), near biotin's valeryl tail, is typically disordered and open, but closes upon biotin binding. In contrast, L3/4 was shut in both apo-Tr and biotin–Tr. The reduced flexibility of L3/4 and decreased conformational change on biotin binding provide an explanation for Tr's reduced biotin off- and on-rates. L3/4 includes Ser45, which forms a hydrogen bond to biotin consistently in Tr, but erratically in SA. Reduced breakage of the biotin–Ser45 hydrogen bond in Tr is likely to inhibit the initiating event in biotin's dissociation pathway. We generated a Tr with a single biotin-binding site rather than four, which showed a simi-larly low off-rate, demonstrating that Tr's low off-rate was governed by intrasubunit effects. Understanding the structural features of this tenacious interaction may assist the design of even stronger affinity tags and inhibitors. PMID:21241253
NASA Astrophysics Data System (ADS)
Itoh, Tamitake; Yamamoto, Yuko S.
2017-11-01
Electronic transition rates of a molecule located at a crevasse or a gap of a plasmonic nanoparticle (NP) dimer are largely enhanced up to the factor of around 106 due to electromagnetic (EM) coupling between plasmonic and molecular electronic resonances. The coupling rate is determined by mode density of the EM fields at the crevasse and the oscillator strength of the local electronic resonance of a molecule. The enhancement by EM coupling at a gap of plasmonic NP dimer enables us single molecule (SM) Raman spectroscopy. Recently, this type of research has entered a new regime wherein EM enhancement effects cannot be treated by conventional theorems, namely EM mechanism. Thus, such theorems used for the EM enhancement effect should be re-examined. We here firstly summarize EM mechanism by using surface-enhanced Raman scattering (SERS), which is common in EM enhancement phenomena. Secondly, we focus on recent two our studies on probing SM fluctuation by SERS within the spatial resolution of sub-nanometer scales. Finally, we discuss the necessity of re-examining the EM mechanism with respect to two-fold breakdowns of the weak coupling assumption: the breakdown of Kasha's rule induced by the ultra-fast plasmonic de-excitation and the breakdown of the weak coupling by EM coupling rates exceeding both the plasmonic and molecular excitonic dephasing rates.
Development of a press and drag method for hyperlink selection on smartphones.
Chang, Joonho; Jung, Kihyo
2017-11-01
The present study developed a novel touch method for hyperlink selection on smartphones consisting of two sequential finger interactions: press and drag motions. The novel method requires a user to press a target hyperlink, and if a touch error occurs he/she can immediately correct the touch error by dragging the finger without releasing it in the middle. The method was compared with two existing methods in terms of completion time, error rate, and subjective rating. Forty college students participated in the experiments with different hyperlink sizes (4-pt, 6-pt, 8-pt, and 10-pt) on a touch-screen device. When hyperlink size was small (4-pt and 6-pt), the novel method (time: 826 msec; error: 0.6%) demonstrated better completion time and error rate than the current method (time: 1194 msec; error: 22%). In addition, the novel method (1.15, slightly satisfied, in 7-pt bipolar scale) had significantly higher satisfaction scores than the two existing methods (0.06, neutral). Copyright © 2017 Elsevier Ltd. All rights reserved.
Structural origin of fractional Stokes-Einstein relation in glass-forming liquids
NASA Astrophysics Data System (ADS)
Pan, Shaopeng; Wu, Z. W.; Wang, W. H.; Li, M. Z.; Xu, Limei
2017-01-01
In many glass-forming liquids, fractional Stokes-Einstein relation (SER) is observed above the glass transition temperature. However, the origin of such phenomenon remains elusive. Using molecular dynamics simulations, we investigate the break- down of SER and the onset of fractional SER in a model of metallic glass-forming liquid. We find that SER breaks down when the size of the largest cluster consisting of trapped atoms starts to increase sharply at which the largest cluster spans half of the simulations box along one direction, and the fractional SER starts to follows when the largest cluster percolates the entire system and forms 3-dimentional network structures. Further analysis based on the percolation theory also confirms that percolation occurs at the onset of the fractional SER. Our results directly link the breakdown of the SER with structure inhomogeneity and onset of the fraction SER with percolation of largest clusters, thus provide a possible picture for the break- down of SER and onset of fractional SER in glass-forming liquids, which is is important for the understanding of the dynamic properties in glass-forming liquids.
MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery
2016-04-01
The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication and has the potential of causing harm. Three organizations--American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), American Society of Health-System Pharmacists, and National Advisory Group--have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance to the guidelines and the risk of errors. The purpose of this article is to compare total compliance with ordering, transcription, compounding, administration, and error rate with a large pediatric institution. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft and hard stop recommendations and simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with published literature comparing error, harm rates, and cost reductions to determine if our process showed lower error rates compared with national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors/84,503 PN prescriptions, or 0.27% compared with national data that determined that 74 of 4730 (1.6%) of prescriptions over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process: prescribing, transcription, preparation, and administration. There were no transcription errors, and most (95%) errors occurred during administration. We conclude that PN practices that conferred a meaningful cost reduction and a lower error rate (2.7/1000 PN) than reported in the literature (15.6/1000 PN) were ascribed to the development and implementation of practices that conform to national PN guidelines and recommendations. Electronic ordering and compounding programs eliminated all transcription and related opportunities for errors. © 2015 American Society for Parenteral and Enteral Nutrition.
Surface enhanced Raman gene probe and methods thereof
Vo-Dinh, T.
1998-02-24
The subject invention disclosed is a new gene probe biosensor and methods based on surface enhanced Raman scattering (SERS) label detection. The SER gene probe biosensor comprises a support means, a SER gene probe having at least one oligonucleotide strand labeled with at least one SERS label, and a SERS active substrate disposed on the support means and having at least one of the SER gene probes adsorbed thereon. Biotargets such as bacterial and viral DNA, RNA and PNA are detected using a SER gene probe via hybridization to oligonucleotide strands complementary to the SER gene probe. The support means includes a fiberoptic probe, an array of fiberoptic probes for performance of multiple assays and a waveguide microsensor array with charge-coupled devices or photodiode arrays. 18 figs.
Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina
2017-07-01
A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
High speed and adaptable error correction for megabit/s rate quantum key distribution.
Dixon, A R; Sato, H
2014-12-02
Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.
High speed and adaptable error correction for megabit/s rate quantum key distribution
Dixon, A. R.; Sato, H.
2014-01-01
Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niven, W.A.
The long-term position accuracy of an inertial navigation system depends primarily on the ability of the gyroscopes to maintain a near-perfect reference orientation. Small imperfections in the gyroscopes cause them to drift slowly away from their initial orientation, thereby producing errors in the system's calculations of position. The A3FIX is a computer program subroutine developed to estimate inertial navigation system gyro drift rates with the navigator stopped or moving slowly. It processes data of the navigation system's position error to arrive at estimates of the north- south and vertical gyro drift rates. It also computes changes in the east--west gyromore » drift rate if the navigator is stopped and if data on the system's azimuth error changes are also available. The report describes the subroutine, its capabilities, and gives examples of gyro drift rate estimates that were computed during the testing of a high quality inertial system under the PASSPORT program at the Lawrence Livermore Laboratory. The appendices provide mathematical derivations of the estimation equations that are used in the subroutine, a discussion of the estimation errors, and a program listing and flow diagram. The appendices also contain a derivation of closed form solutions to the navigation equations to clarify the effects that motion and time-varying drift rates induce in the phase-plane relationships between the Schulerfiltered errors in latitude and azimuth snd between the Schulerfiltered errors in latitude and longitude. (auth)« less
ERIC Educational Resources Information Center
Brown, Darine F.; Hartman, Bruce
1980-01-01
Investigated issues associated with stimulating increased return rates to a mail questionnaire among school counselors. Results show that as the number of incentives received increased, the return rates increased in a linear fashion. The incentives did not introduce response error or affect the reliability of the Counselor Function Inventory.…
The Sustained Influence of an Error on Future Decision-Making.
Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel
2017-01-01
Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
del Val, Coral; White, Stephen H.
2014-01-01
We combined systematic bioinformatics analyses and molecular dynamics simulations to assess the conservation patterns of Ser and Thr motifs in membrane proteins, and the effect of such motifs on the structure and dynamics of α-helical transmembrane (TM) segments. We find that Ser/Thr motifs are often present in β-barrel TM proteins. At least one Ser/Thr motif is present in almost half of the sequences of α-helical proteins analyzed here. The extensive bioinformatics analyses and inspection of protein structures led to the identification of molecular transporters with noticeable numbers of Ser/Thr motifs within the TM region. Given the energetic penalty for burying multiple Ser/Thr groups in the membrane hydrophobic core, the observation of transporters with multiple membrane-embedded Ser/Thr is intriguing and raises the question of how the presence of multiple Ser/Thr affects protein local structure and dynamics. Molecular dynamics simulations of four different Ser-containing model TM peptides indicate that backbone hydrogen bonding of membrane-buried Ser/Thr hydroxyl groups can significantly change the local structure and dynamics of the helix. Ser groups located close to the membrane interface can hydrogen bond to solvent water instead of protein backbone, leading to an enhanced local solvation of the peptide. PMID:22836667
Online patient safety education programme for junior doctors: is it worthwhile?
McCarthy, S E; O'Boyle, C A; O'Shaughnessy, A; Walsh, G
2016-02-01
Increasing demand exists for blended approaches to the development of professionalism. Trainees of the Royal College of Physicians of Ireland participated in an online patient safety programme. Study aims were: (1) to determine whether the programme improved junior doctors' knowledge, attitudes and skills relating to error reporting, open communication and care for the second victim and (2) to establish whether the methodology facilitated participants' learning. 208 junior doctors who completed the programme completed a pre-online questionnaire. Measures were "patient safety knowledge and attitudes", "medical safety climate" and "experience of learning". Sixty-two completed the post-questionnaire, representing a 30 % matched response rate. Participating in the programme resulted in immediate (p < 0.01) improvement in skills such as knowing when and how to complete incident forms and disclosing errors to patients, in self-rated knowledge (p < 0.01) and attitudes towards error reporting (p < 0.01). Sixty-three per cent disagreed that doctors routinely report medical errors and 42 % disagreed that doctors routinely share information about medical errors and what caused them. Participants rated interactive features as the most positive elements of the programme. An online training programme on medical error improved self-rated knowledge, attitudes and skills in junior doctors and was deemed an effective learning tool. Perceptions of work issues such as a poor culture of error reporting among doctors may prevent improved attitudes being realised in practice. Online patient safety education has a role in practice-based initiatives aimed at developing professionalism and improving safety.
Decreasing patient identification band errors by standardizing processes.
Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie
2013-04-01
Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012 with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistical decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.
Analysis of limiting information characteristics of quantum-cryptography protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sych, D V; Grishanin, Boris A; Zadkov, Viktor N
2005-01-31
The problem of increasing the critical error rate of quantum-cryptography protocols by varying a set of letters in a quantum alphabet for space of a fixed dimensionality is studied. Quantum alphabets forming regular polyhedra on the Bloch sphere and the continual alphabet equally including all the quantum states are considered. It is shown that, in the absence of basis reconciliation, a protocol with the tetrahedral alphabet has the highest critical error rate among the protocols considered, while after the basis reconciliation, a protocol with the continual alphabet possesses the highest critical error rate. (quantum optics and quantum computation)
History, Epidemic Evolution, and Model Burn-In for a Network of Annual Invasion: Soybean Rust.
Sanatkar, M R; Scoglio, C; Natarajan, B; Isard, S A; Garrett, K A
2015-07-01
Ecological history may be an important driver of epidemics and disease emergence. We evaluated the role of history and two related concepts, the evolution of epidemics and the burn-in period required for fitting a model to epidemic observations, for the U.S. soybean rust epidemic (caused by Phakopsora pachyrhizi). This disease allows evaluation of replicate epidemics because the pathogen reinvades the United States each year. We used a new maximum likelihood estimation approach for fitting the network model based on observed U.S. epidemics. We evaluated the model burn-in period by comparing model fit based on each combination of other years of observation. When the miss error rates were weighted by 0.9 and false alarm error rates by 0.1, the mean error rate did decline, for most years, as more years were used to construct models. Models based on observations in years closer in time to the season being estimated gave lower miss error rates for later epidemic years. The weighted mean error rate was lower in backcasting than in forecasting, reflecting how the epidemic had evolved. Ongoing epidemic evolution, and potential model failure, can occur because of changes in climate, host resistance and spatial patterns, or pathogen evolution.
Shah, Priya; Wyatt, Jeremy C; Makubate, Boikanyo; Cross, Frank W
2011-01-01
Objective Expert authorities recommend clinical decision support systems to reduce prescribing error rates, yet large numbers of insignificant on-screen alerts presented in modal dialog boxes persistently interrupt clinicians, limiting the effectiveness of these systems. This study compared the impact of modal and non-modal electronic (e-) prescribing alerts on prescribing error rates, to help inform the design of clinical decision support systems. Design A randomized study of 24 junior doctors each performing 30 simulated prescribing tasks in random order with a prototype e-prescribing system. Using a within-participant design, doctors were randomized to be shown one of three types of e-prescribing alert (modal, non-modal, no alert) during each prescribing task. Measurements The main outcome measure was prescribing error rate. Structured interviews were performed to elicit participants' preferences for the prescribing alerts and their views on clinical decision support systems. Results Participants exposed to modal alerts were 11.6 times less likely to make a prescribing error than those not shown an alert (OR 11.56, 95% CI 6.00 to 22.26). Those shown a non-modal alert were 3.2 times less likely to make a prescribing error (OR 3.18, 95% CI 1.91 to 5.30) than those not shown an alert. The error rate with non-modal alerts was 3.6 times higher than with modal alerts (95% CI 1.88 to 7.04). Conclusions Both kinds of e-prescribing alerts significantly reduced prescribing error rates, but modal alerts were over three times more effective than non-modal alerts. This study provides new evidence about the relative effects of modal and non-modal alerts on prescribing outcomes. PMID:21836158
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.
2004-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
Impact of geophysical model error for recovering temporal gravity field model
NASA Astrophysics Data System (ADS)
Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang
2016-07-01
The impact of geophysical model error on recovered temporal gravity field models with both real and simulated GRACE observations is assessed in this paper. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies of the annual mass variability amplitudes in six river basins between HUST08a and HUST11a models, HUST04 and HUST05 models are all smaller than 1 cm, which demonstrates that geophysical model error slightly affects the current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for current mission with range rate accuracy of 2.5 × 10- 7 m/s, observation error is the main reason for stripe error. However, when the range rate accuracy improves to 5.0 × 10- 8 m/s in the future mission, geophysical model error will be the main source for stripe error, which will limit the accuracy and spatial resolution of temporal gravity model. Therefore, observation error should be the primary error source taken into account at current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for the future mission.
Wang, Jun Feng; Wu, Xue Zhong; Xiao, Rui; Dong, Pei Tao; Wang, Chao Guang
2014-01-01
A new high-performance surface-enhanced Raman scattering (SERS) substrate with extremely high SERS activity was produced. This SERS substrate combines the advantages of Au film over nanosphere (AuFON) substrate and Ag nanoparticles (AgNPs). A three order enhancement of SERS was observed when Rhodamine 6G (R6G) was used as a probe molecule to compare the SERS effects of the new substrate and commonly used AuFON substrate. These new SERS substrates can detect R6G down to 1 nM. The new substrate was also utilized to detect melamine, and the limit of detection (LOD) is 1 ppb. A linear relationship was also observed between the SERS intensity at Raman peak 682 cm−1 and the logarithm of melamine concentrations ranging from 10 ppm to 1 ppb. This ultrasensitive SERS substrate is a promising tool for detecting trace chemical molecules because of its simple and effective fabrication procedure, high sensitivity and high reproducibility of the SERS effect. PMID:24886913
Au coated PS nanopillars as a highly ordered and reproducible SERS substrate
NASA Astrophysics Data System (ADS)
Kim, Yong-Tae; Schilling, Joerg; Schweizer, Stefan L.; Sauer, Guido; Wehrspohn, Ralf B.
2017-07-01
Noble metal nanostructures with nanometer gap size provide strong surface-enhanced Raman scattering (SERS) which can be used to detect trace amounts of chemical and biological molecules. Although several approaches were reported to obtain active SERS substrates, it still remains a challenge to fabricate SERS substrates with high sensitivity and reproducibility using low-cost techniques. In this article, we report on the fabrication of Au sputtered PS nanopillars based on a template synthetic method as highly ordered and reproducible SERS substrates. The SERS substrates are fabricated by anodic aluminum oxide (AAO) template-assisted infiltration of polystyrene (PS) resulting in hemispherical structures, and a following Au sputtering process. The optimum gap size between adjacent PS nanopillars and thickness of the Au layers for high SERS sensitivity are investigated. Using the Au sputtered PS nanopillars as an active SERS substrate, the Raman signal of 4-methylbenzenethiol (4-MBT) with a concentration down to 10-9 M is identified with good signal reproducibility, showing great potential as promising tool for SERS-based detection.
Wang, Jun Feng; Wu, Xue Zhong; Xiao, Rui; Dong, Pei Tao; Wang, Chao Guang
2014-01-01
A new high-performance surface-enhanced Raman scattering (SERS) substrate with extremely high SERS activity was produced. This SERS substrate combines the advantages of Au film over nanosphere (AuFON) substrate and Ag nanoparticles (AgNPs). A three order enhancement of SERS was observed when Rhodamine 6G (R6G) was used as a probe molecule to compare the SERS effects of the new substrate and commonly used AuFON substrate. These new SERS substrates can detect R6G down to 1 nM. The new substrate was also utilized to detect melamine, and the limit of detection (LOD) is 1 ppb. A linear relationship was also observed between the SERS intensity at Raman peak 682 cm(-1) and the logarithm of melamine concentrations ranging from 10 ppm to 1 ppb. This ultrasensitive SERS substrate is a promising tool for detecting trace chemical molecules because of its simple and effective fabrication procedure, high sensitivity and high reproducibility of the SERS effect.
HPTLC-FLD-SERS as a facile and reliable screening tool: Exemplarily shown with tyramine in cheese.
Wang, Liao; Xu, Xue-Ming; Chen, Yi-Sheng; Ren, Jie; Liu, Yun-Tao
2018-04-01
The serious cytotoxicity of tyramine attracted marked attention as it induced necrosis of human intestinal cells. This paper presented a novel and facile high performance thin-layer chromatography (HPTLC) method tailored for screening tyramine in cheese. Separation was performed on glass backed silica gel plates, using methanol/ethyl acetate/ammonia (6/4/1 v/v/v) as the mobile phase. Special efforts were focused on optimizing conditions (substrate preparation, laser wavelength, salt types and concentrations) of surface enhanced Raman spectroscopy (SERS) measurements directly on plates after derivatization, which enabled molecule-specific identification of targeted bands. In parallel, fluorescent densitometry (FLD) scanning at 380400 nm offered satisfactory quantitative performances (LOD 9 ng/zone, LOQ 17 ng/zone, linearity 0.9996 and %RSD 6.7). Including a quick extraction/cleanup step, the established method was successfully validated with different cheese samples, both qualitatively (straightforward confirmation) and quantitatively (recovery rates from 83.7 to 108.5%). Beyond this application, HPTLC-FLD-SERS provided a new horizon in fast and reliable screening of sophisticated samples like food and herb drugs, striking an excellent balance between specificity, sensitivity and simplicity. Copyright © 2017. Published by Elsevier B.V.
Surface-enhanced Raman spectroscopic monitor of triglyceride hydrolysis in a skin pore phantom
NASA Astrophysics Data System (ADS)
Weldon, Millicent K.; Morris, Michael D.
1999-04-01
Bacterial hydrolysis of triglycerides is followed in a sebum probe phantom by microprobe surface-enhanced Raman scattering (SERS) spectroscopy. The phantom consists of a purpose-built syringe pump operating at physiological flow rates connected to a 300 micron i.d. capillary. We employ silicon substrate SERS microprobes to monitor the hydrolysis products. The silicon support allows some tip flexibility that makes these probes ideal for insertion into small structures. Propionibacterium acnes are immobilized on the inner surface of the capillary. These bacteria hydrolyze the triglycerides in a model sebum emulsion flowing through the capillary. The transformation is followed in vitro as changes in the SERS caused by hydrolysis of triglyceride to fatty acid. The breakdown products consists of a mixture of mono- and diglycerides and their parent long chain fatty acids. The fatty acids adsorb as their carboxylates and can be readily identified by their characteristic spectra. The technique can also confirm the presence of bacteria by detection of short chain carboxylic acids released as products of glucose fermentation during the growth cycle of these cells. Co-adsorption of propionate is observed. Spatial localization of the bacteria is obtained by ex-situ line imaging of the probe.
NASA Astrophysics Data System (ADS)
Hwang, Joonki; Lee, Sangyeop; Choo, Jaebum
2016-06-01
A novel surface-enhanced Raman scattering (SERS)-based lateral flow immunoassay (LFA) biosensor was developed to resolve problems associated with conventional LFA strips (e.g., limits in quantitative analysis and low sensitivity). In our SERS-based biosensor, Raman reporter-labeled hollow gold nanospheres (HGNs) were used as SERS detection probes instead of gold nanoparticles. With the proposed SERS-based LFA strip, the presence of a target antigen can be identified through a colour change in the test zone. Furthermore, highly sensitive quantitative evaluation is possible by measuring SERS signals from the test zone. To verify the feasibility of the SERS-based LFA strip platform, an immunoassay of staphylococcal enterotoxin B (SEB) was performed as a model reaction. The limit of detection (LOD) for SEB, as determined with the SERS-based LFA strip, was estimated to be 0.001 ng mL-1. This value is approximately three orders of magnitude more sensitive than that achieved with the corresponding ELISA-based method. The proposed SERS-based LFA strip sensor shows significant potential for the rapid and sensitive detection of target markers in a simplified manner.A novel surface-enhanced Raman scattering (SERS)-based lateral flow immunoassay (LFA) biosensor was developed to resolve problems associated with conventional LFA strips (e.g., limits in quantitative analysis and low sensitivity). In our SERS-based biosensor, Raman reporter-labeled hollow gold nanospheres (HGNs) were used as SERS detection probes instead of gold nanoparticles. With the proposed SERS-based LFA strip, the presence of a target antigen can be identified through a colour change in the test zone. Furthermore, highly sensitive quantitative evaluation is possible by measuring SERS signals from the test zone. To verify the feasibility of the SERS-based LFA strip platform, an immunoassay of staphylococcal enterotoxin B (SEB) was performed as a model reaction. The limit of detection (LOD) for SEB, as determined with the SERS-based LFA strip, was estimated to be 0.001 ng mL-1. This value is approximately three orders of magnitude more sensitive than that achieved with the corresponding ELISA-based method. The proposed SERS-based LFA strip sensor shows significant potential for the rapid and sensitive detection of target markers in a simplified manner. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr07243c
Makhmoudova, Amina; Williams, Declan; Brewer, Dyanne; Massey, Sarah; Patterson, Jenelle; Silva, Anjali; Vassall, Kenrick A.; Liu, Fushan; Subedi, Sanjeena; Harauz, George; Siu, K. W. Michael; Tetlow, Ian J.; Emes, Michael J.
2014-01-01
Starch branching enzyme IIb (SBEIIb) plays a crucial role in amylopectin biosynthesis in maize endosperm by defining the structural and functional properties of storage starch and is regulated by protein phosphorylation. Native and recombinant maize SBEIIb were used as substrates for amyloplast protein kinases to identify phosphorylation sites on the protein. A multidisciplinary approach involving bioinformatics, site-directed mutagenesis, and mass spectrometry identified three phosphorylation sites at Ser residues: Ser649, Ser286, and Ser297. Two Ca2+-dependent protein kinase activities were partially purified from amyloplasts, termed K1, responsible for Ser649 and Ser286 phosphorylation, and K2, responsible for Ser649 and Ser297 phosphorylation. The Ser286 and Ser297 phosphorylation sites are conserved in all plant branching enzymes and are located at opposite openings of the 8-stranded parallel β-barrel of the active site, which is involved with substrate binding and catalysis. Molecular dynamics simulation analysis indicates that phospho-Ser297 forms a stable salt bridge with Arg665, part of a conserved Cys-containing domain in plant branching enzymes. Ser649 conservation appears confined to the enzyme in cereals and is not universal, and is presumably associated with functions specific to seed storage. The implications of SBEIIb phosphorylation are considered in terms of the role of the enzyme and the importance of starch biosynthesis for yield and biotechnological application. PMID:24550386
Estimation of attitude sensor timetag biases
NASA Technical Reports Server (NTRS)
Sedlak, J.
1995-01-01
This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. Even with bad geometry and constant rates, if gyro biases are independently known, the timetag error for a single sensor can be accurately estimated as long as its boresight is not too close to the spacecraft rotation axis.
Effects of uncertainty and variability on population declines and IUCN Red List classifications.
Rueda-Cediel, Pamela; Anderson, Kurt E; Regan, Tracey J; Regan, Helen M
2018-01-22
The International Union for Conservation of Nature (IUCN) Red List Categories and Criteria is a quantitative framework for classifying species according to extinction risk. Population models may be used to estimate extinction risk or population declines. Uncertainty and variability arise in threat classifications through measurement and process error in empirical data and uncertainty in the models used to estimate extinction risk and population declines. Furthermore, species traits are known to affect extinction risk. We investigated the effects of measurement and process error, model type, population growth rate, and age at first reproduction on the reliability of risk classifications based on projected population declines on IUCN Red List classifications. We used an age-structured population model to simulate true population trajectories with different growth rates, reproductive ages and levels of variation, and subjected them to measurement error. We evaluated the ability of scalar and matrix models parameterized with these simulated time series to accurately capture the IUCN Red List classification generated with true population declines. Under all levels of measurement error tested and low process error, classifications were reasonably accurate; scalar and matrix models yielded roughly the same rate of misclassifications, but the distribution of errors differed; matrix models led to greater overestimation of extinction risk than underestimations; process error tended to contribute to misclassifications to a greater extent than measurement error; and more misclassifications occurred for fast, rather than slow, life histories. These results indicate that classifications of highly threatened taxa (i.e., taxa with low growth rates) under criterion A are more likely to be reliable than for less threatened taxa when assessed with population models. Greater scrutiny needs to be placed on data used to parameterize population models for species with high growth rates, particularly when available evidence indicates a potential transition to higher risk categories. © 2018 Society for Conservation Biology.
Wang, Peng; Bowler, Sarah L; Kantz, Serena F; Mettus, Roberta T; Guo, Yan; McElheny, Christi L; Doi, Yohei
2016-12-01
Treatment options for infections due to carbapenem-resistant Acinetobacter baumannii are extremely limited. Minocycline is a semisynthetic tetracycline derivative with activity against this pathogen. This study compared susceptibility testing methods that are used in clinical microbiology laboratories (Etest, disk diffusion, and Sensititre broth microdilution methods) for testing of minocycline, tigecycline, and doxycycline against 107 carbapenem-resistant A. baumannii clinical isolates. Susceptibility rates determined with the standard broth microdilution method using cation-adjusted Mueller-Hinton (MH) broth were 77.6% for minocycline and 29% for doxycycline, and 92.5% of isolates had tigecycline MICs of ≤2 μg/ml. Using MH agar from BD and Oxoid, susceptibility rates determined with the Etest method were 67.3% and 52.3% for minocycline, 21.5% and 18.7% for doxycycline, and 71% and 29.9% for tigecycline, respectively. With the disk diffusion method using MH agar from BD and Oxoid, susceptibility rates were 82.2% and 72.9% for minocycline and 34.6% and 34.6% for doxycycline, respectively, and rates of MICs of ≤2 μg/ml were 46.7% and 23.4% for tigecycline. In comparison with the standard broth microdilution results, very major rates were low (∼2.8%) for all three drugs across the methods, but major error rates were higher (∼5.6%), especially with the Etest method. For minocycline, minor error rates ranged from 14% to 37.4%. For tigecycline, minor error rates ranged from 6.5% to 69.2%. The majority of minor errors were due to susceptible results being reported as intermediate. For minocycline susceptibility testing of carbapenem-resistant A. baumannii strains, very major errors are rare, but major and minor errors overcalling strains as intermediate or resistant occur frequently with susceptibility testing methods that are feasible in clinical laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Six Common Mistakes in Conservation Priority Setting
Game, Edward T; Kareiva, Peter; Possingham, Hugh P
2013-01-01
Abstract A vast number of prioritization schemes have been developed to help conservation navigate tough decisions about the allocation of finite resources. However, the application of quantitative approaches to setting priorities in conservation frequently includes mistakes that can undermine their authors’ intention to be more rigorous and scientific in the way priorities are established and resources allocated. Drawing on well-established principles of decision science, we highlight 6 mistakes commonly associated with setting priorities for conservation: not acknowledging conservation plans are prioritizations; trying to solve an ill-defined problem; not prioritizing actions; arbitrariness; hidden value judgments; and not acknowledging risk of failure. We explain these mistakes and offer a path to help conservation planners avoid making the same mistakes in future prioritizations. Seis Errores Comunes en la Definición de Prioridades de Conservación Resumen Se ha desarrollado un vasto número de esquemas de priorización para ayudar a que la conservación navegue entre decisiones difíciles en cuanto a la asignación de recursos finitos. Sin embargo, la aplicación de métodos cuantitativos para la definición de prioridades en la conservación frecuentemente incluye errores que pueden socavar la intención de sus autores de ser más rigurosos y científicos en la manera en que se establecen las prioridades y se asignan los recursos. Con base en los bien establecidos principios de la ciencia de la decisión, resaltamos seis errores comúnmente asociados con la definición de prioridades para la conservación: no reconocer que los planes de conservación son priorizaciones; tratar de resolver un problema mal definido; no priorizar acciones; arbitrariedad; juicios de valor ocultos y no reconocer el riesgo de fracasar. Explicamos estos errores y ofrecemos un camino para que planificadores de la conservación no cometan los mismos errores en priorizaciones futuras. PMID:23565990
Liu, Weitang; Bai, Shuang; Jia, Sisi; Guo, Wenlei; Zhang, Lele; Li, Wei; Wang, Jinxin
2017-10-01
Herbicide target-site resistance mutations may cause pleiotropic effects on plant ecology and physiology. The effect of several known (Pro197Ser, Pro197Leu Pro197Ala, and Pro197Glu) target-site resistance mutations of the ALS gene on both ALS functionality and plant vegetative growth of weed Myosoton aquaticum L. (water chickweed) have been investigated here. The enzyme kinetics of ALS from four purified water chickweed populations that each homozygous for the specific target-site resistance-endowing mutations were characterized and the effect of these mutations on plant growth was assessed via relative growth rate (RGR) analysis. Plants homozygous for Pro197Ser and Pro197Leu exhibited higher extractable ALS activity than susceptible (S) plants, while all ALS mutations with no negative change in ALS kinetics. The Pro197Leu mutation increased ALS sensitivity to isoleucine and valine, and Pro197Glu mutation slightly increased ALS sensitivity to isoleucine. RGR results indicated that none of these ALS resistance mutations impose negative pleiotropic effects on relative growth rate. However, resistant (R) seeds had a lowed germination rate than S seeds. This study provides baseline information on ALS functionality and plant growth characteristics associated with ALS inhibitor resistance-endowing mutations in water chickweed. Copyright © 2017. Published by Elsevier Inc.
Chua, S S; Tea, M H; Rahman, M H A
2009-04-01
Drug administration errors were the second most frequent type of medication errors, after prescribing errors but the latter were often intercepted hence, administration errors were more probably to reach the patients. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observations of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for errors were observed and 127 administrations had errors. This gave an error rate of 11.4 % [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered as potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in the developed countries. Incorrect time errors were also the most common type of drug administration errors. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that risk management protocol could be developed and implemented.
NASA Astrophysics Data System (ADS)
Yang, Yong; Li, Zhi-Yuan; Yamaguchi, Kohei; Tanemura, Masaki; Huang, Zhengren; Jiang, Dongliang; Chen, Yuhui; Zhou, Fei; Nogami, Masayuki
2012-03-01
Novel surface-enhanced Raman scattering (SERS) substrates with high SERS-activity are ideal for novel SERS sensors, detectors to detect illicitly sold narcotics and explosives. The key to the wider application of SERS technique is to develop plasmon resonant structure with novel geometries to enhance Raman signals and to control the periodic ordering of these structures over a large area to obtain reproducible Raman enhancement. In this work, a simple Ar+-ion sputtering route has been developed to fabricate silver nanoneedles arrays on silicon substrates for SERS-active substrates to detect trace-level illicitly sold narcotics. These silver nanoneedles possess a very sharp apex with an apex diameter of 15 nm and an apex angle of 20°. The SERS enhancement factor of greater than 1010 was reproducibly achieved by the well-aligned nanoneedles arrays. Furthermore, ketamine hydrochloride molecules, one kind of illicitly sold narcotics, can be detected down to 27 ppb by using our SERS substrate within 3 s, indicating the sensitivity of our SERS substrates for trace amounts of narcotics and that SERS technology can become an important analytical technique in forensic laboratories because it can provide a rapid and nondestructive method for trace detection.
DNA/RNA transverse current sequencing: intrinsic structural noise from neighboring bases
Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.
2015-01-01
Nanopore DNA sequencing via transverse current has emerged as a promising candidate for third-generation sequencing technology. It produces long read lengths which could alleviate problems with assembly errors inherent in current technologies. However, the high error rates of nanopore sequencing have to be addressed. A very important source of the error is the intrinsic noise in the current arising from carrier dispersion along the chain of the molecule, i.e., from the influence of neighboring bases. In this work we perform calculations of the transverse current within an effective multi-orbital tight-binding model derived from first-principles calculations of the DNA/RNA molecules, to study the effect of this structural noise on the error rates in DNA/RNA sequencing via transverse current in nanopores. We demonstrate that a statistical technique, utilizing not only the currents through the nucleotides but also the correlations in the currents, can in principle reduce the error rate below any desired precision. PMID:26150827
Toward a new culture in verified quantum operations
NASA Astrophysics Data System (ADS)
Flammia, Steve
Measuring error rates of quantum operations has become an indispensable component in any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grows. However, well-meaning scientists that report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only get exacerbated as our typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey existing challenges presented by the current paradigm and offer some suggestions for solutions than can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and towards a culture that values full disclose of methodologies and higher standards for data analysis.
D-Serine Metabolism and Its Importance in Development of Dictyostelium discoideum
Ito, Tomokazu; Hamauchi, Natsuki; Hagi, Taisuke; Morohashi, Naoya; Hemmi, Hisashi; Sato, Yukie G.; Saito, Tamao; Yoshimura, Tohru
2018-01-01
In mammals, D-Ser is synthesized by serine racemase (SR) and degraded by D-amino acid oxidase (DAO). D-Ser acts as an endogenous ligand for N-methyl-D-aspartate (NMDA)- and δ2 glutamate receptors, and is involved in brain functions such as learning and memory. Although SR homologs are highly conserved in eukaryotes, little is known about the significance of D-Ser in non-mammals. In contrast to mammals, the slime mold Dictyostelium discoideum genome encodes SR, DAO, and additionally D-Ser specific degradation enzyme D-Ser dehydratase (DSD), but not NMDA- and δ2 glutamate receptors. Here, we studied the significances of D-Ser and DSD in D. discoideum. Enzymatic assays demonstrated that DSD is 460- and 1,700-fold more active than DAO and SR, respectively, in degrading D-Ser. Moreover, in dsd-null cells D-Ser degradation activity is completely abolished. In fact, while in wild-type D. discoideum intracellular D-Ser levels were considerably low, dsd-null cells accumulated D-Ser. These results indicated that DSD but not DAO is the primary enzyme responsible for D-Ser decomposition in D. discoideum. We found that dsd-null cells exhibit delay in development and arrest at the early culmination stage. The efficiency of spore formation was considerably reduced in the mutant cells. These phenotypes were further pronounced by exogenous D-Ser but rescued by plasmid-borne expression of dsd. qRT-PCR analysis demonstrated that mRNA expression of key genes in the cAMP signaling relay is perturbed in the dsd knockout. Our data indicate novel roles for D-Ser and/or DSD in the regulation of cAMP signaling in the development processes of D. discoideum. PMID:29740415
Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.
ERIC Educational Resources Information Center
Reddon, John R.; And Others
1985-01-01
Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)