NASA Technical Reports Server (NTRS)
Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.
2006-01-01
Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers in extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of a spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. Principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses; therefore they provide a mathematical framework for spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for the east, north, and vertical components, which implies a very long wavelength source for the common mode errors compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
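The stacking-style filtering described above can be sketched with a rank-1 principal component removal: treat the leading mode of the stations-by-epochs residual matrix as the common mode error and subtract it. This is a minimal illustration, not the paper's full PCA/KLE machinery; the network size, noise levels, and random seed are assumed.

```python
import numpy as np

def pca_filter(residuals, n_modes=1):
    """Remove the leading principal component(s), treated as common
    mode error, from a (stations x epochs) residual matrix."""
    X = residuals - residuals.mean(axis=1, keepdims=True)  # center each station
    # SVD: rows of Vt are temporally varying modes, columns of U their spatial responses
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    cme = U[:, :n_modes] * s[:n_modes] @ Vt[:n_modes]      # rank-n common mode estimate
    return X - cme

# Synthetic network: one shared daily error plus small station-specific noise
rng = np.random.default_rng(0)
common = rng.normal(0, 5.0, 500)            # common mode error, mm
noise = rng.normal(0, 0.5, (10, 500))       # local noise at 10 stations, mm
series = common + noise                     # common signal broadcast to all stations
filtered = pca_filter(series)
print(series.std(), filtered.std())         # scatter drops sharply after filtering
```

Because the common signal dominates, the first mode captures it with a nearly uniform spatial response, mirroring the near-uniform responses the paper reports for SCIGN.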
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability applicable to solving very large highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code which employs behavioral decomposition and complex fault-error handling models. This new capability is called MC-HARP which efficiently solves reliability models with non-constant failures rates (Weibull). Common mode failure modeling is also a specialty.
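The basic sampling that MC-HARP accelerates can be sketched as a plain Monte Carlo reliability estimate with Weibull (non-constant hazard) failure times for a series system. The variance reduction techniques and fault-error handling models of HARP are beyond this sketch; component parameters are assumed for illustration.

```python
import math
import random

def mc_reliability(t, components, trials=100_000, seed=1):
    """Monte Carlo estimate of P(series system survives past t):
    sample a Weibull failure time for every component and fail the
    system at the earliest one (series logic, no repair)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        first_failure = min(rng.weibullvariate(scale, shape)
                            for scale, shape in components)
        if first_failure > t:
            survived += 1
    return survived / trials

# Two components with wear-out behaviour (shape > 1 means increasing hazard rate)
comps = [(1000.0, 1.5), (2000.0, 1.5)]
r = mc_reliability(200.0, comps)
# Analytic series-system check: R(t) = exp(-(t/s1)^b1 - (t/s2)^b2)
r_exact = math.exp(-(200 / 1000) ** 1.5 - (200 / 2000) ** 1.5)
print(r, r_exact)
```

For highly reliable systems the survival probability is close to 1, which is exactly why plain sampling becomes inefficient and variance reduction is needed.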
Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates
NASA Astrophysics Data System (ADS)
Liu, Bin; King, Matt; Dai, Wujiao
2018-05-01
Spatially correlated common mode error always exists in regional, or larger, GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses. We assume the components with common spatial responses are common mode error (CME). An average reduction of ˜40% in the RMS values was achieved with both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features. The ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE) analysis, with a ˜55% reduction of the velocity uncertainties after ICA filtering. Meanwhile, spatiotemporal filtering reduces the amplitude of the PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement between the GPS-observed velocities and four GIA models is generally improved after the spatiotemporal filtering, with a mean reduction of ˜0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of the various GIA model predictions.
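The WRMS metric used above to compare GPS velocities with GIA models weights each station's model-minus-observation residual by its inverse variance. A minimal helper, with hypothetical residuals and uncertainties (the values below are illustrative, not the paper's data):

```python
import math

def wrms(residuals, sigmas):
    """Weighted RMS of model-minus-observation residuals,
    weighting each station by 1/sigma^2."""
    w = [1.0 / s ** 2 for s in sigmas]
    num = sum(wi * r ** 2 for wi, r in zip(w, residuals))
    return math.sqrt(num / sum(w))

# Hypothetical GPS-minus-GIA velocity residuals (mm/yr) and their uncertainties
before = [2.1, -1.4, 3.0, -2.2]
after  = [1.2, -0.6, 1.9, -1.1]   # residuals after spatiotemporal filtering
sig    = [0.8, 0.5, 1.1, 0.7]
print(wrms(before, sig), wrms(after, sig))  # filtering lowers the WRMS
```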
Interferometer for Measuring Displacement to Within 20 pm
NASA Technical Reports Server (NTRS)
Zhao, Feng
2003-01-01
An optical heterodyne interferometer that can be used to measure linear displacements with an error <=20 pm has been developed. The remarkable accuracy of this interferometer is achieved through a design that includes (1) a wavefront split that reduces (relative to amplitude splits used in other interferometers) self-interference and (2) a common-optical-path configuration that affords common-mode cancellation of the interference effects of thermal-expansion changes in optical-path lengths. The most popular method of displacement-measuring interferometry involves two beams, the polarizations of which are meant to be kept orthogonal upstream of the final interference location, where the difference between the phases of the two beams is measured. Polarization leakages (deviations from the desired perfect orthogonality) contaminate the phase measurement with periodic nonlinear errors. In commercial interferometers, these phase-measurement errors result in displacement errors in the approximate range of 1 to 10 nm. Moreover, because prior interferometers lack compensation for thermal-expansion changes in optical-path lengths, they are subject to additional displacement errors characterized by a temperature sensitivity of about 100 nm/K. Because the present interferometer does not utilize polarization in the separation and combination of the two interfering beams, and because of the common-mode cancellation of thermal-expansion effects, the periodic nonlinear errors and the sensitivity to temperature changes are much smaller than in other interferometers.
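The nanometre-scale error quoted above for polarization leakage can be reproduced with a first-order model: a small leakage amplitude adds a phase ripple of roughly leakage*sin(phi), which maps to displacement through the fringe-to-length conversion. The wavelength, double-pass convention, and 1% leakage figure are assumptions for illustration.

```python
import math

LAMBDA = 1.55e-6  # assumed laser wavelength, m

def phase_to_displacement(phi):
    """Double-pass convention: one fringe (2*pi of phase)
    corresponds to lambda/2 of mirror motion."""
    return phi / (2 * math.pi) * (LAMBDA / 2)

def leakage_phase_error(phi, leakage=0.01):
    """First-order periodic nonlinearity from a small polarization
    leakage amplitude: delta_phi ~ leakage * sin(phi)."""
    return leakage * math.sin(phi)

# Peak displacement error produced by 1% polarization leakage
peak = phase_to_displacement(leakage_phase_error(math.pi / 2))
print(peak)  # on the order of a nanometre
```

A 1% leakage already lands in the 1-10 nm band cited for commercial instruments, which is why the leakage-free, common-path design matters at the 20 pm level.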
An experimental system for the study of active vibration control - Development and modeling
NASA Astrophysics Data System (ADS)
Batta, George R.; Chen, Anning
A modular rotational vibration system designed to facilitate the study of active control of vibrating systems is discussed. The model error associated with four common types of identification problems has been studied. The general multiplicative uncertainty shape for a vibration system is small at low frequencies and large at high frequencies. The frequency-domain error function has sharp peaks near the frequency of each mode. The inability to identify a high-frequency mode causes an increase of uncertainties at all frequencies. Missing a low-frequency mode causes the uncertainties to be much larger at all frequencies than missing a high-frequency mode. Hysteresis causes a small increase of uncertainty at low frequencies, but its overall effect is relatively small.
49 CFR Appendix C to Part 236 - Safety Assurance Criteria and Processes
Code of Federal Regulations, 2010 CFR
2010-10-01
... system (all its elements including hardware and software) must be designed to assure safe operation with... unsafe errors in the software due to human error in the software specification, design, or coding phases... (hardware or software, or both) are used in combination to ensure safety. If a common mode failure exists...
Effects of Heavy Ion Exposure on Nanocrystal Nonvolatile Memory
NASA Technical Reports Server (NTRS)
Oldham, Timothy R.; Suhail, Mohammed; Kuhn, Peter; Prinz, Erwin; Kim, Hak; LaBel, Kenneth A.
2004-01-01
We have irradiated engineering samples of Freescale 4M nonvolatile memories with heavy ions. They use silicon nanocrystals as the storage element, rather than the more common floating gate. The irradiations were performed using the Texas A&M University cyclotron Single Event Effects Test Facility. The chips were tested in the static mode, and in the dynamic read mode, dynamic write (program) mode, and dynamic erase mode. All the errors observed appeared to be due to single, isolated bits, even in the program and erase modes. These errors appeared to be related to the micro-dose mechanism. All the errors corresponded to the loss of electrons from a programmed cell. The underlying physical mechanisms will be discussed in more detail later. There were no errors that could be attributed to malfunctions of the control circuits. At the highest LET used in the test (85 MeV·cm²/mg), however, there appeared to be a failure due to gate rupture. Failure analysis is being conducted to confirm this conclusion. There was no unambiguous evidence of latchup under any test conditions. Generally, the results on the nanocrystal technology compare favorably with results on currently available commercial floating gate technology, indicating that the technology is promising for future space applications, both civilian and military.
Dehghan, Ashraf; Abumasoudi, Rouhollah Sheikh; Ehsanpour, Soheila
2016-01-01
Infertility and errors in the process of its treatment have a negative impact on infertile couples. The present study aimed to identify and assess the common errors in the reception process by applying the approach of "failure modes and effects analysis" (FMEA). In this descriptive cross-sectional study, the admission process of the fertility and infertility center of Isfahan was selected for evaluation of its errors based on the team members' decision. At first, the admission process was charted through observations and interviews with employees, holding multiple panels, and using the FMEA worksheet, which has been used in many studies worldwide, including in Iran. Its validity was evaluated through content and face validity, and its reliability through review and confirmation of the obtained information by the FMEA team; eventually, possible errors, their causes, and three indicators (severity of effect, probability of occurrence, and probability of detection) were determined and corrective actions were proposed. Data analysis was based on the risk priority number (RPN), which is calculated by multiplying the severity of effect, probability of occurrence, and probability of detection. Twenty-five errors with RPN ≥ 125 were detected in the admission process, of which six had high priority in terms of severity and occurrence probability and were identified as high-risk errors. The team-oriented method of FMEA could be useful for the assessment of errors and also for reducing the occurrence probability of errors.
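The RPN calculation behind this kind of FMEA screen is simple enough to sketch directly; the failure-mode names and scores below are hypothetical, and the ≥ 125 cutoff follows the abstract.

```python
def rpn(severity, occurrence, detection):
    """Risk priority number: each factor is typically scored 1-10,
    so RPN ranges from 1 to 1000."""
    return severity * occurrence * detection

# Hypothetical failure modes from an admission-process worksheet
failure_modes = [
    ("wrong patient file retrieved", 8, 4, 5),
    ("illegible referral form", 5, 6, 3),
    ("missed insurance check", 4, 3, 2),
]
high_risk = [(name, rpn(s, o, d)) for name, s, o, d in failure_modes
             if rpn(s, o, d) >= 125]
print(high_risk)  # only modes at or above the RPN threshold remain
```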
Software fault-tolerance by design diversity DEDIX: A tool for experiments
NASA Technical Reports Server (NTRS)
Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.
1986-01-01
The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.
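The cross-check-and-mask idea can be sketched as a majority vote at a decision point. This is only the exact-match core; DEDIX's actual decision algorithm also has to handle approximate agreement and missing results.

```python
from collections import Counter

def vote(results):
    """Majority-vote decision over N independently produced results.
    A minority fault is masked; if no majority exists, the versions
    may share a common mode error and the system must signal failure."""
    value, count = Counter(results).most_common(1)[0]
    if count > len(results) // 2:
        return value          # majority agrees: mask the minority fault
    raise RuntimeError("no majority: common mode failure suspected")

# Three versions cross-checked at a prespecified point; one is faulty
print(vote([42, 42, 41]))  # -> 42
```

The vote only masks faults that are uncorrelated across versions; if independent teams make the same specification-induced mistake, all versions agree on the wrong answer, which is exactly the common mode failure scenario the testbed was built to study.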
How Prediction Errors Shape Perception, Attention, and Motivation
den Ouden, Hanneke E. M.; Kok, Peter; de Lange, Floris P.
2012-01-01
Prediction errors (PE) are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we will make an attempt to see the forest for the trees and consider the commonalities and differences of reported PE signals in light of recent suggestions that the computation of PE forms a fundamental mode of brain function. We discuss where different types of PE are encoded, how they are generated, and the different functional roles they fulfill. We suggest that while encoding of PE is a common computation across brain regions, the content and function of these error signals can be very different and are determined by the afferent and efferent connections within the neural circuitry in which they arise. PMID:23248610
Gated integrator with signal baseline subtraction
Wang, X.
1996-12-17
An ultrafast, high precision gated integrator includes an opamp having differential inputs. A signal to be integrated is applied to one of the differential inputs through a first input network, and a signal indicative of the DC offset component of the signal to be integrated is applied to the other of the differential inputs through a second input network. A pair of electronic switches in the first and second input networks define an integrating period when they are closed. The first and second input networks are substantially symmetrically constructed of matched components so that error components introduced by the electronic switches appear symmetrically in both input circuits and, hence, are nullified by the common mode rejection of the integrating opamp. The signal indicative of the DC offset component is provided by a sample and hold circuit actuated as the integrating period begins. The symmetrical configuration of the integrating circuit improves accuracy and speed by balancing out common mode errors, by permitting the use of high speed switching elements and high speed opamps and by permitting the use of a small integrating time constant. The sample and hold circuit substantially eliminates the error caused by the input signal baseline offset during a single integrating window. 5 figs.
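Why the symmetric input networks help can be shown with a textbook differential-stage model: any error that appears identically on both inputs is attenuated by the common mode rejection ratio. The gain and CMRR values are assumed for illustration.

```python
def diff_amp_output(v_plus, v_minus, a_diff=1.0, cmrr_db=100.0):
    """Differential stage: out = Ad*(v+ - v-) + Acm*(v+ + v-)/2,
    where the common mode gain is Acm = Ad / CMRR."""
    a_cm = a_diff / (10 ** (cmrr_db / 20))
    return a_diff * (v_plus - v_minus) + a_cm * (v_plus + v_minus) / 2

signal, baseline_offset, switch_glitch = 0.2, 0.5, 0.03
# Matched input networks: the switch glitch and DC baseline appear on BOTH
# inputs, so only the wanted signal survives as a differential term
out = diff_amp_output(signal + baseline_offset + switch_glitch,
                      baseline_offset + switch_glitch)
print(out)  # close to the 0.2 V signal; common terms are rejected
```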
Common-Path Interferometric Wavefront Sensing for Space Telescopes
NASA Technical Reports Server (NTRS)
Wallace, James Kent
2011-01-01
This paper presents an optical configuration for a common-path phase-shifting interferometric wavefront sensor. This sensor has a host of attractive features which make it well suited for space-based adaptive optics. First, it is strictly reflective and therefore operates broadband; second, it is common mode and therefore does not suffer from systematic errors (like vibration) that are typical of other interferometers; third, it is a phase-shifting interferometer and therefore benefits from both the sensitivity of interferometric sensors and the noise rejection afforded by synchronous detection. Unlike the Shack-Hartmann wavefront sensor, it has nearly uniform sensitivity to all pupil modes. Optical configuration, theory and simulations for such a system will be discussed along with predicted performance.
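The synchronous-detection benefit of phase shifting can be illustrated with the classic four-bucket estimator, a minimal stand-in for whatever phase-retrieval step the actual sensor uses: record four frames a quarter-fringe apart and recover the phase from two differences.

```python
import math

def four_step_phase(i0, i90, i180, i270):
    """Classic four-bucket estimator: with frame intensities
    I_k = A + B*cos(phi + k) for k = 0, 90, 180, 270 degrees,
    phi = atan2(I270 - I90, I0 - I180). The DC level A and fringe
    contrast B cancel out of the ratio."""
    return math.atan2(i270 - i90, i0 - i180)

# Synthesize the four frames for a known phase and recover it
A, B, phi = 2.0, 0.7, 0.9
frames = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(four_step_phase(*frames))  # -> 0.9 (recovered phase, rad)
```

Because A and B drop out, slow intensity drifts common to all four frames do not bias the phase, a small-scale analogue of the common-mode rejection claimed for the sensor.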
NASA Astrophysics Data System (ADS)
Chen, Jingliang; Su, Jun; Kochan, Orest; Levkiv, Mariana
2018-04-01
The simplified metrological software test (MST) for modeling the method of determining the thermocouple (TC) error in situ during operation is considered in the paper. The interaction between the proposed MST and a temperature measuring system is also examined in order to study the error of determining the TC error in situ during operation. Modeling studies of the influence of the random error of the temperature measuring system, as well as of the interference magnitude (both common mode and normal mode noise), on the error of determining the TC error in situ have been carried out using the proposed MST. Noise and interference on the order of 5-6 μV cause an error of about 0.2-0.3 °C. It is shown that high noise immunity is essential for accurate temperature measurements using TCs.
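The quoted microvolt-to-degree figures follow from the thermocouple sensitivity: dividing the noise voltage by the Seebeck coefficient gives the equivalent temperature error. The ~22 μV/°C sensitivity below is an assumed value chosen to match the abstract's numbers; the real coefficient depends on the TC type and operating temperature.

```python
def noise_to_temp_error(v_noise_uV, seebeck_uV_per_C=22.0):
    """Temperature error equivalent to a given noise/interference
    voltage, assuming a constant thermocouple sensitivity."""
    return v_noise_uV / seebeck_uV_per_C

for v_uV in (5.0, 6.0):
    print(v_uV, "uV ->", round(noise_to_temp_error(v_uV), 2), "C")
```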
The cerebellum for jocks and nerds alike.
Popa, Laurentiu S; Hewitt, Angela L; Ebner, Timothy J
2014-01-01
Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. 
The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains.
Lago, Paola; Bizzarri, Giancarlo; Scalzotto, Francesca; Parpaiola, Antonella; Amigoni, Angela; Putoto, Giovanni; Perilongo, Giorgio
2012-01-01
Objective Administering medication to hospitalised infants and children is a complex process at high risk of error. Failure mode and effect analysis (FMEA) is a proactive tool used to analyse risks, identify failures before they happen and prioritise remedial measures. To examine the hazards associated with the process of drug delivery to children, we performed a proactive risk-assessment analysis. Design and setting Five multidisciplinary teams, representing different divisions of the paediatric department at Padua University Hospital, were trained to analyse the drug-delivery process, to identify possible causes of failures and their potential effects, to calculate a risk priority number (RPN) for each failure and to plan changes in practices. Primary outcome To identify higher-priority potential failure modes as defined by RPNs and to plan changes in clinical practice to reduce the risk of patient harm and improve safety in the process of medication use in children. Results In all, 37 higher-priority potential failure modes and 71 associated causes and effects were identified. The highest RPNs (>48) related mainly to errors in calculating drug doses and concentrations. Many of these failure modes were found in all five units, suggesting the presence of common targets for improvement, particularly in enhancing the safety of prescription and preparation of intravenous drugs. The introduction of new activities in the revised process of administering drugs reduced the high-risk failure modes by 60%. Conclusions FMEA is an effective proactive risk-assessment tool, useful to aid multidisciplinary groups in understanding a care process, identifying errors that may occur, prioritising remedial interventions and possibly enhancing the safety of drug delivery in children. PMID:23253870
Locked-mode avoidance and recovery without external momentum input
NASA Astrophysics Data System (ADS)
Delgado-Aparicio, L.; Gates, D. A.; Wolfe, S.; Rice, J. E.; Gao, C.; Wukitch, S.; Greenwald, M.; Hughes, J.; Marmar, E.; Scott, S.
2014-10-01
Error-field-induced locked-modes (LMs) have been studied in C-Mod at ITER toroidal fields without NBI fueling and momentum input. The use of ICRH heating in sync with the error-field ramp-up resulted in a successful delay of the mode onset when PICRH > 1 MW and a transition into H-mode when PICRH > 2 MW. The recovery experiments consisted of applying ICRH power during the LM non-rotating phase, successfully unlocking the core plasma. The ``induced'' toroidal rotation was in the counter-current direction, restoring the direction and magnitude of the toroidal flow before the LM formation, but contrary to the expected Rice scaling in the co-current direction. However, the LM occurs near the LOC/SOC transition, where rotation reversals are commonly observed. Once PICRH is turned off, the core plasma ``locks'' at later times depending on the evolution of ne and Vt. This work was performed under US DoE contracts including DE-FC02-99ER54512 and others at MIT and DE-AC02-09CH11466 at PPPL.
NASA Astrophysics Data System (ADS)
Gruszczynska, Marta; Rosat, Severine; Klos, Anna; Gruszczynski, Maciej; Bogusz, Janusz
2018-03-01
We described a spatio-temporal analysis of environmental loading models: atmospheric, continental hydrology, and non-tidal ocean changes, based on multichannel singular spectrum analysis (MSSA). We extracted the common annual signal for 16 different sections related to climate zones: equatorial, arid, warm, snow, polar and continents. We used the loading models estimated for a set of 229 ITRF2014 (International Terrestrial Reference Frame) International GNSS Service (IGS) stations and discussed the amount of variance explained by individual modes, proving that the common annual signal accounts for 16, 24 and 68% of the total variance of non-tidal ocean, atmospheric and hydrological loading models, respectively. Having removed the common environmental MSSA seasonal curve from the corresponding GPS position time series, we found that the residual station-specific annual curve modelled with the least-squares estimation has the amplitude of maximum 2 mm. This means that the environmental loading models underestimate the seasonalities observed by the GPS system. The remaining signal present in the seasonal frequency band arises from the systematic errors which are not of common environmental or geophysical origin. Using common mode error (CME) estimates, we showed that the direct removal of environmental loading models from the GPS series causes an artificial loss in the CME power spectra between 10 and 80 cycles per year. When environmental effect is removed from GPS series with MSSA curves, no influence on the character of spectra of CME estimates was noticed.
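The residual annual curve quoted above (maximum ~2 mm amplitude) comes from a least-squares fit of a seasonal sinusoid to each station's series. MSSA itself is much heavier machinery, but the annual-term fit can be sketched with two-parameter normal equations; the series below is synthetic.

```python
import math

def fit_annual(t_years, y):
    """Least-squares fit of y ~ a*sin(2*pi*t) + b*cos(2*pi*t), t in
    years (mean assumed removed); returns amplitude and phase."""
    s = [math.sin(2 * math.pi * t) for t in t_years]
    c = [math.cos(2 * math.pi * t) for t in t_years]
    sss = sum(v * v for v in s); scc = sum(v * v for v in c)
    ssc = sum(u * v for u, v in zip(s, c))
    sy = sum(u * v for u, v in zip(s, y)); cy = sum(u * v for u, v in zip(c, y))
    det = sss * scc - ssc * ssc
    a = (sy * scc - cy * ssc) / det
    b = (cy * sss - sy * ssc) / det
    return math.hypot(a, b), math.atan2(b, a)

# Two years of daily positions carrying a 2 mm annual signal
t = [d / 365.25 for d in range(730)]
y = [2.0 * math.sin(2 * math.pi * ti + 0.4) for ti in t]
amp, ph = fit_annual(t, y)
print(amp, ph)  # amplitude 2.0 mm, phase 0.4 rad recovered
```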
Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices
NASA Astrophysics Data System (ADS)
Ma, Bao-Feng; Jiang, Hong-Gang
2018-06-01
Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIVs have been developed. It has long been recognized that there exist perspective errors in velocity fields when employing the 2D2C PIV to measure three-dimensional (3D) flows, the magnitude of which depends on out-of-plane velocity and geometric layouts of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields, instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on gradients of the out-of-plane velocity along a measurement plane, instead of the out-of-plane velocity itself. More importantly, an estimation approach to the perspective error in 3D vortex measurements was proposed based on a theoretical vortex model and an analysis on physical characteristics of the vortices, in which the gradient of out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is imposed with the upper limits, the perspective error will only rely on the geometric layouts of PIV that are known in practical measurements. Using this approach, the upper limits of perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can be all zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition on an oscillatory vortex indicates that the perspective errors of each DMD mode are also only dependent on the gradient of out-of-plane velocity if the modes are represented by vorticity.
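The geometric dependence described above can be sketched with the standard pinhole-projection estimate: the apparent in-plane velocity error is the out-of-plane velocity scaled by the off-axis position over the camera distance. The swirl velocity, the 0.5 out-of-plane ratio, and the layout numbers below are assumptions for illustration.

```python
def perspective_error(w, x_off, z_dist):
    """In-plane velocity error induced by out-of-plane velocity w at a
    point x_off from the optical axis, with the camera at distance
    z_dist (standard pinhole-projection estimate for 2D2C PIV)."""
    return w * x_off / z_dist

# Assume the out-of-plane/swirl velocity ratio is at its upper limit of 0.5
v_swirl = 10.0                  # m/s, maximum swirling velocity
w_max = 0.5 * v_swirl           # bound on out-of-plane velocity
err = perspective_error(w_max, x_off=0.1, z_dist=1.0)
print(err, err / v_swirl)       # -> 0.5 m/s, i.e. 5% of the swirl velocity
```

With the velocity ratio capped, the error bound depends only on the geometric layout (x_off/z_dist), which is exactly what makes the estimate usable before a measurement is taken.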
Measurements of the toroidal torque balance of error field penetration locked modes
Shiraki, Daisuke; Paz-Soldan, Carlos; Hanson, Jeremy M.; ...
2015-01-05
Here, detailed measurements from the DIII-D tokamak of the toroidal dynamics of error field penetration locked modes under the influence of slowly evolving external fields enable study of the toroidal torques on the mode, including interaction with the intrinsic error field. The error field in these low density Ohmic discharges is well known based on the mode penetration threshold, allowing resonant and non-resonant torque effects to be distinguished. These m/n = 2/1 locked modes are found to be well described by a toroidal torque balance between the resonant interaction with n = 1 error fields and a viscous torque in the electron diamagnetic drift direction, which is observed to scale as the square of the perturbed field due to the island. Fitting to this empirical torque balance allows a time-resolved measurement of the intrinsic error field of the device, providing evidence for a time-dependent error field in DIII-D due to ramping of the Ohmic coil current.
Coal gasification system with a modulated on/off control system
Fasching, George E.
1984-01-01
A modulated control system is provided for improving regulation of the bed level in a fixed-bed coal gasifier into which coal is fed from a rotary coal feeder. A nuclear bed level gauge using a cobalt source and an ion chamber detector is used to detect the coal bed level in the gasifier. The detector signal is compared to a bed level set point signal in a primary controller which operates in proportional/integral modes to produce an error signal. The error signal is modulated by the injection of a triangular wave signal of a frequency of about 0.0004 Hz and an amplitude of about 80% of the primary deadband. The modulated error signal is fed to a triple-deadband secondary controller which jogs the coal feeder speed up or down by on/off control of a feeder speed change driver such that the gasifier bed level is driven toward the set point while preventing excessive cycling (oscillation) common in on/off mode automatic controllers of this type. Regulation of the bed level is achieved without excessive feeder speed control jogging.
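A minimal sketch of the modulation idea, assuming a symmetric triangle wave and a single deadband stage (the patent uses a triple-deadband secondary controller; all parameter names and values here are illustrative):

```python
# Sketch of the modulated on/off control described: the PI error signal is
# dithered with a slow triangular wave (~0.0004 Hz, amplitude ~80% of the
# primary deadband) before an on/off stage jogs the feeder speed, which
# spreads the switching in time and limits cycling.
def triangle_wave(t, freq=0.0004, amplitude=1.0):
    """Symmetric triangle wave in [-amplitude, +amplitude]."""
    phase = (t * freq) % 1.0
    return amplitude * (4.0 * abs(phase - 0.5) - 1.0)

def jog_command(error, t, deadband=1.0):
    """Return +1 (jog feeder up), -1 (jog down), or 0 (hold)."""
    modulated = error + triangle_wave(t, amplitude=0.8 * deadband)
    if modulated > deadband:
        return +1
    if modulated < -deadband:
        return -1
    return 0
```

Because the dither amplitude stays below the deadband, a zero error never triggers a jog; only a sustained offset pushes the modulated signal over the threshold, and then only for part of each triangle period.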
Measurement Error Calibration in Mixed-Mode Sample Surveys
ERIC Educational Resources Information Center
Buelens, Bart; van den Brakel, Jan A.
2015-01-01
Mixed-mode surveys are known to be susceptible to mode-dependent selection and measurement effects, collectively referred to as mode effects. The use of different data collection modes within the same survey may reduce selectivity of the overall response but is characterized by measurement errors differing across modes. Inference in sample surveys…
Safety Strategies in an Academic Radiation Oncology Department and Recommendations for Action
Terezakis, Stephanie A.; Pronovost, Peter; Harris, Kendra; DeWeese, Theodore; Ford, Eric
2013-01-01
Background: Safety initiatives in the United States continue to work on providing guidance as to how the average practitioner might make patients safer in the face of the complex process by which radiation therapy (RT), an essential treatment used in the management of many patients with cancer, is prepared and delivered. Quality control measures can uncover certain specific errors such as machine dose mis-calibration or misalignment of the patient in the radiation treatment beam. However, they are less effective at uncovering less common errors that can occur anywhere along the treatment planning and delivery process, and even when the process is functioning as intended, errors still occur. Prioritizing Risks and Implementing Risk-Reduction Strategies: Activities undertaken at the radiation oncology department at the Johns Hopkins Hospital (Baltimore) include Failure Mode and Effects Analysis (FMEA), risk-reduction interventions, and voluntary error and near-miss reporting systems. A visual process map portrayed 269 RT steps occurring among four subprocesses: consult, simulation, treatment planning, and treatment delivery. Two FMEAs revealed 127 and 159 possible failure modes, respectively. Risk-reduction interventions for 15 "top-ranked" failure modes were implemented. Since the error and near-miss reporting system's implementation in the department in 2007, 253 events have been logged. However, the system may be insufficient for radiation oncology, for which a greater level of practice-specific information is required to fully understand each event. Conclusions: The "basic science" of radiation treatment has received considerable support and attention in developing novel therapies to benefit patients. The time has come to apply the same focus and resources to ensuring that patients safely receive the maximal benefits possible. PMID:21819027
Characterization of identification errors and uses in localization of poor modal correlation
NASA Astrophysics Data System (ADS)
Martin, Guillaume; Balmes, Etienne; Chancelier, Thierry
2017-05-01
While modal identification is a mature subject, very few studies address the characterization of errors associated with the components of a mode shape. This is particularly important in test/analysis correlation procedures, where the Modal Assurance Criterion (MAC) is used to pair modes and to localize the sensors at which discrepancies occur. Poor correlation is usually attributed to modeling errors, but identification errors clearly occur as well. In particular, with 3D Scanning Laser Doppler Vibrometer measurements, many transfer functions are acquired. As a result, individual validation of each measurement cannot be performed manually in a reasonable time frame, and a notable fraction of measurements is expected to be fairly noisy, leading to poor identification of the associated mode shape components. The paper first addresses measurements and introduces multiple criteria. The error measures the difference between test and synthesized transfer functions around each resonance and can be used to localize poorly identified modal components. For intermediate error values, a diagnostic of the origin of the error is needed. The level evaluates the transfer function amplitude in the vicinity of a given mode and can be used to eliminate sensors with low responses. A Noise Over Signal (NOS) indicator, the product of error and level, is then shown to be relevant for detecting poorly excited modes and errors due to modal property shifts between test batches. Finally, a contribution is introduced to evaluate the visibility of a mode in each transfer function. Using tests on a drum brake component, these indicators are shown to provide relevant insight into the quality of measurements. In a second part, test/analysis correlation is addressed with a focus on the localization of sources of poor mode shape correlation. The MACCo algorithm, which sorts sensors by the impact of their removal on a MAC computation, is shown to be particularly relevant. Combined with the error indicator, it avoids keeping erroneous modal components. Applied after removal of poor modal components, it provides spatial maps of poor correlation, which help localize mode shape correlation errors and thus prepare the selection of model changes in updating procedures.
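A minimal sketch of how the error, level, and Noise Over Signal indicators described above could be computed for one transfer function; the function signature and the band mask are hypothetical placeholders, not the paper's implementation:

```python
import numpy as np

def indicators(H_test, H_synth, band):
    """Per-transfer quality indicators near one mode.

    H_test, H_synth: complex FRF samples (measured vs. synthesized).
    band: boolean mask selecting frequencies around the resonance.
    """
    # 'error': relative mismatch between test and synthesized FRFs
    err = (np.linalg.norm(H_test[band] - H_synth[band])
           / np.linalg.norm(H_test[band]))
    # 'level': FRF amplitude in the vicinity of the mode
    level = np.abs(H_test[band]).max()
    # NOS: product of error and level
    return err, level, err * level
```

A high error with low level flags a weakly responding sensor, while a high NOS flags a transfer where the mode is both visible and poorly fitted, which is the case needing manual diagnosis.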
Cheng, Ching-Min; Hwang, Sheue-Ling
2015-03-01
This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Metering error quantification under voltage and current waveform distortion
NASA Astrophysics Data System (ADS)
Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran
2017-09-01
With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion introduces metering errors in smart meters. Because of its negative effects on metering accuracy and fairness, the combined energy metering error is an important subject of study. In this paper, after comparing theoretical metering values with recorded values under different meter modes for linear and nonlinear loads, a method for quantifying the metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a method for quantifying the metering accuracy error is also proposed. From the analysis of the mode error and the accuracy error, a comprehensive error analysis method is presented that is suitable for new energy sources and nonlinear loads. The proposed method has been validated by simulation.
Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D
Haye, R. J. La; Paz-Soldan, C.; Strait, E. J.
2015-01-23
DIII-D experiments show that fully penetrated resonant n = 1 error field locked modes in Ohmic plasmas with safety factor q_95 ≳ 3 grow to a similarly large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n = 2/1) static error fields are shielded in Ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption.
A novel body frame based approach to aerospacecraft attitude tracking.
Ma, Carlos; Chen, Michael Z Q; Lam, James; Cheung, Kie Chung
2017-09-01
In the common practice of designing an attitude tracker for an aerospacecraft, one transforms the Newton-Euler rotation equations to obtain the dynamic equations of some chosen inertial-frame-based attitude metrics, such as Euler angles and unit quaternions. A Lyapunov approach is then used to design a controller which ensures asymptotic convergence of the attitude to the desired orientation. Although this design methodology is fairly standard, it usually involves singularity-prone coordinate transformations which complicate the analysis process and controller design. A new, singularity-free error feedback method is proposed in this paper to provide simple and intuitive stability analysis and controller synthesis. This new body-frame-based method utilizes the concept of the Euler axis and angle to generate the smallest error angles from a body frame perspective, without coordinate transformations. Global tracking convergence is illustrated with the use of a feedback-linearizing PD tracker, a sliding mode controller, and a model reference adaptive controller. Experimental results are also obtained on a quadrotor platform with unknown system parameters and disturbances, using a boundary-layer-approximated sliding mode controller, a PIDD controller, and a unit sliding mode controller. High tracking quality is attained. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Determination of elastomeric foam parameters for simulations of complex loading.
Petre, M T; Erdemir, A; Cavanagh, P R
2006-08-01
Finite element (FE) analysis has shown promise for the evaluation of elastomeric foam personal protection devices. Although appropriate representation of foam materials is necessary in order to obtain realistic simulation results, material definitions used in the literature vary widely and often fail to account for the multi-mode loading experienced by these devices. This study aims to provide a library of elastomeric foam material parameters that can be used in FE simulations of complex loading scenarios. Twelve foam materials used in footwear were tested in uni-axial compression, simple shear and volumetric compression. For each material, parameters for a common compressible hyperelastic material model used in FE analysis were determined using: (a) compression; (b) compression and shear data; and (c) data from all three tests. Material parameters and Drucker stability limits for the best fits are provided with their associated errors. The material model was able to reproduce deformation modes for which data was provided during parameter determination but was unable to predict behavior in other deformation modes. Simulation results were found to be highly dependent on the extent of the test data used to determine the parameters in the material definition. This finding calls into question the many published results of simulations of complex loading that use foam material parameters obtained from a single mode of testing. The library of foam parameters developed here presents associated errors in three deformation modes that should provide for a more informed selection of material parameters.
A global perspective of the limits of prediction skill based on the ECMWF ensemble
NASA Astrophysics Data System (ADS)
Zagar, Nedjeljka
2016-04-01
This talk presents a new model of global forecast error growth, applied to the forecast errors simulated by the ensemble prediction system (ENS) of the ECMWF. The proxy for forecast errors is the total spread of the ECMWF operational ensemble forecasts, obtained by decomposing the wind and geopotential fields into normal-mode functions. In this way, the ensemble spread can be quantified separately for the balanced and inertio-gravity (IG) modes at every forecast range. Ensemble reliability is defined for the balanced and IG modes by comparing the ensemble spread with the control analysis in each scale. The results show that initial uncertainties in the ECMWF ENS are largest in the tropical large-scale modes, and their spatial distribution is similar to the distribution of the short-range forecast errors. Initially the ensemble spread grows most in the smallest scales and in the synoptic range of the IG modes, but the overall growth is dominated by the increase of spread in the balanced modes at synoptic and planetary scales in the midlatitudes. During the forecasts, the distribution of spread in the balanced and IG modes grows towards the climatological spread distribution characteristic of the analyses. The ENS system is found to be somewhat under-dispersive, which is associated with a lack of tropical variability, primarily the Kelvin waves. The new model of forecast error growth has three fitting parameters to parameterize the initial fast growth and the slower exponential error growth later on. The asymptotic values of the forecast errors are independent of the exponential growth rate. It is found that errors due to unbalanced dynamics saturate in around 10 days, while the balanced and total errors saturate in 3 to 4 weeks. Reference: Žagar, N., R. Buizza, and J. Tribbia, 2015: A three-dimensional multivariate modal analysis of atmospheric predictability with application to the ECMWF ensemble. J. Atmos. Sci., 72, 4423-4444.
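The abstract does not give the model's functional form, so the sketch below fits a generic three-parameter saturating curve as a stand-in, using synthetic spread data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic three-parameter error-growth curve: fast initial growth followed
# by a slower exponential approach to the asymptote E_inf. This is an
# illustrative stand-in, not the model from the talk.
def growth(t, E_inf, a, b):
    return E_inf - a * np.exp(-b * t)

t = np.linspace(0.0, 30.0, 61)              # forecast range in days
spread = growth(t, 10.0, 8.0, 0.2)          # synthetic "ensemble spread"
params, _ = curve_fit(growth, t, spread, p0=(5.0, 5.0, 0.1))
```

The key property mirrored here is that the asymptote E_inf is a separate parameter from the growth rate b, so saturation level and growth rate can be estimated independently.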
Magnetic Control of Locked Modes in Present Devices and ITER
NASA Astrophysics Data System (ADS)
Volpe, F. A.; Sabbagh, S.; Sweeney, R.; Hender, T.; Kirk, A.; La Haye, R. J.; Strait, E. J.; Ding, Y. H.; Rao, B.; Fietz, S.; Maraschek, M.; Frassinetti, L.; In, Y.; Jeon, Y.; Sakakihara, S.
2014-10-01
The toroidal phase of non-rotating ("locked") neoclassical tearing modes was controlled in several devices by means of applied magnetic perturbations. Evidence is presented from various tokamaks (ASDEX Upgrade, DIII-D, JET, J-TEXT, KSTAR), spherical tori (MAST, NSTX) and a reversed field pinch (EXTRAP-T2R). Furthermore, the phase of interchange modes was controlled in the LHD helical device. These results share a common interpretation in terms of torques acting on the mode. Based on this interpretation, it is predicted that control-coil currents will be sufficient to control the phase of locking in ITER. This will be possible both with the internal coils and with the external error-field-correction coils, and might have promising consequences for disruption avoidance (by aiding the electron cyclotron current drive stabilization of locked modes), as well as for spatially distributing heat loads during disruptions. This work was supported in part by the US Department of Energy under DE-SC0008520, DE-FC-02-04ER54698 and DE-AC02-09CH11466.
Risk analysis by FMEA as an element of analytical validation.
van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M
2009-12-05
We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs for authenticity to a Failure Mode and Effects Analysis (FMEA), including technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D), and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated as Risk Priority Numbers (RPNs) = O × D × S. Failure modes with the highest RPN scores were subjected to corrective actions, and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
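The RPN ranking reduces to a few lines; the failure modes and scores below are invented examples, not entries from the study:

```python
# Each failure mode is scored 1-10 for occurrence (O), probability of
# remaining undetected (D), and severity (S); RPN = O * D * S ranks them.
failure_modes = [
    # (name, O, D, S) -- invented illustrative entries
    ("wrong sample placement", 4, 6, 7),
    ("spectral library outdated", 2, 8, 9),
    ("instrument drift", 3, 3, 5),
]

def rpn(o, d, s):
    return o * d * s

# Highest-RPN modes first: these receive corrective actions, after which
# the FMEA is repeated and the reduction in RPN measures the improvement.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
```

Because the three scores multiply, a hard-to-detect failure (high D) can outrank a more frequent but easily caught one, which is exactly the prioritization behavior FMEA is designed to provide.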
Sliding mode output feedback control based on tracking error observer with disturbance estimator.
Xiao, Lingfei; Zhu, Yue
2014-07-01
For a class of systems that suffer from disturbances, an original output feedback sliding mode control method is presented, based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing a reaching-law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a novel tracking error observer is constructed. By using the observed tracking error and the estimated disturbance, the sliding mode controller becomes implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable, and the closed-loop system is robustly stable. Simulations of a servomotor positioning system and a five-degree-of-freedom active magnetic bearing system verify the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
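A generic reaching-law sliding mode controller for a double integrator with a bounded disturbance illustrates the mechanism (this is a textbook sketch, not the paper's observer-based output feedback design; all gains are illustrative):

```python
import numpy as np

# Double-integrator tracking-error dynamics x'' = u + d with bounded
# unknown disturbance d. Sliding surface s = x' + lam*x; the control
# enforces the constant-plus-proportional reaching law s' = -k*sign(s) - q*s,
# with k chosen larger than the disturbance bound.
lam, k, q, dt = 2.0, 1.5, 5.0, 1e-3
x, dx = 1.0, 0.0                               # initial error and error rate
for i in range(20000):                         # 20 s of Euler integration
    s = dx + lam * x
    u = -lam * dx - k * np.sign(s) - q * s     # cancels lam*dx, enforces reaching law
    d = 0.5 * np.sin(0.2 * np.pi * i * dt)     # bounded disturbance, |d| <= 0.5 < k
    ddx = u + d
    dx += ddx * dt
    x += dx * dt
```

Once s is driven to (a neighborhood of) zero, the error obeys x' ≈ -lam*x and decays exponentially regardless of the disturbance, which is the robustness property the abstract relies on.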
A data-driven approach for denoising GNSS position time series
NASA Astrophysics Data System (ADS)
Li, Yanyan; Xu, Caijun; Yi, Lei; Fang, Rongxin
2017-12-01
Global navigation satellite system (GNSS) datasets suffer from common mode error (CME) and other unmodeled errors. To decrease the noise level in GNSS positioning, we propose a new data-driven adaptive multiscale denoising method in this paper. Both synthetic and real-world long-term GNSS datasets were employed to assess the performance of the proposed method, and its results were compared with those of stacking filtering, principal component analysis (PCA) and the recently developed multiscale multiway PCA. It is found that the proposed method can significantly eliminate the high-frequency white noise and remove the low-frequency CME. Furthermore, the proposed method is more precise for denoising GNSS signals than the other denoising methods. For example, in the real-world example, our method reduces the mean standard deviation of the north, east and vertical components from 1.54 to 0.26, 1.64 to 0.21 and 4.80 to 0.72 mm, respectively. Noise analysis indicates that for the original signals, a combination of power-law plus white noise model can be identified as the best noise model. For the filtered time series using our method, the generalized Gauss-Markov model is the best noise model with the spectral indices close to - 3, indicating that flicker walk noise can be identified. Moreover, the common mode error in the unfiltered time series is significantly reduced by the proposed method. After filtering with our method, a combination of power-law plus white noise model is the best noise model for the CMEs in the study region.
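For contrast, the classic stacking filter that the paper compares against can be sketched in a few lines on synthetic data (all sizes and noise levels are illustrative):

```python
import numpy as np

# Stacking filter for common mode error (CME): at each epoch, the CME is
# estimated as the mean of the residuals across all stations in the
# network, then subtracted from every station's series.
rng = np.random.default_rng(0)
n_epochs, n_stations = 1000, 20
cme = rng.standard_normal(n_epochs)                   # shared network-wide noise
local = 0.3 * rng.standard_normal((n_epochs, n_stations))
residuals = cme[:, None] + local                      # epochs x stations

cme_est = residuals.mean(axis=1)                      # stack across stations
filtered = residuals - cme_est[:, None]
```

Stacking assumes the CME is spatially uniform; the PCA and multiscale methods discussed in the paper relax that assumption by letting the data determine the spatial pattern of the common mode.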
NASA Technical Reports Server (NTRS)
Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)
2001-01-01
A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.
An FPGA Architecture for Extracting Real-Time Zernike Coefficients from Measured Phase Gradients
NASA Astrophysics Data System (ADS)
Moser, Steven; Lee, Peter; Podoleanu, Adrian
2015-04-01
Zernike modes are commonly used in adaptive optics systems to represent optical wavefronts. However, real-time calculation of Zernike modes is time consuming due to two factors: the large factorial components in the radial polynomials used to define them and the large inverse matrix calculation needed for the linear fit. This paper presents an efficient parallel method for calculating Zernike coefficients from phase gradients produced by a Shack-Hartmann sensor, and its real-time implementation on an FPGA by pre-calculation and storage of subsections of the large inverse matrix. The architecture exploits symmetries within the Zernike modes to achieve a significant reduction in memory requirements and a speed-up of 2.9 when compared to published results utilising a 2D-FFT method for a grid size of 8×8. Analysis of processor element internal word length requirements shows that 24-bit precision in precalculated values of the Zernike mode partial derivatives ensures less than 0.5% error per Zernike coefficient and an overall error of <1%. The design has been synthesized on a Xilinx Spartan-6 XC6SLX45 FPGA. The resource utilisation on this device is <3% of slice registers, <15% of slice LUTs, and approximately 48% of available DSP blocks, independent of the Shack-Hartmann grid size. Block RAM usage is <16% for Shack-Hartmann grid sizes up to 32×32.
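The linear-fit step being accelerated can be sketched as multiplication by a precomputed pseudoinverse; the random gradient matrix below is a stand-in for the true matrix of Zernike partial derivatives, and the sizes are illustrative:

```python
import numpy as np

# Least-squares recovery of Zernike coefficients from Shack-Hartmann
# slope measurements: slopes = G @ coeffs, so coeffs = pinv(G) @ slopes.
# The pseudoinverse is computed once offline, which is what the FPGA
# architecture stores (in subsections) to avoid the runtime inversion.
rng = np.random.default_rng(1)
n_slopes, n_modes = 128, 15            # e.g. 8x8 grid -> 64 spots x 2 slopes
G = rng.standard_normal((n_slopes, n_modes))   # stand-in gradient matrix
G_pinv = np.linalg.pinv(G)             # precomputed once, stored on-chip

true_coeffs = rng.standard_normal(n_modes)
slopes = G @ true_coeffs               # simulated sensor output
coeffs = G_pinv @ slopes               # real-time step: one matrix-vector product
```

Reducing the real-time work to a single matrix-vector product is what makes the parallel FPGA mapping straightforward: each coefficient is an independent dot product.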
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
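A sketch of the binomial code words under the construction the abstract describes (finite Fock superpositions weighted by binomial coefficients); the spacing/degree parameterization follows the published construction but should be checked against the paper:

```python
import numpy as np
from math import comb

# Binomial code words for spacing S and degree N:
#   |W_up/down> = 2^(-N/2) * sum over even/odd p of
#                 sqrt(C(N+1, p)) |p*(S+1)>,   p = 0 .. N+1.
# The Fock-space truncation 'dim' is chosen to hold all terms.
def binomial_word(N, S, parity, dim):
    psi = np.zeros(dim)
    for p in range(N + 2):
        if p % 2 == parity:
            psi[p * (S + 1)] = np.sqrt(comb(N + 1, p))
    return psi / np.sqrt(2.0 ** N)

N, S = 2, 1
dim = (N + 1) * (S + 1) + 1
w_up = binomial_word(N, S, 0, dim)      # even-p superposition
w_down = binomial_word(N, S, 1, dim)    # odd-p superposition
```

Two properties worth checking numerically: the words are exactly orthonormal (disjoint Fock support), and they share the same mean boson number, which is a necessary condition for protecting against boson loss.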
CORRELATED ERRORS IN EARTH POINTING MISSIONS
NASA Technical Reports Server (NTRS)
Bilanow, Steve; Patt, Frederick S.
2005-01-01
Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. 
Some error effects will not be obvious from attitude sensor measurement residuals, so some independent checks using imaging sensors are essential and derived science instrument attitude measurements can prove quite valuable in assessing the attitude accuracy.
Obstetric Neuraxial Drug Administration Errors: A Quantitative and Qualitative Analytical Review.
Patel, Santosh; Loveridge, Robert
2015-12-01
Drug administration errors in obstetric neuraxial anesthesia can have devastating consequences. Although fully recognizing that they represent "only the tip of the iceberg," published case reports/series of these errors were reviewed in detail with the aim of estimating the frequency and the nature of these errors. We identified case reports and case series from MEDLINE and performed a quantitative analysis of the involved drugs, error setting, source of error, the observed complications, and any therapeutic interventions. We subsequently performed a qualitative analysis of the human factors involved and proposed modifications to practice. Twenty-nine cases were identified. Various drugs were given in error, but no direct effects on the course of labor, mode of delivery, or neonatal outcome were reported. Four maternal deaths from the accidental intrathecal administration of tranexamic acid were reported, all occurring after delivery of the fetus. A range of hemodynamic and neurologic signs and symptoms were noted, but the most commonly reported complication was the failure of the intended neuraxial anesthetic technique. Several human factors were present; most common factors were drug storage issues and similar drug appearance. Four practice recommendations were identified as being likely to have prevented the errors. The reported errors exposed latent conditions within health care systems. We suggest that the implementation of the following processes may decrease the risk of these types of drug errors: (1) Careful reading of the label on any drug ampule or syringe before the drug is drawn up or injected; (2) labeling all syringes; (3) checking labels with a second person or a device (such as a barcode reader linked to a computer) before the drug is drawn up or administered; and (4) use of non-Luer lock connectors on all epidural/spinal/combined spinal-epidural devices. 
Further study is required to determine whether routine use of these processes will reduce drug error.
Modeling of the Mode S tracking system in support of aircraft safety research
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1982-01-01
This report collects, documents, and models data relating to the expected accuracies of tracking variables to be obtained from the FAA's Mode S Secondary Surveillance Radar system. The data include measured range and azimuth to the tracked aircraft plus the encoded altitude transmitted via the Mode S data link. A brief summary is made of the Mode S system status and its potential applications for aircraft safety improvement, including accident analysis. FAA flight test results are presented demonstrating Mode S range and azimuth accuracy and error characteristics and comparing Mode S to the current ATCRBS radar tracking system. Data are also presented that describe the expected accuracy and error characteristics of encoded altitude. These data are used to formulate mathematical error models of the Mode S variables and encoded altitude. A brief analytical assessment is made of the real-time tracking accuracy available from Mode S and how it could be improved with down-linked velocity.
Huang, Ai-Chun; Chen, Yu-Yawn; Chuang, Chih-Lin; Chiang, Li-Ming; Lu, Hsueh-Kuan; Lin, Hung-Chi; Chen, Kuen-Tsann; Hsiao, An-Chi; Hsieh, Kuen-Chang
2015-11-01
Bioelectrical impedance analysis (BIA) is commonly used to assess body composition. Cross-mode (left hand to right foot, Z(CR)) BIA presumably uses the longest current path in the human body, which may generate better results when estimating fat-free mass (FFM). We compared the cross-mode with the hand-to-foot mode (right hand to right foot, Z(HF)) using dual-energy x-ray absorptiometry (DXA) as the reference. We hypothesized that when comparing anthropometric parameters using stepwise regression analysis, the impedance value from the cross-mode analysis would have better prediction accuracy than that from the hand-to-foot mode analysis. We studied 264 men and 232 women (mean ages, 32.19 ± 14.95 and 34.51 ± 14.96 years, respectively; mean body mass indexes, 24.54 ± 3.74 and 23.44 ± 4.61 kg/m2, respectively). The DXA-measured FFMs in men and women were 58.85 ± 8.15 and 40.48 ± 5.64 kg, respectively. Multiple stepwise linear regression analyses were performed to construct sex-specific FFM equations. The correlations of FFM measured by DXA vs. FFM from hand-to-foot mode and estimated FFM by cross-mode were 0.85 and 0.86 in women, with standard errors of estimate of 2.96 and 2.92 kg, respectively. In men, they were 0.91 and 0.91, with standard errors of the estimates of 3.34 and 3.48 kg, respectively. Bland-Altman plots showed limits of agreement of -6.78 to 6.78 kg for FFM from hand-to-foot mode and -7.06 to 7.06 kg for estimated FFM by cross-mode for men, and -5.91 to 5.91 and -5.84 to 5.84 kg, respectively, for women. Paired t tests showed no significant differences between the 2 modes (P > .05). Hence, cross-mode BIA appears to represent a reasonable and practical application for assessing FFM in Chinese populations. Copyright © 2015 Elsevier Inc. All rights reserved.
Reason and Condition for Mode Kissing in MASW Method
NASA Astrophysics Data System (ADS)
Gao, Lingli; Xia, Jianghai; Pan, Yudi; Xu, Yixian
2016-05-01
Identifying the correct modes of surface waves and picking accurate phase velocities are critical for obtaining an accurate S-wave velocity in the MASW method. In most cases, inversion is easily conducted by picking the dispersion curves corresponding to different surface-wave modes individually. Neighboring surface-wave modes, however, nearly meet (kiss) at some frequencies for some models. Around these frequencies the modes have very close roots, and the energy peak shifts from one mode to another. At current dispersion-image resolution, it is difficult to distinguish different modes when mode kissing occurs, which is common in near-surface earth models. Mode kissing causes mode misidentification and, as a result, leads to overestimation of S-wave velocity and errors in depth. We define two new mode types based on the characteristics of the vertical eigendisplacements calculated by the generalized reflection and transmission coefficient method. A Rayleigh-wave mode changes its type near the kissing points (osculation points); that is to say, one Rayleigh-wave mode will contain different mode types. This mode-type conversion causes the mode-kissing phenomenon in dispersion images. Numerical tests indicate that the mode-kissing phenomenon is model dependent and that the existence of strong S-wave velocity contrasts increases the possibility of mode kissing. Real-world data show that mode misidentification caused by the mode-kissing phenomenon results in a higher S-wave velocity of bedrock. This reminds us to pay attention to the phenomenon even when some of the underground information is known.
Role of memory errors in quantum repeaters
NASA Astrophysics Data System (ADS)
Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.
2007-03-01
We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. The test uses a unidirectional composite specimen with an artificial delamination, subjected to bending loads, to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be created while following the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.
Spatio-temporal filtering for determination of common mode error in regional GNSS networks
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Gruszczynski, Maciej; Figurski, Mariusz; Klos, Anna
2015-04-01
The spatial correlation between different stations for individual components in regional GNSS networks appears to be significant. Mismodelling of satellite orbits, the Earth orientation parameters (EOP), large-scale atmospheric effects, or satellite antenna phase centre corrections can all cause regionally correlated errors. These GPS time series errors are referred to as common mode errors (CMEs). They are usually estimated with regional spatial filtering, such as "stacking". In this paper, we show the stacking approach for the set of ASG-EUPOS permanent stations, assuming that the spatial distribution of the CME is uniform over the whole region of Poland (more than 600 km in extent). ASG-EUPOS is a multifunctional precise positioning system based on a reference network designed for Poland. We used a 5-year span (2008-2012) of daily-solution time series in the ITRF2008 from Bernese 5.0, processed by the Military University of Technology EPN Local Analysis Centre (MUT LAC). At the beginning of our analyses of spatial dependencies, the correlation coefficients between each pair of stations in the GNSS network were calculated. This analysis shows that the spatio-temporal behaviour of the GPS-derived time series is not purely random; there is an evident uniform spatial response. To quantify the influence of CME filtering, the L1 and L2 norms were determined. The values of these norms were calculated for the North, East and Up components twice: before filtration and after stacking. The observed reduction of the L1 and L2 norms was up to 30%, depending on the dimension of the network. However, the question of how to define an optimal size for the CME-analysed subnetwork remains unanswered in this research, because our network is not spatially extensive enough.
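Under the spatially uniform assumption, the "stacking" filter described above amounts to estimating the common mode at each epoch as the mean residual across stations and subtracting it from every station. A toy sketch with invented station residuals (not ASG-EUPOS data):

```python
def stack_filter(residuals):
    """residuals: dict of station -> equal-length list of daily residuals (mm).

    Returns (cme, filtered), where cme[t] is the epoch-wise mean over all
    stations (the uniform common-mode estimate) and filtered has it removed.
    """
    stations = list(residuals)
    n_epochs = len(residuals[stations[0]])
    cme = [sum(residuals[s][t] for s in stations) / len(stations)
           for t in range(n_epochs)]
    filtered = {s: [residuals[s][t] - cme[t] for t in range(n_epochs)]
                for s in stations}
    return cme, filtered

# Toy network: a shared signal plus small station-specific noise.
res = {"A": [2.0, -1.0, 3.0], "B": [2.2, -0.8, 2.8], "C": [1.8, -1.2, 3.2]}
cme, filt = stack_filter(res)
```

By construction, the filtered residuals average to zero across the network at every epoch; real implementations typically weight stations by their formal uncertainties.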
NASA Technical Reports Server (NTRS)
1975-01-01
The trajectory simulation mode (SIMSEP) requires the namelist SIMSEP to follow TRAJ. SIMSEP contains parameters that describe the scope of the simulation, expected dynamic errors, and cumulative statistics from previous SIMSEP runs. Following SIMSEP is a set of GUID namelists, one for each guidance correction maneuver. GUID describes the strategy, knowledge or estimation uncertainties, and cumulative statistics for that particular maneuver. The trajectory display mode (REFSEP) requires only the namelist TRAJ followed by scheduling cards, similar to those used in GODSEP. The fixed-field schedule cards define the types of data displayed, the span of interest, and the frequency of printout. For those users who can vary the amount of blank common storage in their runs, a guideline to estimate the total MAPSEP core requirements is given. Blank common length is related directly to the dimension of the dynamic state (NDIM) used in state transition matrix (STM) computation and to the total augmented (knowledge) state (NAUG). The values of program and blank common must be added to compute the total decimal core for a CDC 6500. Other operating systems must scale these requirements appropriately.
Minimizing treatment planning errors in proton therapy using failure mode and effects analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com; Johnson, Randall; Larson, Gary
Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA to safety improvement of proton treatment planning at the authors' center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected, and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment-planning-related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery.
The application of the FMEA framework and the implementation of an ongoing error tracking system at the authors' clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.
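The risk priority number used above is conventionally the product of the occurrence, severity, and detectability scores. A sketch of how failure modes could be ranked by RPN; the failure modes and scores below are hypothetical examples, not the authors' data:

```python
def rpn(occurrence, severity, detectability):
    """Risk priority number: the product of the three FMEA scores,
    each conventionally rated on a 1-10 scale."""
    for score in (occurrence, severity, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are conventionally 1-10")
    return occurrence * severity * detectability

# Hypothetical planning failure modes: (occurrence, severity, detectability).
failure_modes = {
    "wrong CT-MR fusion": (3, 8, 4),
    "contour on wrong image set": (2, 9, 3),
    "plan export mismatch": (4, 7, 2),
}
# Rank failure modes from highest to lowest risk.
ranked = sorted(failure_modes.items(),
                key=lambda kv: rpn(*kv[1]), reverse=True)
```

The highest-RPN modes are the natural first targets for mitigations such as the peer review and automatic check tools the abstract mentions.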
NASA Astrophysics Data System (ADS)
Hinton, Courtney; Punjabi, Alkesh; Ali, Halima
2008-11-01
The simple map is the simplest map that has the topology of divertor tokamaks [1]. Recently, action-angle coordinates for the simple map were calculated analytically, and the simple map was constructed in action-angle coordinates [2]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R, Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [2]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to magnetic noise and field errors. Mode numbers for noise plus field errors from the DIII-D tokamak are used. The mode numbers are (m,n) = (3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3) [3]. The common amplitude δ is varied from 0.8×10⁻⁵ to 2.0×10⁻⁵. For this noise and these field errors, the width of the stochastic layer in the simple map is calculated. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793. 1. A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007). 2. O. Kerwin, A. Punjabi, and H. Ali, to appear in Physics of Plasmas. 3. A. Punjabi and H. Ali, P1.012, 35th EPS Conference on Plasma Physics, June 9-13, 2008, Hersonissos, Crete, Greece.
Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump
NASA Astrophysics Data System (ADS)
Gontcharov, G. A.
2017-08-01
Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of the stars, or by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance in kpc: 0.18R² mas. Allowance for this error significantly reduces the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49 ± 0.04 mag, mode(M_Ks) = -1.63 ± 0.03 mag, mode(M_W1) = -1.67 ± 0.05 mag, mode(M_W2) = -1.67 ± 0.05 mag, mode(M_W3) = -1.66 ± 0.02 mag, mode(M_W4) = -1.73 ± 0.03 mag, as well as the corresponding estimates of their de-reddened colors.
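The distance-dependent systematic error quoted above (0.18R² mas, with R the observed distance in kpc) is straightforward to evaluate. The sketch below returns only the error magnitude, since the abstract does not state the sign convention for applying the correction:

```python
def tgas_parallax_error_mas(parallax_mas):
    """Magnitude of the systematic TGAS parallax error per the abstract:
    0.18 * R**2 mas, where R = 1/parallax is the observed distance in kpc
    (parallax in mas). Only the magnitude is returned; the sign of the
    correction is not given in the abstract."""
    if parallax_mas <= 0:
        raise ValueError("parallax must be positive")
    r_kpc = 1.0 / parallax_mas
    return 0.18 * r_kpc ** 2

# At 800 pc (parallax 1.25 mas) the error is about 0.115 mas, consistent
# with the abstract's statement that it reaches ~0.1 mas within 800 pc.
err = tgas_parallax_error_mas(1.25)
```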
Mitigating leakage errors due to cavity modes in a superconducting quantum computer
NASA Astrophysics Data System (ADS)
McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.
2018-07-01
A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.
Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.
1995-01-01
This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows that significant bias error in the model attitude measurement can occur and that it is vibration-mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.
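The final step described above, combining per-mode bias estimates by linear superposition, is an element-wise sum of the individual bias time histories. A toy sketch with invented modal contributions in degrees (the per-mode estimation formula itself is not given in the abstract):

```python
def total_bias(mode_biases):
    """Combine per-mode attitude-bias time histories (equal-length lists,
    degrees) by linear superposition, i.e., an element-wise sum."""
    n_samples = len(mode_biases[0])
    if any(len(mb) != n_samples for mb in mode_biases):
        raise ValueError("all mode histories must have the same length")
    return [sum(mb[t] for mb in mode_biases) for t in range(n_samples)]

# Hypothetical bias contributions from two vibration modes.
mode1 = [0.02, 0.05, 0.03]
mode2 = [0.01, -0.01, 0.02]
bias_history = total_bias([mode1, mode2])
```

The summed history would then be subtracted from the raw attitude measurement at each time sample.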
NASA Technical Reports Server (NTRS)
Hamer, H. A.; Johnson, K. G.
1986-01-01
An analysis was performed to determine the effects of model error on the control of a large flexible space antenna. Control was achieved by employing two three-axis control-moment gyros (CMG's) located on the antenna column. State variables were estimated by including an observer in the control loop that used attitude and attitude-rate sensors on the column. Errors were assumed to exist in the individual model parameters: modal frequency, modal damping, mode slope (control-influence coefficients), and moment of inertia. Their effects on control-system performance were analyzed either for (1) nulling initial disturbances in the rigid-body modes, or (2) nulling initial disturbances in the first three flexible modes. The study includes the effects on stability, time to null, and control requirements (defined as maximum torque and total momentum), as well as on the accuracy of obtaining initial estimates of the disturbances. The effects on the transients of the undisturbed modes are also included. The results, which are compared for decoupled and linear quadratic regulator (LQR) control procedures, are shown in tabular form, parametric plots, and as sample time histories of modal-amplitude and control responses. Results of the analysis showed that the effects of model errors on the control-system performance were generally comparable for both control procedures. The effect of mode-slope error was the most serious of all model errors.
SU-E-T-192: FMEA Severity Scores - Do We Really Know?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonigan, J; Johnson, J; Kry, S
2014-06-01
Purpose: Failure modes and effects analysis (FMEA) is a subjective risk mitigation technique that has not been applied to physics-specific quality management practices. There is a need for quantitative FMEA data, as called for in the literature. This work focuses specifically on quantifying FMEA severity scores for physics components of IMRT delivery and comparing them to subjective scores. Methods: Eleven physical failure modes (FMs) for head and neck IMRT dose calculation and delivery are examined near commonly accepted tolerance criteria levels. Phantom treatment planning studies and dosimetry measurements (requiring decommissioning in several cases) are performed to determine the magnitude of dose delivery errors for the FMs (i.e., the severity of the FM). The resultant quantitative severity scores are compared to FMEA scores obtained through an international survey and focus group studies. Results: Physical measurements for six FMs resulted in significant PTV dose errors of up to 4.3%, as well as a significant distance-to-agreement error between PTV and OAR of close to 1 mm. Of the 129 survey responses, the vast majority of the responders used Varian machines with Pinnacle and Eclipse planning systems. The average experience was 17 years, yet familiarity with FMEA was less than expected. The survey shows that the perceived magnitude of dose delivery errors varies widely; in some cases the expected dose delivery error differed by 50% among respondents. Substantial variance is also seen for all FMs in the occurrence, detectability, and severity scores assigned, with average variance values of 5.5, 4.6, and 2.2, respectively. For the MLC positional FM (2 mm), the survey shows an average expected dose error of 7.6% (range 0-50%), compared to the 2% error seen in measurement. Analysis of the rankings in the survey, treatment planning studies, and a quantitative value comparison will be presented.
Conclusion: The resultant quantitative severity scores will expand the utility of FMEA for radiotherapy and verify the accuracy of FMEA results compared to highly variable subjective scores.
NASA Astrophysics Data System (ADS)
Rivière, G.; Hua, B. L.
2004-10-01
A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on the Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists of initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast, with a velocity vector orientation satisfying a Lagrangian equilibrium criterion. This alignment-based initialization method is hereafter denoted the AI method. In terms of the spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palenstrophy norm, and significantly more efficient than those for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palenstrophy singular vector forecast sometimes leads to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode, based on the Lagrangian dynamics of the perturbation velocity orientation, provides a rationale for the "bred mode" behavior.
NASA Astrophysics Data System (ADS)
Udovydchenkov, Ilya A.
2017-07-01
Modal pulses are broadband contributions to an acoustic wave field with fixed mode number. Stable weakly dispersive modal pulses (SWDMPs) are special modal pulses that are characterized by weak dispersion and weak scattering-induced broadening and are thus suitable for communications applications. This paper investigates, using numerical simulations, receiver array requirements for recovering information carried by SWDMPs under various signal-to-noise ratio conditions without performing channel equalization. Two groups of weakly dispersive modal pulses are common in typical mid-latitude deep ocean environments: the lowest order modes (typically modes 1-3 at 75 Hz), and intermediate order modes whose waveguide invariant is near-zero (often around mode 20 at 75 Hz). Information loss is quantified by the bit error rate (BER) of a recovered binary phase-coded signal. With fixed receiver depths, low BERs (less than 1%) are achieved at ranges up to 400 km with three hydrophones for mode 1 with 90% probability and with 34 hydrophones for mode 20 with 80% probability. With optimal receiver depths, depending on propagation range, only a few, sometimes only two, hydrophones are often sufficient for low BERs, even with intermediate mode numbers. Full modal resolution is unnecessary to achieve low BERs. Thus, a flexible receiver array of autonomous vehicles can outperform a cabled array.
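The bit error rate used above to quantify information loss is simply the fraction of recovered bits that differ from those transmitted. A generic sketch (not the authors' processing chain, which involves demodulating the binary phase-coded acoustic signal first):

```python
def bit_error_rate(sent, received):
    """Fraction of positions where the received bits differ from the sent
    bits. Both arguments are equal-length sequences of 0/1 values."""
    if len(sent) != len(received):
        raise ValueError("bit streams must have equal length")
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

# Toy example: one flipped bit out of ten gives a BER of 10%.
sent     = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
received = [0, 1, 0, 0, 1, 0, 0, 1, 1, 0]
ber = bit_error_rate(sent, received)
```

In the paper's terms, a "low BER" receiver configuration is one for which this fraction stays below 1% over the propagation range of interest.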
Locked-mode avoidance and recovery without momentum input
NASA Astrophysics Data System (ADS)
Delgado-Aparicio, L.; Rice, J. E.; Wolfe, S.; Cziegler, I.; Gao, C.; Granetz, R.; Wukitch, S.; Terry, J.; Greenwald, M.; Sugiyama, L.; Hubbard, A.; Hugges, J.; Marmar, E.; Phillips, P.; Rowan, W.
2015-11-01
Error-field-induced locked modes (LMs) have been studied in Alcator C-Mod at ITER-Bϕ, without NBI fueling and momentum input. Delay of the mode onset and locked-mode recovery have been successfully obtained without external momentum input using Ion Cyclotron Resonance Heating (ICRH). The use of external heating in sync with the error-field ramp-up resulted in a successful delay of the mode onset when PICRH > 1 MW, which demonstrates the existence of a power threshold to "unlock" the mode; in the presence of an error field the L-mode discharge can transition into H-mode only when PICRH > 2 MW and at high densities, also avoiding the density pump-out. The effects of ion heating observed on unlocking the core plasma may be due to ICRH-induced flows in the plasma boundary, or to modifications of plasma profiles that change the underlying turbulence. This work was performed under US DoE contracts including DE-FC02-99ER54512 and others at MIT, DE-FG03-96ER-54373 at the University of Texas at Austin, and DE-AC02-09CH11466 at PPPL.
Error field detection in DIII-D by magnetic steering of locked modes
Shiraki, Daisuke; La Haye, Robert J.; Logan, Nikolas C.; ...
2014-02-20
Optimal correction coil currents for the n = 1 intrinsic error field of the DIII-D tokamak are inferred by applying a rotating external magnetic perturbation to steer the phase of a saturated locked mode with poloidal/toroidal mode number m/n = 2/1. The error field is detected non-disruptively in a single discharge, based on the toroidal torque balance of the resonant surface, which is assumed to be dominated by the balance of resonant electromagnetic torques. This is equivalent to the island being locked at all times to the resonant 2/1 component of the total of the applied and intrinsic error fields, such that the deviation of the locked mode phase from the applied field phase depends on the existing error field. The optimal set of correction coil currents is determined to be those currents which best cancel the torque from the error field, based on fitting of the torque balance model. The toroidal electromagnetic torques are calculated from experimental data using a simplified approach incorporating realistic DIII-D geometry, and including the effect of the plasma response on island torque balance based on the ideal plasma response to external fields. This method of error field detection is demonstrated in DIII-D discharges, and the results are compared with those based on the onset of low-density locked modes in ohmic plasmas. Furthermore, this magnetic steering technique presents an efficient approach to error field detection and is a promising method for ITER, particularly during initial operation when the lack of auxiliary heating systems makes established techniques based on rotation or plasma amplification unsuitable.
Lee, Eun-Gu; Mun, Sil-Gu; Lee, Sang Soo; Lee, Jyung Chan; Lee, Jong Hyun
2015-01-12
We report a cost-effective transmitter optical sub-assembly using a monolithic four-wavelength vertical-cavity surface-emitting laser (VCSEL) array with 100-GHz wavelength spacing for future-proof mobile fronthaul transport using the data rate of common public radio interface option 6. The wavelength spacing is achieved using selectively etched cavity control layers and fine current adjustment. The differences in operating current and output power for maintaining the wavelength spacing of four VCSELs are <1.4 mA and <1 dB, respectively. Stable operation performance without mode hopping is observed, and error-free transmission under direct modulation is demonstrated over a 20-km single-mode fiber without any dispersion-compensation techniques.
Twisted light transmission over 143 km
Krenn, Mario; Handsteiner, Johannes; Fink, Matthias; Fickler, Robert; Ursin, Rupert; Malik, Mehul; Zeilinger, Anton
2016-01-01
Spatial modes of light can potentially carry a vast amount of information, making them promising candidates for both classical and quantum communication. However, the distribution of such modes over large distances remains difficult. Intermodal coupling complicates their use with common fibers, whereas free-space transmission is thought to be strongly influenced by atmospheric turbulence. Here, we show the transmission of orbital angular momentum modes of light over a distance of 143 km between two Canary Islands, which is 50× greater than the maximum distance achieved previously. As a demonstration of the transmission quality, we use superpositions of these modes to encode a short message. At the receiver, an artificial neural network is used for distinguishing between the different twisted light superpositions. The algorithm is able to identify different mode superpositions with an accuracy of more than 80% up to the third mode order and decode the transmitted message with an error rate of 8.33%. Using our data, we estimate that the distribution of orbital angular momentum entanglement over more than 100 km of free space is feasible. Moreover, the quality of our free-space link can be further improved by the use of state-of-the-art adaptive optics systems. PMID:27856744
NASA Astrophysics Data System (ADS)
Yu, Zhicheng; Peng, Kai; Liu, Xiaokang; Pu, Hongji; Chen, Ziran
2018-05-01
High-precision displacement sensors, which can measure large displacements with nanometer resolution, are key components in many ultra-precision fabrication machines. In this paper, a new capacitive nanometer displacement sensor with differential sensing structure is proposed for long-range linear displacement measurements based on an approach denoted time grating. Analytical models established using electric field coupling theory and an area integral method indicate that common-mode interference will result in a first-harmonic error in the measurement results. To reduce the common-mode interference, the proposed sensor design employs a differential sensing structure, which adopts a second group of induction electrodes spatially separated from the first group of induction electrodes by a half-pitch length. Experimental results based on a prototype sensor demonstrate that the measurement accuracy and the stability of the sensor are substantially improved after adopting the differential sensing structure. Finally, a prototype sensor achieves a measurement accuracy of ±200 nm over the full 200 mm measurement range of the sensor.
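The core of the differential scheme described above is that two induction electrode groups separated by half a pitch see the travelling-wave signal with opposite sign but pick up the same common-mode interference, so subtraction cancels the interference. A toy numerical sketch (values illustrative, not from the paper):

```python
import numpy as np

# Two electrode groups a half-pitch (180 degrees) apart see opposite-sign
# displacement signals but identical common-mode pickup.
P = 1.0                                                  # electrode pitch, arbitrary units
x = np.linspace(0.0, 2.0 * P, 1000)
signal = np.sin(2.0 * np.pi * x / P)                     # ideal induced signal
common = 0.2 + 0.1 * np.sin(2.0 * np.pi * x / P + 0.7)   # common-mode interference

s1 = signal + common          # first induction electrode group
s2 = -signal + common         # second group, half-pitch offset
diff = 0.5 * (s1 - s2)        # differential read-out

print(np.max(np.abs(diff - signal)))  # ~0: interference cancelled
```

The single-ended signal s1 carries the first-harmonic error term the paper describes; the differential output removes it exactly in this idealized model.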
Thin Film Differential Photosensor for Reduction of Temperature Effects in Lab-on-Chip Applications.
de Cesare, Giampiero; Carpentiero, Matteo; Nascetti, Augusto; Caputo, Domenico
2016-02-20
This paper presents a thin film structure suitable for low-level radiation measurements in lab-on-chip systems that are subject to thermal treatments of the analyte and/or to large temperature variations. The device is the series connection of two amorphous silicon/amorphous silicon carbide heterojunctions designed to perform differential current measurements. The two diodes experience the same temperature, while only one is exposed to the incident radiation. Under these conditions, temperature and light are the common and differential mode signals, respectively. A proper electrical connection reads the differential current of the two diodes (ideally the photocurrent) as the output signal. The experimental characterization shows the benefits of the differential structure in minimizing the temperature effects with respect to a single diode operation. In particular, when the temperature varies from 23 to 50 °C, the proposed device shows a common mode rejection ratio up to 24 dB and reduces by a factor of three the error in detecting very low-intensity light signals.
Thin Film Differential Photosensor for Reduction of Temperature Effects in Lab-on-Chip Applications
de Cesare, Giampiero; Carpentiero, Matteo; Nascetti, Augusto; Caputo, Domenico
2016-01-01
This paper presents a thin film structure suitable for low-level radiation measurements in lab-on-chip systems that are subject to thermal treatments of the analyte and/or to large temperature variations. The device is the series connection of two amorphous silicon/amorphous silicon carbide heterojunctions designed to perform differential current measurements. The two diodes experience the same temperature, while only one is exposed to the incident radiation. Under these conditions, temperature and light are the common and differential mode signals, respectively. A proper electrical connection reads the differential current of the two diodes (ideally the photocurrent) as the output signal. The experimental characterization shows the benefits of the differential structure in minimizing the temperature effects with respect to a single diode operation. In particular, when the temperature varies from 23 to 50 °C, the proposed device shows a common mode rejection ratio up to 24 dB and reduces by a factor of three the error in detecting very low-intensity light signals. PMID:26907292
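To put the reported 24 dB common-mode rejection ratio in perspective, the conventional definition is CMRR = 20·log10(differential gain / common-mode gain), so 24 dB corresponds to roughly a 16-fold amplitude suppression of the shared temperature signal. A small sketch (the 0.063 gain figure is invented to reproduce 24 dB, not taken from the paper):

```python
import math

def cmrr_db(differential_gain, common_mode_gain):
    """Common-mode rejection ratio in decibels (amplitude-gain convention)."""
    return 20.0 * math.log10(differential_gain / common_mode_gain)

# Illustrative numbers: the light (differential) signal passes with unit gain,
# while a temperature drift common to both diodes leaks through with gain 0.063.
print(cmrr_db(1.0, 0.063))   # about 24 dB, the figure reported above
```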
The importance of matched poloidal spectra to error field correction in DIII-D
Paz-Soldan, Carlos; Lanctot, Matthew J.; Logan, Nikolas C.; ...
2014-07-09
Optimal error field correction (EFC) is thought to be achieved when coupling to the least-stable "dominant" mode of the plasma is nulled at each toroidal mode number (n). The limit of this picture is tested in the DIII-D tokamak by applying superpositions of in- and ex-vessel coil set n = 1 fields calculated to be fully orthogonal to the n = 1 dominant mode. In co-rotating H-mode and low-density Ohmic scenarios the plasma is found to be respectively 7x and 20x less sensitive to the orthogonal field as compared to the in-vessel coil set field. For the scenarios investigated, any geometry of EFC coil can thus recover a strong majority of the detrimental effect introduced by the n = 1 error field. Furthermore, despite low sensitivity to the orthogonal field, its optimization in H-mode is shown to be consistent with minimizing the neoclassical toroidal viscosity torque and not the higher-order n = 1 mode coupling.
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error, and 'pure' modeling error, and comparisons are made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R
2016-01-01
The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
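The mtanh fit mentioned above is a standard pedestal parametrization. As an illustrative sketch only (parameter conventions vary between authors, and the values below are synthetic, not JET data), one common form can be fitted to a noisy profile to recover the pedestal width:

```python
import numpy as np
from scipy.optimize import curve_fit

def mtanh(x, s):
    """Modified hyperbolic tangent: a tanh step with linear slope s on the core side."""
    x = np.clip(x, -40.0, 40.0)  # guard against overflow during fitting
    return ((1.0 + s * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def pedestal(r, h, b, r0, w, s):
    """One common mtanh pedestal parametrization: height h, offset b,
    position r0, width parameter w, core slope s (conventions vary)."""
    return b + 0.5 * (h - b) * (1.0 + mtanh((r0 - r) / (2.0 * w), s))

# Synthetic pre-ELM-like profile with noise; recover the pedestal width.
rng = np.random.default_rng(1)
r = np.linspace(3.6, 3.9, 200)                 # major radius, m (illustrative)
true_params = (1.0, 0.05, 3.82, 0.012, 0.1)    # h, b, r0, w, s
y = pedestal(r, *true_params) + 0.02 * rng.standard_normal(r.size)
popt, _ = curve_fit(pedestal, r, y, p0=(1.0, 0.0, 3.8, 0.02, 0.0))
print("fitted width:", popt[3])                # close to the true 0.012
```

ELM synchronisation, in this picture, simply supplies more (r, y) points from the same phase of the ELM cycle before the single fit is performed.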
NASA Astrophysics Data System (ADS)
Jiang, Weiping; Ma, Jun; Li, Zhao; Zhou, Xiaohui; Zhou, Boye
2018-05-01
The analysis of the correlations between the noise in different components of GPS stations has positive significance to those trying to obtain more accurate uncertainty of velocity with respect to station motion. Previous research into noise in GPS position time series focused mainly on single component evaluation, which affects the acquisition of precise station positions, the velocity field, and its uncertainty. In this study, before and after removing the common-mode error (CME), we performed one-dimensional linear regression analysis of the noise amplitude vectors in different components of 126 GPS stations with a combination of white noise, flicker noise, and random walk noise in Southern California. The results show that, on the one hand, there are above-moderate degrees of correlation between the white noise amplitude vectors in all components of the stations before and after removal of the CME, while the correlations between flicker noise amplitude vectors in horizontal and vertical components are enhanced from un-correlated to moderately correlated by removing the CME. On the other hand, the significance tests show that, all of the obtained linear regression equations, which represent a unique function of the noise amplitude in any two components, are of practical value after removing the CME. According to the noise amplitude estimates in two components and the linear regression equations, more accurate noise amplitudes can be acquired in the two components.
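The simplest CME filter, assumed in several of the abstracts above, treats the common-mode error as spatially uniform: each day's network-average residual is the CME estimate, and subtracting it reduces the scatter of every station's series. A toy sketch with invented noise levels:

```python
import numpy as np

# Toy network: each station's daily position residual is station-specific
# white noise plus a network-wide common-mode error (CME).
rng = np.random.default_rng(2)
days, stations = 1000, 30
cme = 2.0 * rng.standard_normal(days)                # mm, shared by all stations
local = 1.0 * rng.standard_normal((days, stations))  # mm, station-specific
series = local + cme[:, None]

# Uniform-CME regional filter: stack (average) all stations each day
# and subtract the stack from every series.
stack = series.mean(axis=1)
filtered = series - stack[:, None]

print(series.std(), filtered.std())  # scatter drops once the CME is removed
```

The PCA/KLE approach described in the first abstract generalizes this by letting the data determine a spatially varying response instead of the uniform weight used here.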
The Effect of Image Apodization on Global Mode Parameters and Rotational Inversions
NASA Astrophysics Data System (ADS)
Larson, Tim; Schou, Jesper
2016-10-01
It has long been known that certain systematic errors in the global mode analysis of data from both MDI and HMI depend on how the input images were apodized. Recently it has come to light, while investigating a six-month periodicity in f-mode frequencies, that mode coverage is highest when B0 is maximal. Recalling that the leakage matrix is calculated in the approximation that B0=0, it comes as a surprise that more modes are fitted when the leakage matrix is most incorrect. It is now believed that the six-month oscillation is primarily related to what portion of the solar surface is visible. Other systematic errors that depend on the part of the disk used include high-latitude anomalies in the rotation rate and a prominent feature in the normalized residuals of odd a-coefficients. Although the most likely cause of all these errors is errors in the leakage matrix, extensive recalculation of the leaks has not made any difference. Thus we conjecture that another effect may be at play, such as errors in the noise model or one that has to do with the alignment of the apodization with the spherical harmonics. In this poster we explore how differently shaped apodizations affect the results of inversions for internal rotation, for both maximal and minimal absolute values of B0.
Sarter, Nadine
2008-06-01
The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.
Analysis of ecstasy tablets: comparison of reflectance and transmittance near infrared spectroscopy.
Schneider, Ralph Carsten; Kovar, Karl-Artur
2003-07-08
Calibration models for the quantitation of commonly used ecstasy substances have been developed using near infrared spectroscopy (NIR) in diffuse reflectance and in transmission mode by applying seized ecstasy tablets for model building and validation. The samples contained amphetamine, N-methyl-3,4-methylenedioxy-amphetamine (MDMA) and N-ethyl-3,4-methylenedioxy-amphetamine (MDE) in different concentrations. All tablets were analyzed using high performance liquid chromatography (HPLC) with diode array detection as reference method. We evaluated the performance of each NIR measurement method with regard to its ability to predict the content of each tablet with a low root mean square error of prediction (RMSEP). Best calibration models could be generated by using NIR measurement in transmittance mode with wavelength selection and 1/x-transformation of the raw data. The models built in reflectance mode showed higher RMSEPs; as data pretreatment, wavelength selection, 1/x-transformation, and a second-order Savitzky-Golay derivative with five-point smoothing were applied to obtain the best models. To estimate the influence of inhomogeneities in the illegal tablets, a calibration of the destroyed, i.e. triturated, samples was built and compared to the corresponding data of the whole tablets. The calibrations using these homogenized tablets showed lower RMSEPs. We can conclude that NIR analysis of ecstasy tablets in transmission mode is more suitable than measurement in diffuse reflectance to obtain quantification models for their active ingredients with regard to low errors of prediction. Inhomogeneities in the samples are equalized when measuring the tablets as powdered samples.
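The comparison metric above, RMSEP, is simply the root mean square deviation of the NIR predictions from the HPLC reference values. A minimal sketch (the contents below are invented, not the paper's data):

```python
import numpy as np

def rmsep(y_pred, y_ref):
    """Root mean square error of prediction against the reference values."""
    y_pred, y_ref = np.asarray(y_pred, float), np.asarray(y_ref, float)
    return float(np.sqrt(np.mean((y_pred - y_ref) ** 2)))

# Illustrative numbers: transmittance-mode predictions track the HPLC
# reference contents more closely than reflectance mode, as reported above.
ref           = [78.0, 65.0, 90.0, 72.0]   # mg per tablet (HPLC reference)
transmittance = [79.1, 64.2, 91.0, 71.5]
reflectance   = [81.5, 61.8, 94.2, 69.0]
print(rmsep(transmittance, ref), rmsep(reflectance, ref))
```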
NASA Astrophysics Data System (ADS)
Volpe, F. A.; Frassinetti, L.; Brunsell, P. R.; Drake, J. R.; Olofsson, K. E. J.
2013-04-01
A new non-disruptive error field (EF) assessment technique not restricted to low density and thus low beta was demonstrated at the EXTRAP-T2R reversed field pinch. Stable and marginally stable external kink modes of toroidal mode number n = 10 and n = 8, respectively, were generated, and their rotation sustained, by means of rotating magnetic perturbations of the same n. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the kink modes were observed to rotate non-uniformly and be modulated in amplitude. This behaviour was used to precisely infer the amplitude and approximately estimate the toroidal phase of the EF. A subsequent scan permitted optimization of the toroidal phase. The technique was tested against deliberately applied as well as intrinsic EFs of n = 8 and 10. Corrections equal and opposite to the estimated error fields were applied. The efficacy of the error compensation was indicated by the increased discharge duration and more uniform mode rotation in response to a uniformly rotating perturbation. The results are in good agreement with theory, and the extension to lower n, to tearing modes and to tokamaks, including ITER, is discussed.
Submicron multi-bunch BPM for CLIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmickler, H.; Soby, L.; /CERN
2010-08-01
A common-mode free cavity BPM is currently under development at Fermilab within the ILC-CLIC collaboration. This monitor will be operated in a CLIC Main Linac multi-bunch regime, and needs to provide both high spatial and temporal resolution. We present the design concept, numerical analysis, investigation of tolerances and error effects, as well as simulations of the signal response applying a multi-bunch stimulus. The proposed CERN linear collider (CLIC) requires a very precise measurement of the beam trajectory to preserve the low emittance when transporting the beam through the Main Linac. An energy chirp within the bunch train will be applied to measure and minimize the dispersion effects, which require high-resolution (in both time and space) beam position monitors (BPM) along the beam-line. We propose a low-Q waveguide-loaded TM₁₁₀ dipole mode cavity as BPM, which is complemented by a TM₀₁₀ monopole mode resonator of the same resonant frequency for reference signal purposes. The design is based on the well-known TM₁₁₀ selective mode coupling idea.
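The role of the monopole reference resonator can be sketched in a few lines: the dipole-mode voltage scales with both beam offset and bunch charge, while the monopole signal scales with charge only, so their ratio yields a charge-independent position. The calibration constants below are invented for illustration, not the CLIC design values:

```python
# Toy cavity-BPM read-out (illustrative scale factors, not the CLIC design).
K = 5.0e-3  # dipole sensitivity, V per (nC * um), assumed calibration constant

def dipole_signal(offset_um, charge_nc):
    """TM110 voltage: linear in beam offset and in bunch charge."""
    return K * offset_um * charge_nc

def monopole_signal(charge_nc):
    """TM010 reference voltage: linear in bunch charge only (2 V/nC assumed)."""
    return 2.0 * charge_nc

def position(v_dipole, v_monopole):
    """Charge-independent position estimate from the signal ratio."""
    return v_dipole / v_monopole * (2.0 / K)

for q in (0.1, 0.6):  # the position estimate is the same at either charge
    print(position(dipole_signal(3.0, q), monopole_signal(q)))
```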
Electrocardiograms with pacemakers: accuracy of computer reading.
Guglin, Maya E; Datwani, Neeta
2007-04-01
We analyzed the accuracy with which a computer algorithm reads electrocardiograms (ECGs) with electronic pacemakers (PMs). Electrocardiograms were screened for the presence of electronic pacing spikes. Computer-derived interpretations were compared with cardiologists' readings. The computer interpretations required revision by cardiologists in 61.3% of cases. In 18.4% of cases, the ECG reading algorithm failed to recognize the presence of a PM. The misinterpretation of paced beats as intrinsic beats led to multiple secondary errors, including myocardial infarctions in varying localization. The most common error in computer reading was the failure to identify an underlying rhythm. This error caused frequent misidentification of the PM type, especially when the presence of normal sinus rhythm was not recognized in a tracing with a DDD PM tracking the atrial activity. The increasing number of pacing devices, and the resulting number of ECGs with pacing spikes, mandates the refining of ECG reading algorithms. Improvement is especially needed in the recognition of the underlying rhythm, pacing spikes, and mode of pacing.
Gravitational wave spectroscopy of binary neutron star merger remnants with mode stacking
NASA Astrophysics Data System (ADS)
Yang, Huan; Paschalidis, Vasileios; Yagi, Kent; Lehner, Luis; Pretorius, Frans; Yunes, Nicolás
2018-01-01
A binary neutron star coalescence event has recently been observed for the first time in gravitational waves, and many more detections are expected once current ground-based detectors begin operating at design sensitivity. As in the case of binary black holes, gravitational waves generated by binary neutron stars consist of inspiral, merger, and postmerger components. Detecting the latter is important because it encodes information about the nuclear equation of state in a regime that cannot be probed prior to merger. The postmerger signal, however, can only be expected to be measurable by current detectors for events closer than roughly ten megaparsecs, which given merger rate estimates implies a low probability of observation within the expected lifetime of these detectors. We carry out Monte Carlo simulations showing that the dominant postmerger signal (the ℓ=m =2 mode) from individual binary neutron star mergers may not have a good chance of observation even with the most sensitive future ground-based gravitational wave detectors proposed so far (the Einstein Telescope and Cosmic Explorer, for certain equations of state, assuming a full year of operation, the latest merger rates, and a detection threshold corresponding to a signal-to-noise ratio of 5). For this reason, we propose two methods that stack the postmerger signal from multiple binary neutron star observations to boost the postmerger detection probability. The first method follows a commonly used practice of multiplying the Bayes factors of individual events. The second method relies on an assumption that the mode phase can be determined from the inspiral waveform, so that coherent mode stacking of the data from different events becomes possible. We find that both methods significantly improve the chances of detecting the dominant postmerger signal, making a detection very likely after a year of observation with Cosmic Explorer for certain equations of state. 
We also show that in terms of detection, coherent stacking is more efficient in accumulating confidence for the presence of postmerger oscillations in a signal than the first method. Moreover, assuming the postmerger signal is detected with Cosmic Explorer via stacking, we estimate through a Fisher analysis that the peak frequency can be measured to a statistical error of ~4-20 Hz for certain equations of state. Such an error corresponds to a neutron star radius measurement to within ~15-56 m, a fractional relative error of ~4%, suggesting that systematic errors from theoretical modeling (≳100 m) may dominate the error budget.
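The advantage of coherent stacking comes from averaging phase-aligned copies of the same oscillation: the signal adds linearly while the noise averages down, so the amplitude signal-to-noise ratio grows roughly as the square root of the number of events. A toy numerical sketch (waveform, frequency, and noise level all invented):

```python
import numpy as np

# Toy coherent stacking: N noisy, phase-aligned copies of the same damped
# postmerger-like oscillation are averaged; SNR grows roughly as sqrt(N).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 0.05, 2000)                             # 50 ms window
mode = 0.1 * np.exp(-t / 0.02) * np.sin(2 * np.pi * 2500.0 * t)  # ~2.5 kHz mode

def stacked_snr(n_events):
    """Peak-amplitude SNR of the average of n_events noisy copies of 'mode'."""
    stack = np.mean([mode + rng.standard_normal(t.size)
                     for _ in range(n_events)], axis=0)
    residual_noise = np.std(stack - mode)
    return np.max(np.abs(mode)) / residual_noise

print(stacked_snr(1), stacked_snr(25))  # roughly a 5x improvement
```

In practice the hard part, which the paper addresses, is obtaining the phase alignment from the inspiral waveform so that the stack really is coherent.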
Polarization-insensitive PAM-4-carrying free-space orbital angular momentum (OAM) communications.
Liu, Jun; Wang, Jian
2016-02-22
We present a simple configuration incorporating a single polarization-sensitive phase-only liquid crystal spatial light modulator (SLM) to facilitate polarization-insensitive free-space optical communications employing orbital angular momentum (OAM) modes. We experimentally demonstrate several polarization-insensitive optical communication subsystems by propagating a single OAM mode, multicasting 4 and 10 OAM modes, and multiplexing 8 OAM modes, respectively. Free-space polarization-insensitive optical communication links using OAM modes that carry four-level pulse-amplitude modulation (PAM-4) signals are demonstrated in the experiment. The observed optical signal-to-noise ratio (OSNR) penalties are less than 1 dB in both polarization-insensitive N-fold OAM mode multicasting and multiple OAM mode multiplexing at a bit-error rate (BER) of 2e-3 (enhanced forward-error correction (EFEC) threshold).
Failure mode and effects analysis: an empirical comparison of failure mode scoring procedures.
Ashley, Laura; Armitage, Gerry
2010-12-01
To empirically compare 2 different commonly used failure mode and effects analysis (FMEA) scoring procedures with respect to their resultant failure mode scores and prioritization: a mathematical procedure, where scores are assigned independently by FMEA team members and averaged, and a consensus procedure, where scores are agreed on by the FMEA team via discussion. A multidisciplinary team undertook a Healthcare FMEA of chemotherapy administration. This included mapping the chemotherapy process, identifying and scoring failure modes (potential errors) for each process step, and generating remedial strategies to counteract them. Failure modes were scored using both an independent mathematical procedure and a team consensus procedure. Almost three-fifths of the 30 failure modes generated were scored differently by the 2 procedures, and for just more than one-third of cases, the score discrepancy was substantial. Using the Healthcare FMEA prioritization cutoff score, almost twice as many failure modes were prioritized by the consensus procedure than by the mathematical procedure. This is the first study to empirically demonstrate that different FMEA scoring procedures can score and prioritize failure modes differently. It found considerable variability in individual team members' opinions on scores, which highlights the subjective and qualitative nature of failure mode scoring. A consensus scoring procedure may be most appropriate for FMEA as it allows variability in individuals' scores and rationales to become apparent and to be discussed and resolved by the team. It may also yield team learning and communication benefits unlikely to result from a mathematical procedure.
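The two scoring procedures compared above can diverge even for a single failure mode: averaging individual scores can pull a failure mode below the prioritization cutoff that a consensus discussion would keep above it. A toy sketch using the HFMEA convention (hazard score = severity x probability, cutoff 8; all individual scores invented):

```python
# Toy comparison of the two FMEA scoring procedures (scores invented).
CUTOFF = 8  # HFMEA prioritization cutoff on the hazard score

def mathematical(severities, probabilities):
    """Members score independently; the hazard score is the product of the
    averaged severity and averaged probability."""
    s = sum(severities) / len(severities)
    p = sum(probabilities) / len(probabilities)
    return s * p

# Three team members score one hypothetical failure mode independently:
individual_s = [4, 2, 3]
individual_p = [2, 2, 3]
math_score = mathematical(individual_s, individual_p)

# In discussion, the team agrees the worst-case scores are the realistic ones:
consensus_score = 4 * 3

print(math_score, consensus_score)
print(math_score >= CUTOFF, consensus_score >= CUTOFF)  # prioritized or not?
```

Here the mathematical procedure yields 7 (below the cutoff) while consensus yields 12 (prioritized), mirroring the kind of discrepancy the study reports.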
Follow on Researches for X-56A Aircraft at NASA Dryden Flight Research Center (Progress Report)
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2012-01-01
Many composite materials are used in modern aircraft to reduce weight. Aircraft aeroservoelastic models are typically characterized by significant levels of model parameter uncertainty due to the composite manufacturing process. Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the X-56A aircraft is the flight demonstration of active flutter suppression, and therefore in this study, the identification of the primary and secondary modes is based on the flutter analysis of the X-56A aircraft. It should be noted that for all three Mach number cases the rigid body modes and mode numbers seven and nine contribute 89.1-92.4% of the first flutter mode. Modal participation of the rigid body modes and mode numbers seven and nine in the second flutter mode is 94.6-96.4%. The rigid body modes and the first two anti-symmetric modes, the eighth and tenth modes, contribute 93.2-94.6% of the third flutter mode. Therefore, the rigid body modes and the first four flexible modes of the X-56A aircraft are the primary modes during the model tuning procedure. This study obtains a ground-vibration-test-validated structural dynamic finite element model of the X-56A aircraft. The structural dynamics finite element model of the X-56A aircraft is improved using the parallelized big-bang big-crunch algorithm together with a hybrid optimization technique.
NASA Astrophysics Data System (ADS)
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
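The VRS method itself estimates the source signature; the sketch below shows only the underlying interferometric cross-correlation step and why multimode records create spurious cross-mode contributions. The model is a toy (two non-dispersive modes, invented velocities and geometry), not the paper's dispersive surface-wave setting:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 2.0, dt)

def ricker(tau, f0=30.0):
    """Ricker wavelet centred at tau = 0 (f0 in Hz)."""
    a = (np.pi * f0 * tau) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def trace(dist, velocities):
    """Record at distance dist: one arrival per (non-dispersive) mode."""
    return sum(ricker(t - dist / v) for v in velocities)

def xcorr_peak_lag(a, b):
    """Lag (s) of the cross-correlation maximum of b against a."""
    c = np.correlate(b, a, mode="full")
    return (int(np.argmax(c)) - (len(a) - 1)) * dt

xA, xB = 100.0, 160.0        # receiver offsets from the source, m (invented)
v1, v2 = 300.0, 150.0        # fundamental / higher-mode velocities, m/s

# Mode-separated correlation: peak at the inter-receiver traveltime dx/v1
print(xcorr_peak_lag(trace(xA, [v1]), trace(xB, [v1])))   # 0.2 s
# Correlating the full multimode records: same-mode and cross-mode peaks
# now compete, so the virtual-source estimate becomes unreliable
print(xcorr_peak_lag(trace(xA, [v1, v2]), trace(xB, [v1, v2])))
```

Separating the modes before correlating, as the paper proposes, removes the cross-mode terms and restores a clean virtual-source response per mode.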
[Failure modes and effects analysis in the prescription, validation and dispensing process].
Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T
2012-01-01
To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages included in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes which could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try to stop them from developing. The Hazard Score was calculated and those failure modes scoring ≥ 8 were chosen; failure modes with a Severity Index of 4 were selected independently of the Hazard Score value. Corrective measures and an implementation plan were proposed. A flow diagram that describes the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventative measure and the strategy to achieve it. Failure modes chosen: prescription on the nurse's form; progress or treatment order (paper); prescription to incorrect patient; transcription error by nursing staff and pharmacist; error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur and the causes. It has allowed us to analyse the effects on the safety of the process, and establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier Espana. All rights reserved.
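The selection rule described above (keep failure modes with Hazard Score ≥ 8, plus any with the maximum Severity Index of 4 regardless of score) is easy to state as code. The failure-mode names echo the abstract, but all severity and probability scores below are invented for illustration:

```python
# Sketch of the selection rule: hazard score = severity * probability;
# keep modes with hazard >= 8 OR severity == 4. Scores are invented.
failure_modes = [
    ("prescription on the nurse's form",  {"severity": 3, "probability": 3}),
    ("prescription to incorrect patient", {"severity": 4, "probability": 1}),
    ("transcription error",               {"severity": 3, "probability": 2}),
    ("error preparing the trolley",       {"severity": 2, "probability": 4}),
]

def selected(scores):
    hazard = scores["severity"] * scores["probability"]
    return hazard >= 8 or scores["severity"] == 4

chosen = [name for name, scores in failure_modes if selected(scores)]
print(chosen)
```

Note how the second mode is kept despite a hazard score of only 4, because its severity is maximal.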
Smith, D N
1992-01-01
Multiple applied current impedance measurement systems require numbers of current sources which operate simultaneously at the same frequency and phase but with variable amplitudes. Investigations into the performance of some integrated operational transconductance amplifiers as variable current sources are described. Measurements of breakthrough, non-linearity and common-mode output levels for the LM13600, NE5517 and CA3280 were carried out. The effects of such errors on the overall performance and stability of multiple current systems when driving floating loads are considered.
NASA Astrophysics Data System (ADS)
Milione, Giovanni; Lavery, Martin P. J.; Huang, Hao; Ren, Yongxiong; Xie, Guodong; Nguyen, Thien An; Karimi, Ebrahim; Marrucci, Lorenzo; Nolan, Daniel A.; Alfano, Robert R.; Willner, Alan E.
2015-05-01
Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging. In this work, vector modes are used to increase the information capacity of free space optical communication via the method of optical communication referred to as mode division multiplexing. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a q-plate is introduced. As a proof of principle, using the mode (de)multiplexer, four vector modes each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel (~1550 nm), comprising an aggregate 80 Gbit/s, were transmitted ~1 m over the lab table with <-16.4 dB (<2%) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties < 3.41 dB.
Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D
NASA Astrophysics Data System (ADS)
La Haye, R. J.; Paz-Soldan, C.; Strait, E. J.
2015-02-01
DIII-D experiments show that fully penetrated resonant n = 1 error field locked modes in ohmic plasmas with safety factor q95 ≳ 3 grow to similar large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n = 2/1) static error fields are shielded in ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption. Error field correction (EFC) is performed on DIII-D (in ITER relevant shape and safety factor q95 ≳ 3) with either the n = 1 C-coil (no handedness) or the n = 1 I-coil (with ‘dominantly’ resonant field pitch). Despite EFC, which allows significantly lower plasma density (a ‘figure of merit’) before penetration occurs, the resulting saturated islands have similar large size; they differ only in the phase of the locked mode after typically being pulled (by up to 30° toroidally) in the electron diamagnetic drift direction as they grow to saturation. Island amplification and phase shift are explained by a second change-of-state in which the classical tearing index changes from stable to marginal by the presence of the island, which changes the current density profile. The eventual island size is thus governed by the inherent stability and saturation mechanism rather than the driving error field.
Concept of a Fast and Simple Atmospheric Radiative Transfer Model for Aerosol Retrieval
NASA Astrophysics Data System (ADS)
Seidel, Felix; Kokhanovsky, Alexander A.
2010-05-01
Radiative transfer modelling (RTM) is an indispensable tool for a number of applications, including astrophysics, climate studies and quantitative remote sensing. It simulates the attenuation of light through a translucent medium. Here, we look at the scattering and absorption of solar light on its way to the Earth's surface and back to space or back into a remote sensing instrument. RTM is regularly used in the framework of the so-called atmospheric correction to find properties of the surface. Further, RTM can be inverted to retrieve features of the atmosphere, such as the aerosol optical depth (AOD). Present-day RTMs, such as 6S, MODTRAN, SHARM, RT3, SCIATRAN or RTMOM, have errors of only a few percent; however, they are rather slow and often not easy to use. We present here a concept for a fast and simple RTM in the visible spectral range. It uses a blend of different existing RTM approaches with a special emphasis on fast approximate analytical equations and parametrizations. This concept may be helpful for efficient retrieval algorithms that do not have to rely on the classic look-up-table (LUT) approach. For example, it can be used to retrieve AOD without complex inversion procedures involving multiple iterations. Naturally, there is always a trade-off between speed and modelling accuracy. The code can therefore be run in two different modes. The regular mode provides a reasonable ratio between speed and accuracy, while the optional mode is very fast but less accurate. The regular mode approximates the diffuse scattered light by calculating the first (single scattering) and second orders of scattering according to the classical method of successive orders of scattering. The very fast mode calculates only the single-scattering approximation, which does not need any slow numerical integration procedure, and uses a simple correction factor to account for multiple scattering.
This factor is a parametrization of MODTRAN results, which provide a typical ratio between single and multiple scattered light. A comparison of the presented RTM concept with the widely accepted 6S RTM reveals errors of up to 10% in the regular mode, which is acceptable for certain applications. The very fast mode may lead to errors of up to 30%, but it is still able to reproduce the results of 6S qualitatively. An experimental implementation of this RTM concept is written in the common IDL language. It is therefore very flexible and straightforward to implement in custom retrieval algorithms of the remote sensing community. The code might also be used to add an atmosphere on top of an existing vegetation-canopy or water RTM. Owing to the ease of use of the RTM code and the comprehensibility of the internal equations, the concept might also be useful for educational purposes. The very fast mode could be of interest for real-time applications, such as an in-flight instrument performance check for airborne optical sensors. In the future, the concept can be extended to account for scattering according to Mie theory, polarization and gaseous absorption. It is expected that this would reduce the model error to 5% or less.
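The very fast mode described above can be sketched as the standard single-scattering reflectance of a homogeneous layer multiplied by a multiple-scattering correction; the correction value below is an illustrative stand-in, not the actual MODTRAN-derived parametrization, and the function names are assumptions.

```python
import math

def single_scatter_reflectance(tau, omega0, phase, mu, mu0):
    """Standard single-scattering reflectance of a homogeneous layer.
    tau: optical depth; omega0: single-scattering albedo;
    phase: phase function at the scattering angle;
    mu, mu0: cosines of the view and solar zenith angles."""
    return (omega0 * phase / (4.0 * (mu + mu0))) * (
        1.0 - math.exp(-tau * (1.0 / mu + 1.0 / mu0)))

def fast_reflectance(tau, omega0, phase, mu, mu0, ms_factor=1.3):
    # ms_factor stands in for the MODTRAN-derived ratio of total to
    # single-scattered light; the value 1.3 is purely illustrative.
    return ms_factor * single_scatter_reflectance(tau, omega0, phase, mu, mu0)

print(round(fast_reflectance(0.2, 0.9, 1.0, 0.8, 0.6), 4))
```

In the optically thin limit the expression reduces to omega0 * phase * tau / (4 * mu * mu0), which is why no numerical integration is needed.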
Performance of GPS-devices for environmental exposure assessment.
Beekhuizen, Johan; Kromhout, Hans; Huss, Anke; Vermeulen, Roel
2013-01-01
Integration of individual time-location patterns with spatially resolved exposure maps enables more accurate estimation of personal exposure to environmental pollutants than using estimates at fixed locations. Current global positioning system (GPS) devices can be used to track an individual's location. However, information on GPS performance in environmental exposure assessment is largely missing. We therefore performed two studies: first, a commute study, in which the commutes of 12 individuals were tracked twice, testing GPS performance for five transport modes and two wearing modes; second, an urban tracking study, in which one individual was tracked repeatedly through different areas, focusing on the effect of building obstruction on GPS performance. The median error from the true path was 3.7 m for walking, 2.9 m for biking, 4.8 m for train, 4.9 m for bus, and 3.3 m for car. Errors were larger in a high-rise commercial area (median error = 7.1 m) than in a low-rise residential area (median error = 2.2 m). Thus, GPS performance depends largely on the transport mode and the surrounding built environment. Although ~85% of all errors were <10 m, almost 1% of the errors were >50 m. Modern GPS devices are useful tools for environmental exposure assessment, but large GPS errors might affect estimates of exposures with high spatial variability.
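Summary statistics of this kind reduce to simple computations on per-fix positional errors; the sample values here are invented to show the calculation, not data from the two studies.

```python
import statistics

# Illustrative per-fix positional errors (metres) for one transport mode;
# the values are made up for this sketch.
errors = [1.8, 2.4, 2.9, 3.1, 3.7, 4.0, 4.4, 5.2, 9.8, 61.0]

median_error = statistics.median(errors)
share_within_10m = sum(e < 10 for e in errors) / len(errors)
share_over_50m = sum(e > 50 for e in errors) / len(errors)
print(median_error, share_within_10m, share_over_50m)
```

The median is robust to the occasional very large error, which is why a low median can coexist with a nontrivial share of errors above 50 m.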
Total energy based flight control system
NASA Technical Reports Server (NTRS)
Lambregts, Antonius A. (Inventor)
1985-01-01
An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode the outer-loop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
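A minimal sketch of the total-energy idea, under the usual small-angle assumptions: thrust corrects the total specific energy rate, while the elevator redistributes energy between flight path and acceleration. Signal names and gains below are illustrative, not the patent's.

```python
G = 9.81  # gravitational acceleration, m/s^2

def tecs_errors(gamma_cmd, gamma, accel_cmd, accel):
    """Normalized total-energy relations (small-angle form).
    gamma: flight path angle (rad); accel: longitudinal acceleration (m/s^2).
    Specific energy rate is proportional to gamma + accel/g; the
    distribution term is their difference."""
    energy_rate_err = (gamma_cmd + accel_cmd / G) - (gamma + accel / G)
    distribution_err = (gamma_cmd - accel_cmd / G) - (gamma - accel / G)
    return energy_rate_err, distribution_err

def tecs_commands(energy_rate_err, distribution_err,
                  k_thrust=1.0, k_elevator=1.0):
    # Thrust corrects the total energy rate; the elevator trades energy
    # between climb and acceleration (gains purely illustrative).
    return k_thrust * energy_rate_err, k_elevator * distribution_err

# A pure flight-path error loads both channels with the same sign;
# a pure speed (acceleration) error loads them with opposite signs.
print(tecs_errors(0.02, 0.0, 0.0, 0.0))
print(tecs_errors(0.0, 0.0, 0.981, 0.0))
```

This sign structure is what decouples flight path from speed: a combined climb-and-decelerate request can leave the total energy rate unchanged and be handled by the elevator alone.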
Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.
Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D
2017-06-01
The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.
Wahba, Maram A; Ashour, Amira S; Napoleon, Sameh A; Abd Elnaby, Mustafa M; Guo, Yanhui
2017-12-01
Basal cell carcinoma is one of the most common malignant skin lesions. Automated lesion identification and classification using image processing techniques is highly desirable to reduce diagnostic errors. In this study, a novel technique is applied to classify skin lesion images into two classes, the malignant basal cell carcinoma and the benign nevus. A hybrid combination of bi-dimensional empirical mode decomposition and gray-level difference method features is proposed after hair removal. The combined features are then classified using a quadratic support vector machine (Q-SVM). The proposed system achieved outstanding performance of 100% accuracy, sensitivity and specificity, compared with other support vector machine procedures as well as with different extracted features. Basal cell carcinoma is effectively classified using the Q-SVM with the proposed combined features.
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.
Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L
2017-07-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
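The dissociation above relies on a skewed distribution whose mean and mode differ. A lognormal makes both analytic, as in this sketch; the parameter values are illustrative, not the experiment's shift distribution.

```python
import math
import random

# For a lognormal, mean = exp(mu + sigma^2/2) and mode = exp(mu - sigma^2)
# separate analytically. A squared-error loss is minimized by aiming at the
# mean of the imposed shifts; a hit/miss reinforcement loss favours the mode.
mu, sigma = 0.0, 0.75          # illustrative skew parameters

mean_shift = math.exp(mu + sigma ** 2 / 2)   # error-based optimum
mode_shift = math.exp(mu - sigma ** 2)       # reinforcement-based optimum

random.seed(1)
shifts = [random.lognormvariate(mu, sigma) for _ in range(100000)]
empirical_mean = sum(shifts) / len(shifts)
print(round(mean_shift, 3), round(mode_shift, 3), round(empirical_mean, 3))
```

Because the two optima differ, where a participant aims reveals which loss function the sensorimotor system is using.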
Estimating alarm thresholds and the number of components in mixture distributions
NASA Astrophysics Data System (ADS)
Burr, Tom; Hamada, Michael S.
2012-09-01
Mixtures of probability distributions arise in many nuclear assay and forensic applications, including nuclear weapon detection, neutron multiplicity counting, and solution monitoring (SM) for nuclear safeguards. SM data are increasingly used to enhance nuclear safeguards in aqueous reprocessing facilities having plutonium in solution form in many tanks. This paper provides background on mixture probability distributions and then focuses on mixtures arising in SM data. SM data can be analyzed by evaluating transfer-mode residuals, defined as tank-to-tank transfer differences, and wait-mode residuals, defined as changes during non-transfer modes. A previous paper investigated the impacts on transfer-mode and wait-mode residuals of event marking errors, which arise when the estimated start and/or stop times of tank events such as transfers differ somewhat from the true start and/or stop times. Event marking errors contribute to non-Gaussian behavior and larger variation than predicted on the basis of individual tank calibration studies. This paper illustrates evidence for mixture probability distributions arising from such event marking errors and from effects such as condensation or evaporation during non-transfer modes and pump carryover during transfer modes. A quantitative assessment of the sample size required to adequately characterize a mixture probability distribution arising in any context is included.
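Mixtures like those in the residual data are commonly fit by expectation-maximization; below is a minimal two-component 1-D sketch on synthetic data standing in for wait-mode and transfer-mode residuals. All values are illustrative, not SM data.

```python
import math
import random

def em_two_gaussians(xs, iters=100):
    """Minimal EM for a two-component 1-D Gaussian mixture.
    Returns (weights, means, standard deviations)."""
    w, mu, sd = [0.5, 0.5], [min(xs), max(xs)], [1.0, 1.0]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and spreads
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            sd[k] = max(1e-6, math.sqrt(
                sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk))
    return w, mu, sd

random.seed(0)
xs = ([random.gauss(0.0, 1.0) for _ in range(500)]     # e.g. wait-mode residuals
      + [random.gauss(8.0, 1.0) for _ in range(500)])  # e.g. transfer-mode residuals
weights, means, sds = em_two_gaussians(xs)
print(sorted(round(m, 2) for m in means))
```

Choosing the number of components (the question the paper's sample-size assessment addresses) is typically done by refitting with different component counts and comparing a penalized likelihood such as BIC.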
Orbit determination of highly elliptical Earth orbiters using improved Doppler data-processing modes
NASA Technical Reports Server (NTRS)
Estefan, J. A.
1995-01-01
A navigation error covariance analysis of four highly elliptical Earth orbits is described, with apogee heights ranging from 20,000 to 76,800 km and perigee heights ranging from 1,000 to 5,000 km. This analysis differs from earlier studies in that improved navigation data-processing modes were used to reduce the radio metric data. For this study, X-band (8.4-GHz) Doppler data were assumed to be acquired from two Deep Space Network radio antennas and reconstructed orbit errors propagated over a single day. Doppler measurements were formulated as total-count phase measurements and compared to the traditional formulation of differenced-count frequency measurements. In addition, an enhanced data-filtering strategy was used, which treated the principal ground system calibration errors affecting the data as filter parameters. Results suggest that a 40- to 60-percent accuracy improvement may be achievable over traditional data-processing modes in reconstructed orbit errors, with a substantial reduction in reconstructed velocity errors at perigee. Historically, this has been a regime in which stringent navigation requirements have been difficult to meet by conventional methods.
Clinical decision support alert malfunctions: analysis and empirically derived taxonomy.
Wright, Adam; Ai, Angela; Ash, Joan; Wiesen, Jane F; Hickman, Thu-Trang T; Aaron, Skye; McEvoy, Dustin; Borkowsky, Shane; Dissanayake, Pavithra I; Embi, Peter; Galanter, William; Harper, Jeremy; Kassakian, Steve Z; Ramoni, Rachel; Schreiber, Richard; Sirajuddin, Anwar; Bates, David W; Sittig, Dean F
2018-05-01
To develop an empirically derived taxonomy of clinical decision support (CDS) alert malfunctions. We identified CDS alert malfunctions using a mix of qualitative and quantitative methods: (1) site visits with interviews of chief medical informatics officers, CDS developers, clinical leaders, and CDS end users; (2) surveys of chief medical informatics officers; (3) analysis of CDS firing rates; and (4) analysis of CDS overrides. We used a multi-round, manual, iterative card sort to develop a multi-axial, empirically derived taxonomy of CDS malfunctions. We analyzed 68 CDS alert malfunction cases from 14 sites across the United States with diverse electronic health record systems. Four primary axes emerged: the cause of the malfunction, its mode of discovery, when it began, and how it affected rule firing. Build errors, conceptualization errors, and the introduction of new concepts or terms were the most frequent causes. User reports were the predominant mode of discovery. Many malfunctions within our database caused rules to fire for patients for whom they should not have (false positives), but the reverse (false negatives) was also common. Across organizations and electronic health record systems, similar malfunction patterns recurred. Challenges included updates to code sets and values, software issues at the time of system upgrades, difficulties with migration of CDS content between computing environments, and the challenge of correctly conceptualizing and building CDS. CDS alert malfunctions are frequent. The empirically derived taxonomy formalizes the common recurring issues that cause these malfunctions, helping CDS developers anticipate and prevent CDS malfunctions before they occur or detect and resolve them expediently.
Autoimmunity: a decision theory model.
Morris, J A
1987-01-01
Concepts from statistical decision theory were used to analyse the detection problem faced by the body's immune system in mounting immune responses to bacteria of the normal body flora. Given that these bacteria are potentially harmful, that there can be extensive cross reaction between bacterial antigens and host tissues, and that the decisions are made in uncertainty, there is a finite chance of error in immune response leading to autoimmune disease. A model of ageing in the immune system is proposed that is based on random decay in components of the decision process, leading to a steep age dependent increase in the probability of error. The age incidence of those autoimmune diseases which peak in early and middle life can be explained as the resultant of two processes: an exponentially falling curve of incidence of first contact with common bacteria, and a rapidly rising error function. Epidemiological data on the variation of incidence with social class, sibship order, climate and culture can be used to predict the likely site of carriage and mode of spread of the causative bacteria. Furthermore, those autoimmune diseases precipitated by common viral respiratory tract infections might represent reactions to nasopharyngeal bacterial overgrowth, and this theory can be tested using monoclonal antibodies to search the bacterial isolates for cross reacting antigens. If this model is correct then prevention of autoimmune disease by early exposure to low doses of bacteria might be possible. PMID:3818985
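The two-process age-incidence argument can be made concrete with a toy model: an exponentially falling first-contact rate multiplied by a steeply rising decision-error probability yields a peak in mid-life. All parameter values are illustrative, not fitted to epidemiological data.

```python
import math

# Toy version of the two-process model: incidence at age a is proportional
# to the (exponentially falling) rate of first contact with a common
# bacterium times the (steeply rising) probability of an immune decision
# error. Parameters are illustrative only.
def incidence(age, contact_rate=0.08, error_power=4, scale=70.0):
    first_contact = math.exp(-contact_rate * age)
    decision_error = (age / scale) ** error_power
    return first_contact * decision_error

peak_age = max(range(0, 91), key=incidence)
print(peak_age)  # prints 50: a mid-life peak, as the model predicts
```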
Burkill, Sarah; Copas, Andrew; Couper, Mick P.; Clifton, Soazig; Prah, Philip; Datta, Jessica; Conrad, Frederick; Wellings, Kaye; Johnson, Anne M.; Erens, Bob
2016-01-01
Background: Interviewer-administered surveys are an important method of collecting population-level epidemiological data, but suffer from declining response rates and increasing costs. Web surveys offer more rapid data collection and lower costs. There are concerns, however, about data quality from web surveys. Previous research has largely focused on selection biases, and few studies have explored measurement differences. This paper aims to assess the extent to which mode affects the responses given by the same respondents at two points in time, providing information on potential measurement error if web surveys are used in the future. Methods: 527 participants from the third British National Survey of Sexual Attitudes and Lifestyles (Natsal-3), which uses computer assisted personal interview (CAPI) and self-interview (CASI) modes, subsequently responded to identically-worded questions in a web survey. McNemar tests assessed whether within-person differences in responses were at random or indicated a mode effect, i.e. higher reporting of more sensitive responses in one mode. An analysis of pooled responses by generalized estimating equations addressed the impact of gender and question type on change. Results: Only 10% of responses changed between surveys. However, mode effects were found for about a third of variables, with higher reporting of sensitive responses more commonly found on the web compared with Natsal-3. Conclusions: The web appears a promising mode for surveys of sensitive behaviours, most likely as part of a mixed-mode design. Our findings suggest that mode effects may vary by question type and content, and by the particular mix of modes used. Mixed-mode surveys need careful development to understand mode effects and how to account for them. PMID:26866687
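The McNemar test used here looks only at the discordant pairs, the respondents who answered differently in the two modes; a minimal sketch with the continuity-corrected statistic follows. The counts are made up for illustration, not from Natsal-3.

```python
import math

def mcnemar_chi2(b, c):
    """Continuity-corrected McNemar statistic for paired binary responses.
    b, c: counts of the two kinds of discordant pairs (changed yes->no
    and changed no->yes between modes)."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

def chi2_sf_1df(x):
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2)).
    return math.erfc(math.sqrt(x / 2.0))

# Illustrative counts: 30 respondents gave the more sensitive answer only on
# the web, 12 only in the interview. These numbers are invented.
b, c = 30, 12
stat = mcnemar_chi2(b, c)
print(round(stat, 2), chi2_sf_1df(stat))
```

Under the null hypothesis of no mode effect, changes are equally likely in either direction, so b and c should be similar; a small p-value indicates systematically higher reporting in one mode.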
NASA Technical Reports Server (NTRS)
Colombo, Oscar L. (Editor)
1992-01-01
This symposium on space and airborne techniques for measuring gravity fields, and related theory, contains papers on gravity modeling of Mars and Venus at NASA/GSFC, an integrated laser Doppler method for measuring planetary gravity fields, observed temporal variations in the earth's gravity field from 16-year Starlette orbit analysis, high-resolution gravity models combining terrestrial and satellite data, the effect of water vapor corrections for satellite altimeter measurements of the geoid, and laboratory demonstrations of superconducting gravity and inertial sensors for space and airborne gravity measurements. Other papers are on airborne gravity measurements over the Kelvin Seamount; the accuracy of GPS-derived acceleration from moving platform tests; airborne gravimetry, altimetry, and GPS navigation errors; controlling common mode stabilization errors in airborne gravity gradiometry, GPS/INS gravity measurements in space and on a balloon, and Walsh-Fourier series expansion of the earth's gravitational potential.
What the UV SED Tells us About Stellar Populations and Galaxies
NASA Technical Reports Server (NTRS)
Heap, Sara R.
2011-01-01
The UV SED parameter β, as in f_λ ∝ λ^β, is commonly used to estimate fundamental properties of high-redshift galaxies, including age and metallicity. However, sources and processes other than age and metallicity can influence the value of β. We use the local star-forming dwarf galaxy I Zw 18 in a case study to investigate the uncertainties in age and metallicity inferred from β due to errors or uncertainties in: the mode of star formation (instantaneous starburst vs. continuous star formation), dust extinction, nebular continuum emission (two-photon emission, Balmer continuum flux), and the presence of older stars.
Design Consideration and Performance of Networked Narrowband Waveforms for Tactical Communications
2010-09-01
The performance of the four proposed CPM modes is evaluated, with perfect acquisition parameters, for both coherent and noncoherent detection using an iterative receiver. [Extraction residue; recoverable captions: Figure 1, "Bit error rate performance of various CPM modes with coherent and noncoherent detection" (coherent results shown as crosses, noncoherent as diamonds); Figure 3 shows the corresponding relationship; Table 2 summarises the parameters.]
Quantitative transmission Raman spectroscopy of pharmaceutical tablets and capsules.
Johansson, Jonas; Sparén, Anders; Svensson, Olof; Folestad, Staffan; Claybourn, Mike
2007-11-01
Quantitative analysis of pharmaceutical formulations using the new approach of transmission Raman spectroscopy has been investigated. For comparison, measurements were also made in conventional backscatter mode. The experimental setup consisted of a Raman probe-based spectrometer with 785 nm excitation for measurements in backscatter mode. In transmission mode the same system was used to detect the Raman scattered light, while an external diode laser of the same type was used as the excitation source. Quantitative partial least squares models were developed for both measurement modes. The results for tablets show that the prediction error for an independent test set was lower for the transmission measurements, with a relative root mean square error of about 2.2%, compared with 2.9% for the backscatter mode. Furthermore, the models were simpler in the transmission case, for which only a single partial least squares (PLS) component was required to explain the variation. The main reason for the improvement in the transmission mode is a more representative sampling of the tablets compared with the backscatter mode. Capsules containing mixtures of pharmaceutical powders were also assessed, by transmission only. The quantitative results for the capsules' contents were good, with a prediction error of 3.6% w/w for an independent test set. The advantage of transmission Raman over backscatter Raman spectroscopy has been demonstrated for quantitative analysis of pharmaceutical formulations, and the prospects for reliable, lean calibrations for pharmaceutical analysis are discussed.
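Partial least squares calibrations of the kind used here can be sketched with a minimal single-response NIPALS implementation; the synthetic two-factor data below stand in for spectra and concentrations and are not from the study.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS) for a single response; X and y must be centred.
    Returns the regression coefficient vector b such that y ~ X @ b."""
    Xk, yk = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)              # weight vector
        t = Xk @ w                          # scores
        tt = float(t @ t)
        p = Xk.T @ t / tt                   # X loadings
        q = float(yk @ t) / tt              # y loading
        Xk = Xk - np.outer(t, p)            # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(Q))

# Synthetic "spectra" driven by two latent factors (illustrative stand-ins
# for the Raman measurements; not data from the study).
rng = np.random.default_rng(0)
T = rng.normal(size=(60, 2))
X = T @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(60, 8))
y = T @ np.array([1.5, -1.0]) + 0.05 * rng.normal(size=60)
X -= X.mean(axis=0); y -= y.mean()
b = pls1_fit(X, y, n_components=2)
rmse = float(np.sqrt(np.mean((X @ b - y) ** 2)))
print(round(rmse, 3))
```

The number of components plays the role discussed in the abstract: a single component sufficing in transmission mode indicates that one latent direction captures the concentration-related variance.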
2012-01-01
RECS relies on actual records from energy suppliers to produce robust survey estimates of household energy consumption and expenditures. During the RECS Energy Supplier Survey (ESS), energy billing records are collected from the companies that supply electricity, natural gas, fuel oil/kerosene, and propane (LPG) to the interviewed households. As Federal agencies expand the use of administrative records to enhance, replace, or evaluate survey data, EIA has explored more flexible, reliable and efficient techniques to collect energy billing records. The ESS has historically been a mail-administered survey, but EIA introduced web data collection with the 2009 RECS ESS. In that survey, energy suppliers self-selected their reporting mode among several options: standardized paper form, on-line fillable form or spreadsheet, or failing all else, a nonstandard format of their choosing. In this paper, EIA describes where reporting mode appears to influence the data quality. We detail the reporting modes, the embedded and post-hoc quality control and consistency checks that were performed, the extent of detectable errors, and the methods used for correcting data errors. We explore by mode the levels of unit and item nonresponse, number of errors, and corrections made to the data. In summary, we find notable differences in data quality between modes and analyze where the benefits of offering these new modes outweigh the "costs".
Control by model error estimation
NASA Technical Reports Server (NTRS)
Likins, P. W.; Skelton, R. E.
1976-01-01
Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).
Method and system for reducing errors in vehicle weighing systems
Hively, Lee M.; Abercrombie, Robert K.
2010-08-24
A method and system (10, 23) for determining vehicle weight to a precision of <0.1% uses a plurality of weight sensing elements (23) and a computer (10) for reading in weighing data for a vehicle (25), producing a dataset representing the total weight of the vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing; (b) determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% of the vehicle weight.
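The core of steps (a)-(d), identifying an oscillatory mode in the weighing data and subtracting it, can be sketched as follows; the record is synthetic, with the oscillation placed on an FFT bin for simplicity, and all values are illustrative, not the patented method.

```python
import numpy as np

def remove_dominant_oscillation(w, dt):
    """Fit (least squares) and subtract the dominant oscillatory mode,
    with its frequency taken from the FFT peak of the weighing signal."""
    n = len(w)
    resid = w - w.mean()
    spectrum = np.abs(np.fft.rfft(resid))
    k = 1 + int(np.argmax(spectrum[1:]))              # skip the DC bin
    f = np.fft.rfftfreq(n, dt)[k]
    t = np.arange(n) * dt
    A = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    return w - A @ coef

# Synthetic weighing record: true weight plus one vehicle oscillation mode
# and sensor noise (all values illustrative).
rng = np.random.default_rng(2)
dt, true_weight = 0.01, 20000.0                       # seconds, kg
t = np.arange(1000) * dt
w = (true_weight + 400.0 * np.sin(2 * np.pi * 2.5 * t + 0.3)
     + 5.0 * rng.normal(size=t.size))
cleaned = remove_dominant_oscillation(w, dt)
print(round(float(np.std(w)), 1), round(float(np.std(cleaned)), 1))
```

Repeating the removal for each remaining mode, as in step (d), drives the residual scatter down until the weight estimate meets the precision target.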
NASA Astrophysics Data System (ADS)
Lott, J. A.; Shchukin, V. A.; Ledentsov, N. N.; Stinz, A.; Hopfer, F.; Mutig, A.; Fiol, G.; Bimberg, D.; Blokhin, S. A.; Karachinsky, L. Y.; Novikov, I. I.; Maximov, M. V.; Zakharov, N. D.; Werner, P.
2009-02-01
We report on the modeling, epitaxial growth, fabrication, and characterization of 830-845 nm vertical cavity surface emitting lasers (VCSELs) that employ InAs-GaAs quantum dot (QD) gain elements. The GaAs-based VCSELs are essentially conventional in design, grown by solid-source molecular beam epitaxy, and include top and bottom graded-heterointerface AlGaAs distributed Bragg reflectors, a single selectively-oxidized AlAs waveguiding/current funneling aperture layer, and a quasi-antiwaveguiding microcavity. The active region consists of three sheets of InAs-GaAs submonolayer insertions separated by AlGaAs matrix layers. Compared to QWs the InAs-GaAs insertions are expected to offer higher exciton-dominated modal gain and improved carrier capture and retention, thus resulting in superior temperature stability and resilience to degradation caused by operating at the larger switching currents commonly employed to increase the data rates of modern optical communication systems. We investigate the robustness and temperature performance of our QD VCSEL design by fabricating prototype devices in a high-frequency ground-source-ground contact pad configuration suitable for on-wafer probing. Arrays of VCSELs are produced with precise variations in top mesa diameter from 24 to 36 μm and oxide aperture diameter from 1 to 12 μm resulting in VCSELs that operate in full single-mode, single-mode to multi-mode, and full multi-mode regimes. The single-mode QD VCSELs have room temperature threshold currents below 0.5 mA and peak output powers near 1 mW, whereas the corresponding values for full multi-mode devices range from about 0.5 to 1.5 mA and 2.5 to 5 mW. At 20°C we observe optical transmission at 20 Gb/s through 150 m of OM3 fiber with a bit error ratio better than 10^-12, thus demonstrating the great potential of our QD VCSELs for applications in next-generation short-distance optical data communications and interconnect systems.
Physics and Control of Locked Modes in the DIII-D Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volpe, Francesco
This Final Technical Report summarizes an investigation, carried out under the auspices of the DOE Early Career Award, of the physics and control of non-rotating magnetic islands (“locked modes”) in tokamak plasmas. Locked modes are one of the main causes of disruptions in present tokamaks, and could be an even bigger concern in ITER, due to its relatively high beta (favoring the formation of Neoclassical Tearing Mode islands) and low rotation (favoring locking). For these reasons, this research had the goal of studying and learning how to control locked modes in the DIII-D National Fusion Facility under ITER-relevant conditions of high pressure and low rotation. Major results included: the first full suppression of locked modes and avoidance of the associated disruptions; the demonstration of error field detection from the interaction between locked modes, applied rotating fields and intrinsic errors; and the analysis of a vast database of disruptive locked modes, which led to criteria for disruption prediction and avoidance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, M. P.; Centre for Quantum Technologies, National University of Singapore; QuTech, Delft University of Technology, Lorentzweg 1, 2611 CJ Delft
2016-02-15
Instances of discrete quantum systems coupled to a continuum of oscillators are ubiquitous in physics. Often the continua are approximated by a discrete set of modes. We derive error bounds on expectation values of system observables that have been time evolved under such discretised Hamiltonians. These bounds take on the form of a function of time and the number of discrete modes, where the discrete modes are chosen according to Gauss quadrature rules. The derivation makes use of tools from the field of Lieb-Robinson bounds and the theory of orthonormal polynomials.
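The mode-discretisation idea summarized above lends itself to a short sketch. Below, a continuum with an assumed Ohmic spectral density is replaced by a finite set of modes at Gauss-Legendre nodes; the paper's bounds cover Gauss quadrature rules generally, and the density, cutoff, and function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def discretize_continuum(J, omega_max, N):
    """Replace a continuum with spectral density J(w) on [0, omega_max]
    by N discrete modes at Gauss-Legendre nodes.

    Returns mode frequencies w_k and couplings g_k such that
    sum_k g_k**2 * f(w_k) approximates the integral of J(w) * f(w)."""
    x, w = np.polynomial.legendre.leggauss(N)   # nodes/weights on [-1, 1]
    omegas = 0.5 * omega_max * (x + 1.0)        # map nodes to [0, omega_max]
    weights = 0.5 * omega_max * w               # rescale weights accordingly
    couplings = np.sqrt(weights * J(omegas))    # g_k^2 = weight_k * J(omega_k)
    return omegas, couplings

# Example: Ohmic spectral density J(w) = w * exp(-w), cutoff at 50
J = lambda w: w * np.exp(-w)
omegas, g = discretize_continuum(J, omega_max=50.0, N=64)

# The discrete sum reproduces the continuum integral of J(w) itself:
print(np.sum(g**2))   # ≈ ∫_0^50 w e^-w dw ≈ 1.0
```

Expectation values of system observables computed with the discretised bath then approach the continuum values as N grows, which is the regime the paper's error bounds quantify.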
Feedback stabilization system for pulsed single longitudinal mode tunable lasers
Esherick, Peter; Raymond, Thomas D.
1991-10-01
A feedback stabilization system for pulsed single longitudinal mode tunable lasers having an excited laser medium contained within an adjustable-length cavity and producing a laser beam through the use of an internal dispersive element, including detection of angular deviation in the output laser beam resulting from detuning between the cavity mode frequency and the passband of the internal dispersive element, and generation of an error signal based thereon. The error signal can be integrated and amplified and then applied as a correcting signal to a piezoelectric transducer mounted on a mirror of the laser cavity for controlling the cavity length.
Effects of dynamic aeroelasticity on handling qualities and pilot rating
NASA Technical Reports Server (NTRS)
Swaim, R. L.; Yen, W.-Y.
1978-01-01
Pilot performance parameters, such as pilot ratings, tracking errors, and pilot comments, were recorded and analyzed for a longitudinal pitch tracking task on a large, flexible aircraft. The tracking task was programmed on a fixed-base simulator with a CRT attitude director display of pitch angle command, pitch angle, and pitch angle error. Parametric variations in the undamped natural frequencies of the two lowest frequency symmetric elastic modes were made to induce varying degrees of rigid body and elastic mode interaction. The results indicate that such mode interaction can drastically affect the handling qualities and pilot ratings of the task.
Channel estimation in few mode fiber mode division multiplexing transmission system
NASA Astrophysics Data System (ADS)
Hei, Yongqiang; Li, Li; Li, Wentao; Li, Xiaohui; Shi, Guangming
2018-03-01
It is abundantly clear that obtaining the channel state information (CSI) is of great importance for equalization and detection in coherent receivers. However, to the best of the authors' knowledge, most of the existing literature assumes that CSI is perfectly known at the receiver, and few studies discuss the effects on MDM system performance of imperfect CSI caused by channel estimation. Motivated by this, channel estimation in few mode fiber (FMF) mode division multiplexing (MDM) systems is investigated in this paper, in which two classical channel estimation methods, the least square (LS) method and the minimum mean square error (MMSE) method, are discussed under the assumption of spatially white noise lumped at the receiver side of the MDM system. Both the capacity and the BER performance of the MDM system affected by mode-dependent gain or loss (MDL) with different channel estimation errors are studied. Simulation results show that channel estimation can further deteriorate the capacity and BER performance of the MDM system, and that a 1e-3 variance of channel estimation error is acceptable in an MDM system with 0-6 dB MDL values.
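The two classical estimators compared above have compact closed forms. The following sketch contrasts them on a random MDM-like matrix channel; the mode count, training length, and i.i.d. channel/noise statistics are assumptions for illustration, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T, sigma2 = 4, 16, 0.1      # modes, training length, noise variance (assumed)

# Random unit-variance channel, training block, and receiver noise
H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
X = (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
N = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = H @ X + N                   # received training block

# Least-squares estimate: minimizes ||Y - H X||_F^2
H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

# Linear MMSE estimate for i.i.d. unit-variance channel entries:
# regularizes the LS solution by the noise variance
H_mmse = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T + sigma2 * np.eye(M))

mse = lambda A: float(np.mean(np.abs(A - H) ** 2))
print(mse(H_ls), mse(H_mmse))   # MMSE shrinks toward the prior mean
```

The residual estimation error of either method plays the role of the imperfect-CSI variance whose impact on capacity and BER the paper quantifies.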
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that real frequencies occur for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature, and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
Creating and evaluating a data-driven curriculum for central venous catheter placement.
Duncan, James R; Henderson, Katherine; Street, Mandie; Richmond, Amy; Klingensmith, Mary; Beta, Elio; Vannucci, Andrea; Murray, David
2010-09-01
Central venous catheter placement is a common procedure with a high incidence of error. Other fields requiring high reliability have used Failure Mode and Effects Analysis (FMEA) to prioritize quality and safety improvement efforts. To use FMEA in the development of a formal, standardized curriculum for central venous catheter training. We surveyed interns regarding their prior experience with central venous catheter placement. A multidisciplinary team used FMEA to identify high-priority failure modes and to develop online and hands-on training modules to decrease the frequency, diminish the severity, and improve the early detection of these failure modes. We required new interns to complete the modules and tracked their progress using multiple assessments. Survey results showed new interns had little prior experience with central venous catheter placement. Using FMEA, we created a curriculum that focused on planning and execution skills and identified 3 priority topics: (1) retained guidewires, which led to training on handling catheters and guidewires; (2) improved needle access, which prompted the development of an ultrasound training module; and (3) catheter-associated bloodstream infections, which were addressed through training on maximum sterile barriers. Each module included assessments that measured progress toward recognition and avoidance of common failure modes. Since introducing this curriculum, the number of retained guidewires has fallen more than 4-fold. Rates of catheter-associated infections have not yet declined, and it will take time before ultrasound training will have a measurable effect. The FMEA provided a process for curriculum development. Precise definitions of failure modes for retained guidewires facilitated development of a curriculum that contributed to a dramatic decrease in the frequency of this complication. 
Although infections and access complications have not yet declined, failure mode identification, curriculum development, and monitored implementation show substantial promise for improving patient safety during placement of central venous catheters.
Adams, Elizabeth J.; Jordan, Thomas J.; Clark, Catharine H.; Nisbet, Andrew
2013-01-01
Quality assurance (QA) for intensity‐ and volumetric‐modulated radiotherapy (IMRT and VMAT) has evolved substantially. In recent years, various commercial 2D and 3D ionization chamber or diode detector arrays have become available, allowing absolute verification with near-real-time results and streamlining QA. However, detector arrays are limited by their resolution, giving rise to concerns about their sensitivity to errors. Understanding the limitations of these devices is therefore critical. In this study, the sensitivity and resolution of the PTW 2D‐ARRAY seven29 and OCTAVIUS II phantom combination were comprehensively characterized for use in dynamic sliding window IMRT and RapidArc verification. Measurement comparisons were made between single-acquisition and multiple merged-acquisition techniques to improve the effective resolution of the 2D‐ARRAY, as well as against GAFCHROMIC EBT2 film and electronic portal imaging dosimetry (EPID). The sensitivity and resolution of the 2D‐ARRAY were tested using two modulated test fields delivered at a gantry angle of 0°. Deliberate multileaf collimator (MLC) errors of 1, 2, and 5 mm and collimator rotation errors were inserted into IMRT and RapidArc plans for pelvis and head & neck sites, to test sensitivity to errors. The radiobiological impact of these errors was assessed to determine the gamma index passing criteria to be used with the 2D‐ARRAY to detect clinically relevant errors. For gamma index distributions, the 2D‐ARRAY in single-acquisition mode was found to be comparable to multiple-acquisition modes, as well as to film and EPID. It was found that the commonly used gamma index criteria of 3% dose difference or 3 mm distance to agreement may potentially mask clinically relevant errors. Gamma index criteria of 3%/2 mm with a passing threshold of 98%, or 2%/2 mm with a passing threshold of 95%, were found to be more sensitive.
We suggest that the gamma index passing thresholds may be used for guidance, but also should be combined with a visual inspection of the gamma index distribution and calculation of the dose difference to assess whether there may be a clinical impact in failed regions. PMID:24257288
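The gamma index used throughout the study combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1D sketch, assuming a global dose normalization and toy Gaussian profiles rather than the study's measured fields:

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dd_pct=3.0, dta_mm=3.0):
    """Global 1D gamma index: dd_pct is the dose-difference criterion in
    percent of the reference maximum, dta_mm the DTA criterion in mm.
    A point passes when gamma <= 1."""
    dd = dd_pct / 100.0 * d_ref.max()
    gamma = np.empty_like(d_eval)
    for i, (x, d) in enumerate(zip(x_eval, d_eval)):
        dist2 = ((x_ref - x) / dta_mm) ** 2       # spatial term
        dose2 = ((d_ref - d) / dd) ** 2           # dose term
        gamma[i] = np.sqrt(np.min(dist2 + dose2)) # best match over reference
    return gamma

# Toy profiles: a 1 mm shifted Gaussian "measurement" of a Gaussian "plan"
x = np.linspace(-50, 50, 501)                        # position, mm
ref = 100.0 * np.exp(-x**2 / (2 * 15.0**2))           # reference dose
meas = 100.0 * np.exp(-(x - 1.0)**2 / (2 * 15.0**2))  # 1 mm positional error
g = gamma_index_1d(x, ref, x, meas, dd_pct=3.0, dta_mm=2.0)
print(f"pass rate (3%/2 mm): {100 * np.mean(g <= 1.0):.1f}%")
```

With a 1 mm shift and a 2 mm DTA criterion every point passes, illustrating why tight criteria and high passing thresholds are needed to expose small MLC errors like those deliberately inserted in the study.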
A strategy for reducing gross errors in the generalized Born models of implicit solvation
Onufriev, Alexey V.; Sigalov, Grigori
2011-01-01
The “canonical” generalized Born (GB) formula [W. C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔGel of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2kBT relative to numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947
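For reference, the canonical GB formula criticized above interpolates between the Born self-energy limit and Coulomb screening. A small sketch of the Still Green function and a pairwise energy term follows; the unit constant and dielectric values are conventional choices, not values taken from the paper.

```python
import numpy as np

def f_gb(r, Ri, Rj):
    """Canonical (Still) GB Green function denominator for two charges a
    distance r apart with effective Born radii Ri, Rj."""
    return np.sqrt(r**2 + Ri * Rj * np.exp(-r**2 / (4.0 * Ri * Rj)))

def pair_energy(qi, qj, r, Ri, Rj, eps_in=1.0, eps_out=78.5):
    """Pairwise GB solvation term, kcal/mol, with charges in e and
    distances/radii in Angstroms (332.06 is Coulomb's constant in
    those units)."""
    return -332.06 * (1.0 / eps_in - 1.0 / eps_out) * qi * qj / f_gb(r, Ri, Rj)

# Limiting behavior: at r = 0 with Ri = Rj = R the denominator reduces to
# the Born radius R; at large r it approaches the Coulomb distance r.
print(f_gb(0.0, 2.0, 2.0))    # → 2.0
print(f_gb(100.0, 2.0, 2.0))  # ≈ 100.0
```

The paper's point is that this single interpolating function captures only one spatial mode of the exact Poisson solution, which is why its proposed replacement also involves gradients of the effective radii.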
Superconducting gravity gradiometer for sensitive gravity measurements. I. Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, H.A.; Paik, H.J.
1987-06-15
Because of the equivalence principle, a global measurement is necessary to distinguish gravity from acceleration of the reference frame. A gravity gradiometer is therefore an essential instrument needed for precision tests of gravity laws and for applications in gravity survey and inertial navigation. Superconductivity and SQUID (superconducting quantum interference device) technology can be used to obtain a gravity gradiometer with very high sensitivity and stability. A superconducting gravity gradiometer has been developed for a null test of the gravitational inverse-square law and space-borne geodesy. Here we present a complete theoretical model of this instrument. Starting from dynamical equations for the device, we derive transfer functions, a common mode rejection characteristic, and an error model of the superconducting instrument. Since a gradiometer must detect a very weak differential gravity signal in the midst of large platform accelerations and other environmental disturbances, the scale factor and common mode rejection stability of the instrument are extremely important in addition to its immunity to temperature and electromagnetic fluctuations. We show how flux quantization, the Meissner effect, and properties of liquid helium can be utilized to meet these challenges.
Experimental Analysis of Dampened Breathing Mode Oscillation on Hall Thruster Performance
2013-03-01
[Fragmentary excerpt, Section 4.5, "Analysis of Discharge RMS Effect on Breathing Mode Amplitude": the report notes that the large error in the data presented prevents many conclusions from being drawn.]
Measuring a Fiber-Optic Delay Line Using a Mode-Locked Laser
NASA Technical Reports Server (NTRS)
Tu, Meirong; McKee, Michael R.; Pak, Kyung S.; Yu, Nan
2010-01-01
The figure schematically depicts a laboratory setup for determining the optical length of a fiber-optic delay line at a precision greater than that obtainable by use of optical time-domain reflectometry or of mechanical measurement of length during the delay-line-winding process. In this setup, the delay line becomes part of the resonant optical cavity that governs the frequency of oscillation of a mode-locked laser. The length can then be determined from frequency-domain measurements, as described below. The laboratory setup is basically an all-fiber ring laser in which the delay line constitutes part of the ring. Another part of the ring - the laser gain medium - is an erbium-doped fiber amplifier pumped by a diode laser at a wavelength of 980 nm. The loop also includes an optical isolator, two polarization controllers, and a polarizing beam splitter. The optical isolator enforces unidirectional lasing. The polarization beam splitter allows light in only one polarization mode to pass through the ring; light in the orthogonal polarization mode is rejected from the ring and utilized as a diagnostic output, which is fed to an optical spectrum analyzer and a photodetector. The photodetector output is fed to a radio-frequency spectrum analyzer and an oscilloscope. The fiber ring laser can generate continuous-wave radiation in non-mode-locked operation or ultrashort optical pulses in mode-locked operation. The mode-locked operation exhibited by this ring is said to be passive in the sense that no electro-optical modulator or other active optical component is used to achieve it. Passive mode locking is achieved by exploiting optical nonlinearity of passive components in such a manner as to obtain ultra-short optical pulses. In this setup, the particular nonlinear optical property exploited to achieve passive mode locking is nonlinear polarization rotation. 
This or any ring laser can support oscillation in multiple modes as long as sufficient gain is present to overcome losses in the ring. When mode locking is achieved, oscillation occurs in all the modes having the same phase and same polarization. The frequency interval between modes, often denoted the free spectral range (FSR), is given by c/nL, where c is the speed of light in vacuum, n is the effective index of refraction of the fiber, and L is the total length of optical path around the ring. Therefore, the length of the fiber-optic delay line, as part of the length around the ring, can be calculated from the FSRs measured with and without the delay line incorporated into the ring. For this purpose, the FSR measurements are made by use of the optical and radio-frequency spectrum analyzers. In experimentation on a 10-km-long fiber-optic delay line, it was found that this setup made it possible to measure the length to within a fractional error of about 3×10^-6, corresponding to a length error of 3 cm. In contrast, measurements by optical time-domain reflectometry and mechanical measurement were found to be much less precise: for optical time-domain reflectometry, the fractional error was found to be no less than 10^-4 (corresponding to a length error of 1 m), and for mechanical measurement, the fractional error was found to be about 10^-2 (corresponding to a length error of 100 m).
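The length recovery described above follows directly from L = c/(n·FSR). A minimal sketch with an assumed effective index and hypothetical FSR readings (the article's actual measured frequencies are not reproduced here):

```python
c = 299_792_458.0          # speed of light in vacuum, m/s
n = 1.468                  # assumed effective index of the fiber

def ring_length(fsr_hz):
    """Total optical path length of the ring implied by the measured FSR."""
    return c / (n * fsr_hz)

# Hypothetical FSR measurements with and without the ~10 km delay line
fsr_without = 2.0e6        # Hz: short ring alone
fsr_with = 2.0257e4        # Hz: ring plus delay line (much longer path)

# The delay-line length is the difference of the two ring lengths
delay_length = ring_length(fsr_with) - ring_length(fsr_without)
print(f"delay line length: {delay_length / 1000:.3f} km")
```

Because the FSR is measured in the frequency domain with spectrum analyzers, the fractional error in the recovered length tracks the fractional error of the frequency measurement, which is how the 3×10^-6 precision arises.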
An a priori solar radiation pressure model for the QZSS Michibiki satellite
NASA Astrophysics Data System (ADS)
Zhao, Qile; Chen, Guo; Guo, Jing; Liu, Jingnan; Liu, Xianglin
2018-02-01
It has been noted that the satellite laser ranging (SLR) residuals of the Quasi-Zenith Satellite System (QZSS) Michibiki satellite orbits show very marked dependence on the elevation angle of the Sun above the orbital plane (i.e., the β angle). It is well recognized that the systematic error is caused by mismodeling of the solar radiation pressure (SRP). Although the error can be reduced by the updated ECOM SRP model, the orbit error is still very large when the satellite switches to orbit-normal (ON) orientation. In this study, an a priori SRP model was established for the QZSS Michibiki satellite to enhance the ECOM model. This model is expressed in ECOM's D, Y, and B axes (DYB) using seven parameters for the yaw-steering (YS) mode, and three additional parameters are used to compensate for the remaining modeling deficiencies, particularly the perturbations in the Y axis, based on a redefined DYB for the ON mode. With the proposed a priori model, QZSS Michibiki's precise orbits over 21 months were determined. SLR validation indicated that the systematic β-angle-dependent error was reduced when the satellite was in the YS mode, and a root mean square (RMS) better than 8 cm was achieved. More importantly, the orbit quality also improved significantly when the satellite was in the ON mode. Relative to the ECOM and adjustable box-wing models, the proposed SRP model showed the best performance in the ON mode, with an RMS of the SLR residuals better than 15 cm, a twofold improvement over ECOM without the a priori model, though still twice as large as in the YS mode.
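The ECOM parameterization that the a priori model augments expresses empirical SRP accelerations in the Sun-oriented D/Y/B frame as a constant plus once-per-revolution harmonics on each axis. A generic nine-parameter sketch follows; the paper's seven-parameter YS model and ON-mode compensation terms are not reproduced, and all numbers are illustrative.

```python
import numpy as np

def ecom_accel(du, p):
    """Empirical CODE orbit model (ECOM) acceleration in the D/Y/B frame.
    du is the satellite's argument of latitude relative to the Sun; p holds
    the nine classic parameters (constant + once-per-rev cos/sin terms)."""
    c, s = np.cos(du), np.sin(du)
    a_d = p["D0"] + p["Dc"] * c + p["Ds"] * s   # along satellite-Sun direction
    a_y = p["Y0"] + p["Yc"] * c + p["Ys"] * s   # along solar-panel axis
    a_b = p["B0"] + p["Bc"] * c + p["Bs"] * s   # completes the right-handed frame
    return np.array([a_d, a_y, a_b])

# Illustrative parameter values in nm/s^2 (not fitted to QZSS data)
p = dict(D0=-100.0, Dc=1.0, Ds=0.5, Y0=0.8, Yc=0.0, Ys=0.0,
         B0=0.3, Bc=2.0, Bs=-1.5)
print(ecom_accel(0.0, p))   # acceleration at du = 0
```

In the ON orientation the geometry underlying this frame breaks down, which is why the paper redefines DYB and adds compensation terms for that mode.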
Carrier recovery methods for a dual-mode modem: A design approach
NASA Technical Reports Server (NTRS)
Richards, C. W.; Wilson, S. G.
1984-01-01
A dual-mode modem with selectable QPSK or 16-QASK modulation schemes is discussed. The theoretical reasoning as well as the practical trade-offs made during the development of the modem are presented, with attention given to the carrier recovery method used for coherent demodulation. Particular attention is given to carrier recovery methods that incur little degradation due to phase error for both QPSK and 16-QASK, while being insensitive to the amplitude characteristic of the 16-QASK modulation scheme. A computer analysis of the degradation in symbol error rate (SER) for QPSK and 16-QASK due to phase error is presented. Results show that an energy increase of roughly 4 dB is needed to maintain a SER of 1×10^-5 for QPSK with 20 deg of phase error and 16-QASK with 7 deg of phase error.
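The sensitivity of SER to residual carrier phase error can be illustrated with a small Monte Carlo sketch for the QPSK branch; the 16-QASK case and the paper's exact operating points are not reproduced, and the symbol count and SNR are arbitrary choices.

```python
import numpy as np

def qpsk_ser(esn0_db, phase_err_deg, n=200_000, seed=1):
    """Monte Carlo symbol error rate of QPSK in AWGN with a static
    carrier phase error (imperfect carrier recovery)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, (n, 2))
    sym = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    es_n0 = 10 ** (esn0_db / 10)
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2 * es_n0)
    # Rotate the constellation by the phase error, add noise, then decide
    r = sym * np.exp(1j * np.deg2rad(phase_err_deg)) + noise
    dec = (np.sign(r.real) + 1j * np.sign(r.imag)) / np.sqrt(2)
    return float(np.mean(dec != sym))

print(qpsk_ser(10.0, 0.0))    # ideal carrier recovery
print(qpsk_ser(10.0, 20.0))   # 20 deg phase error degrades the SER
```

Sweeping the SNR until the degraded curve recrosses a target SER reproduces, in spirit, the kind of energy-penalty analysis the abstract reports.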
Wedell, Douglas H; Moro, Rodrigo
2008-04-01
Two experiments used within-subject designs to examine how conjunction errors depend on the use of (1) choice versus estimation tasks, (2) probability versus frequency language, and (3) conjunctions of two likely events versus conjunctions of likely and unlikely events. All problems included a three-option format verified to minimize misinterpretation of the base event. In both experiments, conjunction errors were reduced when likely events were conjoined. Conjunction errors were also reduced for estimations compared with choices, with this reduction greater for likely conjuncts, an interaction effect. Shifting conceptual focus from probabilities to frequencies did not affect conjunction error rates. Analyses of numerical estimates for a subset of the problems provided support for the use of three general models by participants for generating estimates. Strikingly, the order in which the two tasks were carried out did not affect the pattern of results, supporting the idea that the mode of responding strongly determines the mode of thinking about conjunctions and hence the occurrence of the conjunction fallacy. These findings were evaluated in terms of implications for rationality of human judgment and reasoning.
Finite-time control for nonlinear spacecraft attitude based on terminal sliding mode technique.
Song, Zhankui; Li, Hongxing; Sun, Kaibiao
2014-01-01
In this paper, a fast terminal sliding mode control (FTSMC) scheme with double closed loops is proposed for spacecraft attitude control. The FTSMC laws are included in both an inner control loop and an outer control loop. First, a fast terminal sliding surface (FTSS) is constructed, which can drive the inner-loop and outer-loop tracking errors on the FTSS to converge to zero in finite time. Second, the FTSMC strategy is designed using Lyapunov's method to ensure the occurrence of the sliding motion in finite time, which preserves a fast transient response and improves the tracking accuracy. It is proved that FTSMC guarantees the convergence of the tracking error in both the reaching and sliding phases. Finally, simulation results demonstrate the effectiveness of the proposed control scheme. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Tourism forecasting using modified empirical mode decomposition and group method of data handling
NASA Astrophysics Data System (ADS)
Yahya, N. A.; Samsudin, R.; Shabri, A.
2017-09-01
In this study, a hybrid model combining a modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. The approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial-and-error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasts for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model as benchmarks. Empirical results show that the proposed model produces better forecasts than the benchmark models.
Giardina, M; Castiglia, F; Tomarchio, E
2014-12-01
Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
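For contrast with the fuzzy rule-based index proposed in the paper, the traditional RPN it improves upon is a crisp product of three ordinal scores. A sketch with hypothetical brachytherapy failure modes (names and scores invented for illustration, not taken from the paper's analysis):

```python
# Classic FMECA risk priority number: RPN = severity x occurrence x detection,
# each scored on a 1-10 scale. The paper replaces this crisp product with a
# fuzzy rule-based assessment that can also rank human errors.
failure_modes = {
    "source stuck in applicator":  {"S": 9, "O": 3, "D": 4},
    "wrong dwell time entered":    {"S": 8, "O": 4, "D": 5},
    "catheter length mismeasured": {"S": 7, "O": 5, "D": 6},
}

rpn = {name: v["S"] * v["O"] * v["D"] for name, v in failure_modes.items()}
for name, score in sorted(rpn.items(), key=lambda kv: -kv[1]):
    print(f"{name:30s} RPN = {score}")
```

One criticism the paper addresses is visible even here: very different (S, O, D) triples can yield similar products, so the crisp RPN ranking can mask a high-severity failure mode behind a more frequent but benign one.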
New class of photonic quantum error correction codes
NASA Astrophysics Data System (ADS)
Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.
We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic “cat codes” but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong
2016-01-01
This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions. PMID:27754469
A Modular Low-Complexity ECG Delineation Algorithm for Real-Time Embedded Systems.
Bote, Jose Manuel; Recas, Joaquin; Rincon, Francisco; Atienza, David; Hermida, Roman
2018-03-01
This work presents a new modular and low-complexity algorithm for the delineation of the different ECG waves (QRS, P and T peaks, onsets, and ends). Involving a reduced number of operations per second and having a small memory footprint, this algorithm is intended to perform real-time delineation on resource-constrained embedded systems. The modular design allows the algorithm to automatically adjust the delineation quality at runtime over a wide range of modes and sampling rates, from an ultralow-power mode when no arrhythmia is detected, in which the ECG is sampled at low frequency, to a complete high-accuracy delineation mode in the case of arrhythmia, in which the ECG is sampled at high frequency and all the ECG fiducial points are detected. The delineation algorithm has been adjusted using the QT database, providing very high sensitivity and positive predictivity, and validated with the MIT database. The errors in the delineation of all the fiducial points are below the tolerances given by the Common Standards for Electrocardiography Committee in the high-accuracy mode, except for the P wave onset, for which the algorithm exceeds the agreed tolerances by only a fraction of the sample duration. The computational load for the ultralow-power 8-MHz TI MSP430 series microcontroller ranges from 0.2% to 8.5% according to the mode used.
NASA Astrophysics Data System (ADS)
Volpe, F. A.; Frassinetti, L.; Brunsell, P. R.; Drake, J. R.; Olofsson, K. E. J.
2012-10-01
A new ITER-relevant non-disruptive error field (EF) assessment technique not restricted to low density and thus low beta was demonstrated at the Extrap-T2R reversed field pinch. Resistive Wall Modes (RWMs) were generated and their rotation sustained by rotating magnetic perturbations. In particular, stable modes of toroidal mode number n=8 and 10 and unstable modes of n=1 were used in this experiment. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the RWMs were observed to rotate non-uniformly and be modulated in amplitude (in the case of unstable modes, the observed oscillation was superimposed on the mode growth). This behavior was used to infer the amplitude and toroidal phase of n=1, 8 and 10 EFs. The method was first tested against known, deliberately applied EFs, and then against actual intrinsic EFs. Applying equal and opposite corrections resulted in longer discharges and more uniform mode rotation, indicating good EF compensation. The results agree with a simple theoretical model. Extensions to tearing modes, to the non-uniform plasma response to rotating perturbations, and to tokamaks, including ITER, will be discussed.
Karimi, D; Mondor, T A; Mann, D D
2008-01-01
The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers was used to convey the driving error direction and/or magnitude). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings.
The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.
Determining the refractive index and thickness of thin films from prism coupler measurements
NASA Technical Reports Server (NTRS)
Kirsch, S. T.
1981-01-01
A simple method of determining thin film parameters from mode indices measured using a prism coupler is described. The problem is reduced to performing two least-squares straight-line fits through measured mode indices vs. effective mode number. The slope and y intercept of the line are simply related to the thickness and refractive index of the film, respectively. The approach takes into account both the correlation between and the uncertainty in the individual measurements from all sources of error to give precise error tolerances on the best fit values. Due to the precision of the tolerances, anisotropic films can be identified and characterized.
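The core of the method, a weighted least-squares straight-line fit with propagated uncertainties on slope and intercept, can be sketched as follows. The mode-index values below are made up for illustration; the exact mapping from slope and intercept to film thickness and refractive index is given in the paper, not reproduced here.

```python
import numpy as np

def line_fit_with_errors(x, y, sigma):
    """Weighted least-squares straight-line fit y = a*x + b.

    Returns the slope a, intercept b, and their 1-sigma uncertainties,
    propagated from the per-point measurement errors `sigma`.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    a = (S * Sxy - Sx * Sy) / delta       # slope
    b = (Sxx * Sy - Sx * Sxy) / delta     # intercept
    sigma_a = np.sqrt(S / delta)
    sigma_b = np.sqrt(Sxx / delta)
    return a, b, sigma_a, sigma_b

# Hypothetical mode indices vs. effective mode number for one polarization.
m_eff = np.array([0.0, 1.0, 2.0, 3.0])
n_mode = np.array([2.20, 2.17, 2.14, 2.11])   # made-up measurements
a, b, da, db = line_fit_with_errors(m_eff, n_mode, sigma=0.001 * np.ones(4))
print(a, b)   # slope relates to thickness, intercept to refractive index
```

With two such fits (one per polarization or mode family), the tight error tolerances on `a` and `b` are what allow anisotropy to be resolved.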
NASA Technical Reports Server (NTRS)
Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.
2017-01-01
Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images have significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable for data compression for plasma measurement instruments and can be recommended for future missions.
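The transform stage of such a pipeline can be illustrated with a toy one-level Haar DWT on made-up count data; this is only a sketch of the idea, not the standard's 9/7 wavelet or its bit-plane encoder.

```python
import numpy as np

def haar_dwt1(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt1(approx, detail):
    """Inverse of one Haar level (lossless up to floating-point rounding)."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

counts = np.array([3.0, 3.0, 10.0, 2.0, 0.0, 0.0, 1.0, 5.0])  # toy sky-map row
a, d = haar_dwt1(counts)
print(np.allclose(haar_idwt1(a, d), counts))  # transform itself is invertible
```

Loss in the real encoder enters only when the detail coefficients are quantized or truncated by the bit-plane stage, which is where the image-size limits discussed above come in.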
Multisite Parent-Centered Risk Assessment to Reduce Pediatric Oral Chemotherapy Errors
Walsh, Kathleen E.; Mazor, Kathleen M.; Roblin, Douglas; Biggins, Colleen; Wagner, Joann L.; Houlahan, Kathleen; Li, Justin W.; Keuker, Christopher; Wasilewski-Masker, Karen; Donovan, Jennifer; Kanaan, Abir; Weingart, Saul N.
2013-01-01
Purpose: Observational studies describe high rates of errors in home oral chemotherapy use in children. In hospitals, proactive risk assessment methods help front-line health care workers develop error prevention strategies. Our objective was to engage parents of children with cancer in a multisite study using proactive risk assessment methods to identify how errors occur at home and propose risk reduction strategies. Methods: We recruited parents from three outpatient pediatric oncology clinics in the northeast and southeast United States to participate in failure mode and effects analyses (FMEA). An FMEA is a systematic, team-based proactive risk assessment approach to understanding the ways a process can fail and developing prevention strategies. Steps included diagramming the process, brainstorming and prioritizing failure modes (places where things go wrong), and proposing risk reduction strategies. We focused on home oral chemotherapy administration after a change in dose because prior studies identified this area as high risk. Results: Parent teams consisted of four parents at two of the sites and 10 at the third. Parents developed a 13-step process map, with two to 19 failure modes per step. The highest priority failure modes included miscommunication when receiving instructions from the clinician (caused by conflicting instructions or parent lapses) and unsafe chemotherapy handling at home. Recommended risk reduction strategies included novel uses of technology to improve parent access to information, clinicians, and other parents while at home. Conclusion: Parents of pediatric oncology patients readily participated in a proactive risk assessment method, identifying processes that pose a risk for medication errors involving home oral chemotherapy. PMID:23633976
Stitching-error reduction in gratings by shot-shifted electron-beam lithography
NASA Technical Reports Server (NTRS)
Dougherty, D. J.; Muller, R. E.; Maker, P. D.; Forouhar, S.
2001-01-01
Calculations of the grating spatial-frequency spectrum and the filtering properties of multiple-pass electron-beam writing demonstrate a tradeoff between stitching-error suppression and minimum pitch separation. High-resolution measurements of optical-diffraction patterns show a 25-dB reduction in stitching-error side modes.
Generation of Higher Order Modes in a Rectangular Duct
NASA Technical Reports Server (NTRS)
Gerhold, Carl H.; Cabell, Randolph H.; Brown, Donald E.
2004-01-01
Advanced noise control methodologies to reduce sound emission from aircraft engines take advantage of the modal structure of the noise in the duct. This noise is caused by the interaction of rotor wakes with downstream obstructions such as exit guide vanes. Mode synthesis has been accomplished in circular ducts and current active noise control work has made use of this capability to cancel fan noise. The goal of the current effort is to examine the fundamental process of higher order mode propagation through an acoustically treated, curved duct. The duct cross-section is rectangular to permit greater flexibility in representation of a range of duct curvatures. The work presented is the development of a feedforward control system to generate a user-specified modal pattern in the duct. The multiple-error, filtered-x LMS algorithm is used to determine the magnitude and phase of signal input to the loudspeakers to produce a desired modal pattern at a set of error microphones. Implementation issues, including loudspeaker placement and error microphone placement, are discussed. Preliminary results from a 9-3/8 inch by 21 inch duct, using 12 loudspeakers and 24 microphones, are presented. These results demonstrate the ability of the control system to generate a user-specified mode while suppressing undesired modes.
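A single-channel sketch of the filtered-x LMS update at the heart of such a controller is given below: a toy tonal example with an assumed secondary-path impulse response, not the 12-loudspeaker, 24-microphone multiple-error system of the experiment.

```python
import numpy as np

# Assumed secondary path (loudspeaker -> error microphone) and a perfect model of it.
s_path = np.array([0.0, 0.5, 0.25])
s_hat = s_path.copy()

n_taps, mu = 8, 0.01
w = np.zeros(n_taps)                 # adaptive control filter
x_buf = np.zeros(n_taps)             # reference-signal history
fx_buf = np.zeros(n_taps)            # filtered-reference history
y_hist = np.zeros(len(s_path))       # control-output history
e_hist = []

for n in range(20000):
    x = np.sin(2 * np.pi * 0.05 * n)            # tonal reference (e.g. a duct mode)
    d = np.sin(2 * np.pi * 0.05 * n - 0.3)      # disturbance at the error microphone
    x_buf = np.roll(x_buf, 1); x_buf[0] = x
    y = w @ x_buf                               # loudspeaker drive signal
    y_hist = np.roll(y_hist, 1); y_hist[0] = y
    e = d + s_path @ y_hist                     # residual at the error microphone
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_hat @ x_buf[:len(s_hat)]
    w -= mu * e * fx_buf                        # filtered-x LMS update
    e_hist.append(e)

print(np.mean(np.abs(e_hist[-200:])))           # near zero after convergence
```

The multiple-error version stacks one such update per loudspeaker/microphone pair, with a secondary-path model for every pair, so the same mechanism drives a desired modal pattern at the error microphone array.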
Autonomous Control Modes and Optimized Path Guidance for Shipboard Landing in High Sea States
2015-11-16
In a degraded visual environment, workload during the landing task begins to approach the limits of a human pilot's capability. [The remainder of this excerpt is figure residue: Figure 2, "Approach Trajectory," showing the flight path against ±4 ft, ±8 ft, and ±12 ft landing error bounds; Figure 5, "Open loop system generation," for the heave and yaw axes.]
Use of failure mode effect analysis (FMEA) to improve medication management process.
Jain, Khushboo
2017-03-13
Purpose Medication management is a complex process, at high risk of error with life threatening consequences. The focus should be on devising strategies to avoid errors and make the process self-reliable by ensuring prevention of errors and/or error detection at subsequent stages. The purpose of this paper is to use failure mode effect analysis (FMEA), a systematic proactive tool, to identify the likelihood and the causes for the process to fail at various steps and prioritise them to devise risk reduction strategies to improve patient safety. Design/methodology/approach The study was designed as an observational analytical study of the medication management process in the inpatient area of a multi-speciality hospital in Gurgaon, Haryana, India. A team was formed to study the complex process of medication management in the hospital. The FMEA tool was used. Corrective actions were developed based on the prioritised failure modes, which were implemented and monitored. Findings The distribution of medication errors observed by the team was highest for transcription errors (37 per cent), followed by administration errors (29 per cent), indicating the need to identify the causes and effects of their occurrence. In all, 11 failure modes were identified, of which the top five were prioritised based on the risk priority number (RPN). The process was repeated after corrective actions were taken, which resulted in an average reduction of about 40 per cent, and up to around 60 per cent, in the RPN of the prioritised failure modes. Research limitations/implications FMEA is a time consuming process and requires a multidisciplinary team with a good understanding of the process being analysed. FMEA only helps in identifying the ways a process can fail; it does not eliminate them, and additional efforts are required to develop action plans and implement them.
Frank discussion and agreement among the team members is required not only for successfully conducting FMEA but also for implementing the corrective actions. Practical implications FMEA is an effective proactive risk-assessment tool and a continuous process which can be carried out in phases. The corrective actions taken resulted in a reduction in RPN, subject to further evaluation and use by others, depending on the facility type. Originality/value The application of the tool helped the hospital identify failures in the medication management process, thereby prioritising and correcting them, leading to improvement.
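The RPN bookkeeping behind this kind of prioritisation can be sketched as follows; the scores and before/after values are illustrative, not the paper's data.

```python
# Hypothetical FMEA worksheet: each risk factor is scored on a 1-10 scale.
failure_modes = {
    "transcription error":  {"severity": 8, "occurrence": 7, "detection": 6},
    "administration error": {"severity": 9, "occurrence": 6, "detection": 5},
    "dispensing error":     {"severity": 7, "occurrence": 4, "detection": 4},
}

def rpn(scores):
    """Risk Priority Number: severity x occurrence x detection."""
    return scores["severity"] * scores["occurrence"] * scores["detection"]

ranked = sorted(failure_modes, key=lambda m: rpn(failure_modes[m]), reverse=True)
for mode in ranked:
    print(mode, rpn(failure_modes[mode]))

# Percentage reduction in RPN after a corrective action (illustrative values):
before, after = 336, 134
reduction = 100 * (before - after) / before
print(f"{reduction:.0f}% reduction")
</```

Re-scoring after the corrective actions and recomputing the RPNs is what produces the "about 40 per cent, up to around 60 per cent" reductions reported above.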
How important is mode-coupling in global surface wave tomography?
NASA Astrophysics Data System (ADS)
Mikesell, Dylan; Nolet, Guust; Voronin, Sergey; Ritsema, Jeroen; Van Heijst, Hendrik-Jan
2016-04-01
To investigate the influence of mode coupling for fundamental mode Rayleigh waves with periods between 64 and 174 s, we analysed 3,505,902 phase measurements obtained along minor arc trajectories as well as 2,163,474 phases along major arcs. This is a selection of five frequency bands from the data set of Van Heijst and Woodhouse, extended with more recent earthquakes, that served to define upper mantle S velocity in model S40RTS. Since accurate estimation of the misfits (as represented by χ2) is essential, we used the method of Voronin et al. (GJI 199:276, 2014) to obtain objective estimates of the standard errors in this data set. We adapted Voronin's method slightly to prevent systematic errors along clusters of raypaths from being accommodated by source corrections. This was done by simultaneously analysing multiple clusters of raypaths originating from the same group of earthquakes but traveling in different directions. For the minor arc data, phase errors at the one sigma level range from 0.26 rad at a period of 174 s to 0.89 rad at 64 s. For the major arcs, these errors are roughly twice as high (0.40 and 2.09 rad, respectively). In the subsequent inversion we removed any outliers that could not be fitted at the 3 sigma level in an almost undamped inversion. Using these error estimates and the theory of finite-frequency tomography to include the effects of scattering, we solved for models with χ2 = N (the number of data) both including and excluding the effect of mode coupling between Love and Rayleigh waves. We shall present some dramatic differences between the two models, notably near ocean-continent boundaries (e.g. California) where mode conversions are likely to be largest. But a sharpening of other features, such as cratons and high-velocity blobs in the oceanic domain, is also observed when mode coupling is taken into account.
An investigation of the influence of coupling on azimuthal anisotropy is still under way at the time of writing of this abstract, but the results of this will be included in the presentation.
Human factors process failure modes and effects analysis (HF PFMEA) software tool
NASA Technical Reports Server (NTRS)
Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)
2011-01-01
Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified, where the error is related to the verb used in describing the task. The likelihood of occurrence, detection, and correction of the human error is identified, as is the severity of its effect. Together, the likelihood of occurrence and the severity determine the risk of potential harm. The risk of potential harm is compared with a risk threshold to identify the appropriateness of corrective measures.
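A minimal sketch of the risk-screening step described above is given below; the field names, scales, and threshold are hypothetical, since the abstract does not specify the tool's actual scales.

```python
from dataclasses import dataclass

@dataclass
class HumanError:
    """One potential human error, keyed by the verb describing the activity."""
    verb: str
    likelihood: float   # probability the error occurs and goes uncorrected
    severity: int       # 1 (negligible) .. 5 (catastrophic); assumed scale

    @property
    def risk(self) -> float:
        return self.likelihood * self.severity

RISK_THRESHOLD = 0.5  # hypothetical; corrective measures required above this

errors = [
    HumanError("misread", likelihood=0.2,  severity=4),
    HumanError("omit",    likelihood=0.05, severity=5),
]
for e in errors:
    action = "corrective measures" if e.risk > RISK_THRESHOLD else "accept"
    print(e.verb, round(e.risk, 2), action)
```

The comparison against the threshold is what turns the per-verb error catalogue into a ranked list of activities needing mitigation.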
"First, know thyself": cognition and error in medicine.
Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo
2016-04-01
Although error is an integral part of the world of medicine, physicians have always been little inclined to take their own mistakes into account, and the extraordinary technological progress observed in the last decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding the considerable investment in human and economic resources, has paved the way to new strategies made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical reasoning, which tends to be largely automatic and fast-reactive, and a slow or analytical reasoning, which permits rationally founded answers. One of the features of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the triggered heuristic fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing us and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.
NASA Astrophysics Data System (ADS)
Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-06-01
Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools to evaluate the reliability of systems. Although single failure modes can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN, resulting in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method.
The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
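The fuzzy weighted geometric mean underlying the FWGM RPN can be sketched as follows, approximating arithmetic on triangular fuzzy numbers componentwise; the ratings and weights are illustrative, not from the paper.

```python
import numpy as np

def fwgm_rpn(fuzzy_scores, weights):
    """Fuzzy weighted geometric mean RPN for one failure mode.

    fuzzy_scores: triangular fuzzy numbers (l, m, u) for the risk variables
    (e.g. severity, occurrence, detection); weights sum to 1. Arithmetic on
    triangular numbers is approximated componentwise.
    """
    scores = np.asarray(fuzzy_scores, dtype=float)   # shape (k, 3)
    w = np.asarray(weights, dtype=float)[:, None]
    return np.prod(scores ** w, axis=0)              # (l, m, u) of the FWGM

def defuzzify(tri):
    """Centroid defuzzification of a triangular fuzzy number."""
    return float(sum(tri) / 3.0)

# Illustrative fuzzy ratings: severity, occurrence, detection of one mode.
mode_a = fwgm_rpn([(6, 7, 8), (4, 5, 6), (5, 6, 7)], weights=[1/3, 1/3, 1/3])
print(defuzzify(mode_a))   # crisp FRPN used for ranking
```

In the full method these per-mode FRPNs are combined over the minimum cut sets with Boolean logic and reweighted by the Copula-derived failure probabilities before ranking.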
Simulating the effect of non-linear mode coupling in cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Kiessling, A.; Taylor, A. N.; Heavens, A. F.
2011-09-01
Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy that cosmological parameters can be measured with a given experiment and to optimize the design of experiments. However, the standard approach usually assumes both data and parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimization it is usually assumed that the power-spectrum covariance matrix is diagonal in Fourier space. However, in the low-redshift Universe, non-linear mode coupling will tend to correlate small-scale power, moving information from lower to higher order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naïve Gaussian Fisher matrix forecasts with a maximum likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2D and tomographic shear analysis of a Euclid-like survey. In both cases, we find that the 68 per cent confidence area of the Ωm-σ8 plane increases by a factor of 5. However, the marginal errors increase by just 20-40 per cent. We propose a new method to model the effects of non-linear shear-power mode coupling in the Fisher matrix by approximating the shear-power distribution as a multivariate Gaussian with a covariance matrix derived from the mock weak lensing survey. We find that this approximation can reproduce the 68 per cent confidence regions of the full maximum likelihood analysis in the Ωm-σ8 plane to high accuracy for both 2D and tomographic weak lensing surveys. Finally, we perform a multiparameter analysis of Ωm, σ8, h, ns, w0 and wa to compare the Gaussian and non-linear mode-coupled Fisher matrix contours. 
The 6D volume of the 1σ error contours for the non-linear Fisher analysis is a factor of 3 larger than for the Gaussian case, and the shape of the 68 per cent confidence volume is modified. We propose that future Fisher matrix estimates of cosmological parameter accuracies should include mode-coupling effects.
Synthetic aperture imaging in ultrasound calibration
NASA Astrophysics Data System (ADS)
Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.
2014-03-01
Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.
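The quoted improvements follow directly from the error figures:

```python
def percent_improvement(before, after):
    """Relative error reduction, as quoted in the abstract."""
    return 100.0 * (before - after) / before

fle = percent_improvement(0.21, 0.15)   # fiducial localization error, mm
tre = percent_improvement(2.00, 1.78)   # target registration error, mm
print(round(fle), round(tre))           # -> 29 11
```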
NASA Astrophysics Data System (ADS)
Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang
2018-03-01
In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin, China, respectively, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.
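The two-stage forecast-plus-error-correction idea can be sketched with a stand-in forecaster: a plain moving average replaces the FEEMD/VMD decomposition and CS-optimised ELM of the paper, on a synthetic series.

```python
import numpy as np

def forecast(series, window=3):
    """Stand-in one-step forecaster (moving average); the paper instead
    decomposes the series and forecasts each component with a CS-tuned ELM."""
    return float(np.mean(series[-window:]))

# Synthetic smooth "PM10" series (illustrative only, not real data).
t = np.arange(200)
pm10 = 60 + 10 * np.sin(t / 10.0)

preds, corrected, errors = [], [], []
for n in range(150, 199):
    p = forecast(pm10[:n])                     # stage 1: concentration forecast
    e_hat = forecast(np.array(errors)) if len(errors) >= 3 else 0.0
    preds.append(p)
    corrected.append(p + e_hat)                # stage 2: add the predicted error
    errors.append(pm10[n] - p)                 # record the realised forecast error

actual = pm10[150:199]
def rmse(y):
    return float(np.sqrt(np.mean((np.array(y) - actual) ** 2)))
print(rmse(preds), rmse(corrected))            # error correction lowers the RMSE
```

Because the stage-1 errors are themselves predictable (here, a lagged smooth signal), forecasting them and adding the prediction back recovers much of the lost accuracy, which is the mechanism the paper exploits.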
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
NASA Astrophysics Data System (ADS)
Glück, Martin; Pott, Jörg-Uwe; Sawodny, Oliver
2017-06-01
Adaptive Optics (AO) systems in large telescopes correct not only atmospheric phase disturbances but also telescope structure vibrations induced by wind or telescope motions. Often the additional wavefront error due to mirror vibrations can dominate the disturbance power and contribute significantly to the total tip-tilt Zernike mode error budget. Presently, these vibrations are compensated for by common feedback control laws. However, when observing faint natural guide stars (NGS) at reduced control bandwidth, high-frequency vibrations (>5 Hz) cannot be fully compensated for by feedback control. In this paper, we present an additional accelerometer-based disturbance feedforward control (DFF), which is independent of the NGS wavefront sensor exposure time, to enlarge the “effective servo bandwidth”. The DFF is studied in a realistic AO end-to-end simulation and compared with commonly used suppression concepts. For observation in the faint (>13 mag) NGS regime, we obtain a Strehl ratio two to four times larger than with classical feedback control. The simulation realism is verified with real measurement data from the Large Binocular Telescope (LBT); the application for on-sky testing at the LBT and an implementation at the E-ELT in the MICADO instrument are discussed.
NASA Technical Reports Server (NTRS)
Ballabrera-Poy, J.; Busalacchi, A.; Murtugudde, R.
2000-01-01
A reduced order Kalman Filter, based on a simplification of the Singular Evolutive Extended Kalman (SEEK) filter equations, is used to assimilate observed fields of the surface wind stress, sea surface temperature and sea level into the nonlinear coupled ocean-atmosphere model of Zebiak and Cane. The SEEK filter projects the Kalman Filter equations onto a subspace defined by the eigenvalue decomposition of the error forecast matrix, allowing its application to high dimensional systems. The Zebiak and Cane model couples a linear reduced gravity ocean model with a single vertical mode atmospheric model of Zebiak. The compatibility between the simplified physics of the model and each observed variable is studied separately and together. The results show the ability of the model to represent the simultaneous value of the wind stress, SST and sea level, when the fields are limited to the latitude band 10 deg S - 10 deg N. In this first application of the Kalman Filter to a coupled ocean-atmosphere prediction model, the sea level fields are assimilated in terms of the Kelvin and Rossby modes of the thermocline depth anomaly. An estimation of the error of these modes is derived from the projection of an estimation of the sea level error over such modes. This method gives a value of 12 for the error of the Kelvin amplitude, and 6 m of error for the Rossby component of the thermocline depth. The ability of the method to reconstruct the state of the equatorial Pacific and predict its time evolution is demonstrated. The method is shown to be quite robust for predictions up to six months, and able to predict the onset of the 1997 warm event fifteen months before its occurrence.
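The reduction step of the SEEK filter, retaining only the leading eigenvectors of the forecast-error covariance, can be sketched as follows; the dimensions and observation operator are toy choices, not the coupled ocean-atmosphere setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 3                     # state dimension, retained error modes

# Synthetic forecast-error covariance dominated by r spatial modes.
modes = np.linalg.qr(rng.normal(size=(n, r)))[0]
P_f = modes @ np.diag([9.0, 4.0, 1.0]) @ modes.T + 1e-6 * np.eye(n)

# SEEK reduction: keep only the leading eigenvectors of the error covariance.
evals, evecs = np.linalg.eigh(P_f)
L = evecs[:, -r:] * np.sqrt(evals[-r:])      # reduced square root, n x r

# Kalman gain computed entirely through the r-dimensional basis.
H = np.eye(5, n)                  # toy observation operator: first 5 components
R = 0.1 * np.eye(5)
HL = H @ L
K = L @ HL.T @ np.linalg.inv(HL @ HL.T + R)  # full-size gain from an r-mode basis
print(K.shape)                    # (100, 5)
```

Because `P_f` is represented by its dominant modes, all matrix operations scale with `r` rather than `n`, which is what makes the filter tractable for high-dimensional models.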
Using failure mode and effects analysis to plan implementation of smart i.v. pump technology.
Wetterneck, Tosha B; Skibinski, Kathleen A; Roberts, Tanita L; Kleppin, Susan M; Schroeder, Mark E; Enloe, Myra; Rough, Steven S; Hundt, Ann Schoofs; Carayon, Pascale
2006-08-15
Failure mode and effects analysis (FMEA) was used to evaluate a smart i.v. pump as it was implemented into a redesigned medication-use process. A multidisciplinary team conducted an FMEA to guide the implementation of a smart i.v. pump that was designed to prevent pump programming errors. The smart i.v. pump was equipped with a dose-error reduction system that included a pre-defined drug library in which dosage limits were set for each medication. Monitoring for potential failures and errors occurred for three months after pump implementation. Specific measures were used to determine the success of the actions that were implemented as a result of the FMEA. The FMEA process at the hospital identified key failure modes in the medication process with the use of the old and new pumps, and actions were taken to avoid errors and adverse events. I.V. pump software and hardware design changes were also recommended. Thirteen of the 18 failure modes reported in practice after pump implementation had been identified by the team. A beneficial outcome of FMEA was the development of a multidisciplinary team that provided the infrastructure for safe technology implementation and effective event investigation after implementation. With the continual updating of i.v. pump software and hardware after implementation, FMEA can be an important starting place for safe technology choice and implementation and can produce site experts to follow technology and process changes over time. FMEA was useful in identifying potential problems in the medication-use process with the implementation of new smart i.v. pumps. Monitoring for system failures and errors after implementation remains necessary.
Nada, Masahiro; Nakamura, Makoto; Matsuzaki, Hideaki
2014-01-13
25-Gbit/s error-free operation of an optical receiver is successfully demonstrated against burst-mode optical input signals without preambles. The receiver, with a high-sensitivity avalanche photodiode and burst-mode transimpedance amplifier, exhibits sufficient receiver sensitivity and an extremely quick response suitable for burst-mode operation in 100-Gbit/s optical packet switching.
Silva, Felipe O.; Hemerly, Elder M.; Leite Filho, Waldemar C.
2017-01-01
This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions. PMID:28241494
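The observability analysis described above rests on rank tests of the observability matrix; a toy sketch follows (the 3-state system below is illustrative, not the SINS error model).

```python
import numpy as np

def observability_matrix(A, C):
    """Stack [C; CA; CA^2; ...] up to order n-1."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Toy 3-state system: the third state never couples into the measurement.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0]])

O = observability_matrix(A, C)
n_unobservable = A.shape[0] - np.linalg.matrix_rank(O)
print(n_unobservable)   # number of unobservable modes (1 for this toy system)
```

The paper's point is that a rank deficiency of this kind does not justify dropping states: unobservable directions can still be partially estimable, which is assessed there through the covariance eigenstructure rather than the rank test alone.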
Investigation of an Anomaly Observed in Impedance Eduction Techniques
NASA Technical Reports Server (NTRS)
Watson, W. R.; Jones, M. G.; Parrott, T. L.
2008-01-01
An intensive investigation into the cause of anomalous behavior commonly observed in impedance eduction techniques is performed. The investigation consists of grid refinement studies, detailed evaluation of results at and near anti-resonance frequencies, comparisons of different model results with synthesized and measured data, assessment of optimization techniques, and evaluation of boundary condition effects. Results show that the root cause of the anomalous behavior is the sensitivity of the educed impedance to small errors in the measured termination resistance at frequencies near anti-resonance or cut-on of a higher-order mode. Evidence is presented to show that the common usage of an anechoic, plane wave termination boundary condition in ducts where the "true" termination is reflective may act as a trigger for these anomalies. Replacing the exit impedance boundary condition by an exit pressure condition is shown to reduce the anomalous results.
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.; Voska, Ned (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: describe the mission; define the system; identify human-machine interactions; list human actions; identify potential errors; identify factors that affect error; determine the likelihood of error; determine the potential effects of errors; evaluate risk; and generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
Yang, Chunyong; Xu, Chuang; Ni, Wenjun; Gan, Yu; Hou, Jin; Chen, Shaoping
2017-10-16
A novel scheme is proposed to mitigate atmospheric turbulence effects in free space optical (FSO) communication employing orbital angular momentum (OAM) multiplexing. In this scheme, a Gaussian beam sharing a common path is used as an auxiliary light to obtain the distortion information caused by atmospheric turbulence. After the turbulence, heterodyne coherent detection is employed to realize the turbulence mitigation: with the same turbulence distortion, the OAM beams and the Gaussian beam are utilized as the signal light and the local oscillator light, respectively, so the turbulence distortion is counteracted to a large extent. Meanwhile, a phase matching method is proposed to select a specific OAM mode; the discrimination between neighboring OAM modes is clearly improved by detecting the output photocurrent. Moreover, two methods of beam size adjustment are analyzed to achieve better turbulence-mitigation performance. Numerical results show that the simulated system bit error rate (BER) can reach 10⁻⁵ under strong turbulence.
Updating finite element dynamic models using an element-by-element sensitivity methodology
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Hemez, Francois M.
1993-01-01
A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L sub 2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.
Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R
NASA Astrophysics Data System (ADS)
Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.
2016-12-01
Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper will discuss an additional use of the empirical model which is to estimate the error field in EXTRAP T2R. Two potential methods are discussed that can estimate the error field. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric
2010-04-01
The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate that inverting the film scanning side can produce errors of 17.8% in the gamma index (3%/3 mm criteria) for a head-and-neck intensity-modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.
NASA Astrophysics Data System (ADS)
Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin
2018-04-01
This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without using repetitive calculation of a cost function. To adjust output currents with the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses voltage vectors, as finite control resources, excluding zero voltage vectors which produce the CMVs in the VSI within ±Vdc/2. In a model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the efforts of repeatedly calculating the cost function. And the two non-zero voltage vectors are optimally allocated to make the output current approach the reference current as close as possible. Thus, low CMV, rapid current-following capability and sufficient output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
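The vector-selection step described above can be sketched numerically. The code below is a hedged illustration, not the authors' algorithm: it builds the six active (non-zero) voltage vectors of a two-level VSI in the alpha-beta plane, picks the two active vectors nearest a hypothetical reference vector, and solves for their dwell fractions by least squares; all numeric values are illustrative assumptions:

```python
import numpy as np

# Six active voltage vectors of a two-level VSI in the alpha-beta plane.
# Zero vectors are deliberately excluded, mirroring the paper's strategy of
# bounding the common-mode voltage to +/- Vdc/6.
Vdc = 1.0
active = np.array([[np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)]
                   for k in range(6)]) * (2.0 / 3.0) * Vdc

def pick_two_vectors(v_ref):
    """Select the two active vectors closest in angle to v_ref and
    compute their dwell fractions so the average matches v_ref."""
    angles = np.arctan2(active[:, 1], active[:, 0])
    ref_angle = np.arctan2(v_ref[1], v_ref[0])
    # wrap angle differences into (-pi, pi] before ranking
    order = np.argsort(np.abs(np.angle(np.exp(1j * (angles - ref_angle)))))
    i, j = order[0], order[1]
    A = np.column_stack([active[i], active[j]])
    d = np.linalg.lstsq(A, v_ref, rcond=None)[0]  # dwell fractions
    return (int(i), int(j)), d

(i, j), d = pick_two_vectors(np.array([0.4, 0.2]))
print(i, j, np.round(d, 3))
```

Because only two candidate vectors are evaluated per step, no cost function has to be scanned over all switch states, which is the efficiency point the abstract makes.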
Dippel, Gabriel; Chmielewski, Witold; Mückschel, Moritz; Beste, Christian
2016-11-01
Response inhibition processes are one of the most important executive control functions and have been subject to intense research in cognitive neuroscience. However, knowledge on the neurophysiology and functional neuroanatomy on response inhibition is biased because studies usually employ experimental paradigms (e.g., sustained attention to response task, SART) in which behavior is susceptible to impulsive errors. Here, we investigate whether there are differences in neurophysiological mechanisms and networks depending on the response mode that predominates behavior in a response inhibition task. We do so comparing a SART with a traditionally formatted task paradigm. We use EEG-beamforming in two tasks inducing opposite response modes during action selection. We focus on theta frequency modulations, since these are implicated in cognitive control processes. The results show that a response mode that is susceptible to impulsive errors (response mode used in the SART) is associated with stronger theta band activity in the left temporo-parietal junction. The results suggest that the response modes applied during response inhibition differ in the encoding of surprise signals, or related processes of attentional sampling. Response modes during response inhibition seem to differ in processes necessary to update task representations relevant to behavioral control.
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
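The forecast/analysis cycle for a linear, time-independent system can be carried to steady state in a few lines; the dynamics M, observation operator H, and noise covariances below are toy values chosen for illustration, not the advection or baroclinic wave models of the examples. The sketch shows the steady-state analysis error covariance acquiring a dominant eigenvalue when a weakly damped mode is left unobserved:

```python
import numpy as np

# Toy linear forecast/analysis cycle iterated to steady state.
M = np.diag([0.9, 0.5, 0.99])        # stable dynamics; third mode weakly damped
H = np.array([[1.0, 0.0, 0.0]])      # observe only the first state
Q = 0.01 * np.eye(3)                 # model error covariance
R = np.array([[0.04]])               # observation error covariance

P = np.eye(3)                        # initial analysis error covariance
for _ in range(500):
    Pf = M @ P @ M.T + Q                                  # forecast step
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)        # Kalman gain
    P = (np.eye(3) - K @ H) @ Pf                          # analysis step

eigvals = np.sort(np.linalg.eigvalsh(P))[::-1]
# The unobserved, weakly damped mode dominates the spectrum, so P admits
# a low-rank (here rank-one) approximation.
print(eigvals)
```

Failing to observe the slowly decaying mode inflates its steady-state variance, matching the paper's observation that unobserved growing or weakly damped modes control the leading eigenvectors.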
Q-mode versus R-mode principal component analysis for linear discriminant analysis (LDA)
NASA Astrophysics Data System (ADS)
Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz
2017-05-01
Much of the literature applies Principal Component Analysis (PCA) as a preliminary visualization method, a variable construction method, or both. The focus of PCA can be on the samples (R-mode PCA) or on the variables (Q-mode PCA). Traditionally, R-mode PCA has been the usual approach to reduce high-dimensional data before the application of Linear Discriminant Analysis (LDA) to solve classification problems. The output from PCA consists of two new matrices, known as the loadings and scores matrices. Each matrix can be used to produce a plot: the loadings plot aids identification of important variables, whereas the scores plot presents the spatial distribution of samples on new axes, also known as Principal Components (PCs). Fundamentally, the scores matrix is always the input for building the classification model. A recent paper used Q-mode PCA, but the focus of the analysis was not on the variables but instead on the samples. As a result, the authors exchanged the use of the loadings and scores plots: clustering of samples was studied using the loadings plot, whereas the scores plot was used to identify important manifest variables. The aim of this study is therefore to statistically validate the proposed practice. Evaluation is based on the external error of LDA models as a function of the number of PCs. In addition, bootstrapping was conducted to evaluate the external error of each LDA model. Results show that LDA models produced with PCs from R-mode PCA give logical performance and their external errors are unbiased, whereas those produced with Q-mode PCA show the opposite. We therefore conclude that PCs produced from Q-mode PCA are not statistically stable and should not be applied to problems of classifying samples, but only variables. We hope this paper provides some insights into these disputable issues.
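The R-mode/Q-mode relationship described above can be made concrete through the SVD, which exposes both decompositions at once. This is a minimal sketch on random toy data (not the study's dataset), showing that R-mode scores, the usual LDA inputs, are exactly the left singular vectors scaled by the singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))          # toy data: 30 samples x 5 variables
Xc = X - X.mean(axis=0)               # column-centre before PCA

# R-mode PCA via SVD of the centred data matrix:
# loadings = right singular vectors, scores = sample projections.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt.T                       # 5 x 5, one column per PC
scores = Xc @ loadings                # 30 x 5, the usual input to LDA

# The SVD makes the R-mode/Q-mode duality explicit: Q-mode analyses Xc.T,
# which simply swaps the roles of U and V; scores equal U * s.
assert np.allclose(np.abs(scores), np.abs(U * s))
print(scores.shape)
```

Swapping the two matrices, as the practice under scrutiny does, therefore hands LDA the variable-space eigenvectors instead of sample projections, which is why the resulting external errors behave pathologically.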
[Refractive errors in patients with cerebral palsy].
Mrugacz, Małgorzata; Bandzul, Krzysztof; Kułak, Wojciech; Poppe, Ewa; Jurowski, Piotr
2013-04-01
Ocular changes are common in patients with cerebral palsy (CP), occurring in about 50% of cases; the most common are refractive errors and strabismus. The aim of this paper was to estimate the relationship between refractive errors and neurological pathologies in patients with selected types of CP. Material and methods: The analysis examined refractive errors in patients from two CP groups, spastic diplegia and tetraparesis, with nervous system pathologies taken into account. Results: The study demonstrated correlations between refractive errors and both the type of CP and its severity as classified on the GMFCS scale. Refractive errors were more common in patients with tetraparesis than in those with spastic diplegia. In the spastic diplegia group, myopia and astigmatism were more common, whereas in tetraparesis hyperopia predominated.
NASA Technical Reports Server (NTRS)
Brown, G. S.; Curry, W. J.
1977-01-01
The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal-to-noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar data and on the waveform attitude-induced altitude bias is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive-mode clean vs. clutter AGC calibration problem is analytically resolved, and the use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.
Direct Geolocation of TerraSAR-X Spotlight Mode Image and Error Correction
NASA Astrophysics Data System (ADS)
Zhou, Xiao; Zeng, Qiming; Jiao, Jian; Zhang, Jingfa; Gong, Lixia
2013-01-01
The German TerraSAR-X mission was launched in June 2007, operating a versatile new-generation X-band SAR sensor. Its Spotlight mode provides SAR images at a very high resolution of about 1 m. The product's specified 3-D geolocation accuracy is 1 m according to the official technical report. Achieving this accuracy, however, relies not only on a robust mathematical basis for SAR geolocation but also on thorough knowledge of the error sources and their correction. This research focuses on the geolocation of TerraSAR-X Spotlight images. The mathematical model and solving algorithms are analyzed, and several error sources are investigated and corrected. The effectiveness and accuracy of the approach are verified by experimental results.
NASA Astrophysics Data System (ADS)
Jiang, YuXiao; Guo, PengLiang; Gao, ChengYan; Wang, HaiBo; Alzahrani, Faris; Hobiny, Aatef; Deng, FuGuo
2017-12-01
We present an original self-error-rejecting photonic qubit transmission scheme for both the polarization and spatial states of photon systems transmitted over collective noise channels. In our scheme, we use simple linear-optical elements, including half-wave plates, 50:50 beam splitters, and polarization beam splitters, to convert spatial-polarization modes into different time bins. By using postselection in different time bins, the success probability of obtaining the uncorrupted states approaches 1/4 for single-photon transmission, which is not influenced by the coefficients of noisy channels. Our self-error-rejecting transmission scheme can be generalized to hyperentangled n-photon systems and is useful in practical high-capacity quantum communications with photon systems in two degrees of freedom.
NASA Astrophysics Data System (ADS)
Drake, J. R.; Brunsell, P. R.; Yadikin, D.; Cecconello, M.; Malmberg, J. A.; Gregoratto, D.; Paccagnella, R.; Bolzonella, T.; Manduchi, G.; Marrelli, L.; Ortolani, S.; Spizzo, G.; Zanca, P.; Bondeson, A.; Liu, Y. Q.
2005-07-01
Active feedback control of resistive wall modes (RWMs) has been demonstrated in the EXTRAP T2R reversed-field pinch experiment. The control system includes a sensor consisting of an array of magnetic coils (measuring mode harmonics) and an actuator consisting of a saddle coil array (producing control harmonics). Closed-loop (feedback) experiments using a digital controller based on a real time Fourier transform of sensor data have been studied for cases where the feedback gain was constant and real for all harmonics (corresponding to an intelligent-shell) and cases where the feedback gain could be set for selected harmonics, with both real and complex values (targeted harmonics). The growth of the dominant RWMs can be reduced by feedback for both the intelligent-shell and targeted-harmonic control systems. Because the number of toroidal positions of the saddle coils in the array is half the number of the sensors, it is predicted and observed experimentally that the control harmonic spectrum has sidebands. Individual unstable harmonics can be controlled with real gains. However if there are two unstable mode harmonics coupled by the sideband effect, control is much less effective with real gains. According to the theory, complex gains give better results for (slowly) rotating RWMs, and experiments support this prediction. In addition, open loop experiments have been used to observe the effects of resonant field errors applied to unstable, marginally stable and robustly stable modes. The observed effects of field errors are consistent with the thin-wall model, where mode growth is proportional to the resonant field error amplitude and the wall penetration time for that mode harmonic.
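The sideband effect noted above is ordinary spatial aliasing: when the saddle coil array has half as many toroidal positions as the sensor array, a control harmonic m is indistinguishable from the harmonic N - m. A minimal numeric illustration (toy mode numbers, not the EXTRAP T2R coil counts) using the discrete Fourier transform:

```python
import numpy as np

# A harmonic of toroidal mode number m = 12, sampled at only N = 16
# actuator positions, aliases onto the sideband harmonic N - m = 4.
N, m = 16, 12
theta = 2 * np.pi * np.arange(N) / N
signal = np.cos(m * theta)            # field pattern applied by the coils

spectrum = np.abs(np.fft.rfft(signal)) / N
peak = int(np.argmax(spectrum))
print(peak)  # the energy appears at harmonic 4, the sideband of m = 12
```

Because cos(m*theta_k) and cos((N - m)*theta_k) coincide at the coil positions, feedback intended for one harmonic unavoidably drives its sideband partner, which is why two unstable harmonics coupled this way resist control with purely real gains.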
Beard, B B; Stewart, J R; Shiavi, R G; Lorenz, C H
1995-01-01
Gating methods developed for electrocardiographic-triggered radionuclide ventriculography are being used with nonimaging detectors. These methods have not been compared on the basis of their real-time performance or suitability for determination of load-independent indexes of left ventricular function. This work evaluated the relative merits of different gating methods for nonimaging radionuclide ventriculographic studies, with particular emphasis on their suitability for real-time measurements and the determination of pressure-volume loops. A computer model was used to investigate the relative accuracy of forward gating, backward gating, and phase-mode gating. The durations of simulated left ventricular time-activity curves were randomly varied. Three acquisition parameters were considered: frame rate, acceptance window, and sample size. Twenty-five studies were performed for each combination of acquisition parameters. Hemodynamic and shape parameters from each study were compared with reference parameters derived directly from the random time-activity curves. Backward gating produced the largest errors under all conditions. For both forward gating and phase-mode gating, ejection fraction was underestimated and time to end systole and normalized peak ejection rate were overestimated. For the hemodynamic parameters, forward gating was marginally superior to phase-mode gating. The mean difference in errors between forward and phase-mode gating was 1.47% (SD 2.78%). However, for root mean square shape error, forward gating was several times worse in every case and seven times worse than phase-mode gating on average. Both forward and phase-mode gating are suitable for real-time hemodynamic measurements by nonimaging techniques. The small statistical difference between the methods is not clinically significant. The true shape of the time-activity curve is maintained most accurately by phase-mode gating.
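The core idea of phase-mode gating is to bin each beat's samples by their fractional position within that beat rather than by fixed time offsets, so beats of different duration map onto a common frame axis. The helper below is a hypothetical sketch of that binning, not the study's implementation:

```python
import numpy as np

def phase_mode_gate(beat_samples, n_frames):
    """Bin one beat's samples into n_frames by phase fraction (0..1),
    so beats of different duration share the same frame axis."""
    n = len(beat_samples)
    phase = np.arange(n) / n                         # phase of each sample
    frame_idx = np.minimum((phase * n_frames).astype(int), n_frames - 1)
    frames = np.zeros(n_frames)
    counts = np.zeros(n_frames)
    np.add.at(frames, frame_idx, beat_samples)
    np.add.at(counts, frame_idx, 1)
    return frames / np.maximum(counts, 1)            # mean activity per frame

# Two beats of different duration yield nearly identical frame curves.
short_beat = np.sin(np.linspace(0, np.pi, 20))
long_beat = np.sin(np.linspace(0, np.pi, 35))
a = phase_mode_gate(short_beat, 10)
b = phase_mode_gate(long_beat, 10)
print(np.max(np.abs(a - b)) < 0.1)
```

Because the frame axis follows phase instead of elapsed time, curve shape is preserved across randomly varying beat lengths, consistent with the finding that phase-mode gating gives the smallest root mean square shape error.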
WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kry, S; Dromgoole, L; Alvarez, P
Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000 to present were abstracted for recommendations, including the type of recommendation and the magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions), with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7%, although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly in areas highlighted herein that show a tendency for errors.
Fourier mode analysis of slab-geometry transport iterations in spatially periodic media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E; Zika, M
1999-04-01
We describe a Fourier analysis of the diffusion-synthetic acceleration (DSA) and transport-synthetic acceleration (TSA) iteration schemes for a spatially periodic, but otherwise arbitrarily heterogeneous, medium. Both DSA and TSA converge more slowly in a heterogeneous medium than in a homogeneous medium composed of the volume-averaged scattering ratio. In the limit of a homogeneous medium, our heterogeneous analysis contains eigenvalues of multiplicity two at "resonant" wave numbers. In the presence of material heterogeneities, error modes corresponding to these resonant wave numbers are "excited" more than other error modes. For DSA and TSA, the iteration spectral radius may occur at these resonant wave numbers, in which case the material heterogeneities most strongly affect iterative performance.
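The spectral radius that governs the convergence of such iteration schemes can be estimated numerically by power iteration on the error-propagation operator; a radius below one means each sweep contracts the error. The matrix below is a toy diagonal stand-in, not a DSA or TSA operator:

```python
import numpy as np

def spectral_radius(T, iters=200, seed=0):
    """Estimate the spectral radius of iteration matrix T by power iteration;
    the iteration error e_{k+1} = T e_k decays only if rho(T) < 1."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=T.shape[0])
    for _ in range(iters):
        w = T @ v
        v = w / np.linalg.norm(w)
    return float(np.linalg.norm(T @ v))

# Toy convergent iteration matrix whose slowest error mode decays by 0.6
# per sweep; power iteration locks onto that dominant mode.
T = np.diag([0.6, 0.25, 0.1])
rho = spectral_radius(T)
print(round(rho, 6))  # -> 0.6
```

In the heterogeneous analysis the analogous dominant eigenvalue can sit at a resonant wave number, which is exactly the situation where the material structure degrades iterative performance the most.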
Zhou, Lin; Long, Shitong; Tang, Biao; Chen, Xi; Gao, Fen; Peng, Wencui; Duan, Weitao; Zhong, Jiaqi; Xiong, Zongyuan; Wang, Jin; Zhang, Yuanzhong; Zhan, Mingsheng
2015-07-03
We report an improved test of the weak equivalence principle by using a simultaneous ⁸⁵Rb-⁸⁷Rb dual-species atom interferometer. We propose and implement a four-wave double-diffraction Raman transition scheme for the interferometer, and demonstrate its ability to suppress common-mode phase noise of the Raman lasers after their frequencies and intensity ratios are optimized. The statistical uncertainty of the experimental data for the Eötvös parameter η is 0.8×10⁻⁸ at 3200 s. With various systematic errors corrected, the final value is η = (2.8±3.0)×10⁻⁸. The major uncertainty is attributed to the Coriolis effect.
Heterodyne interferometer with subatomic periodic nonlinearity.
Wu, C M; Lawall, J; Deslattes, R D
1999-07-01
A new, to our knowledge, heterodyne interferometer for differential displacement measurements is presented. It is, in principle, free of periodic nonlinearity. A pair of spatially separated light beams with different frequencies is produced by two acousto-optic modulators, avoiding the main source of periodic nonlinearity in traditional heterodyne interferometers that are based on a Zeeman split laser. In addition, laser beams of the same frequency are used in the measurement and the reference arms, giving the interferometer theoretically perfect immunity from common-mode displacement. We experimentally demonstrated a residual level of periodic nonlinearity of less than 20 pm in amplitude. The remaining periodic error is attributed to unbalanced ghost reflections that drift slowly with time.
Reducing medication errors and increasing patient safety: case studies in clinical pharmacology.
Benjamin, David M
2003-07-01
Today, reducing medication errors and improving patient safety have become common topics of discussion for the president of the United States, federal and state legislators, the insurance industry, pharmaceutical companies, health care professionals, and patients. But this is not news to clinical pharmacologists. Improving the judicious use of medications and minimizing adverse drug reactions have always been key areas of research and study for those working in clinical pharmacology. However, added to the older terms of adverse drug reactions and rational therapeutics, the now politically correct expression of medication error has emerged. Focusing on the word error has drawn attention to "prevention" and what can be done to minimize mistakes and improve patient safety. Webster's New Collegiate Dictionary has several definitions of error, but the one that seems to be most appropriate in the context of medication errors is "an act that through ignorance, deficiency, or accident departs from or fails to achieve what should be done." What should be done is generally known as "the five rights": the right drug, right dose, right route, right time, and right patient. One can make an error of omission (failure to act correctly) or an error of commission (acted incorrectly). This article now summarizes what is currently known about medication errors and translates the information into case studies illustrating common scenarios leading to medication errors. Each case is analyzed to provide insight into how the medication error could have been prevented. "System errors" are described, and the application of failure mode effect analysis (FMEA) is presented to determine the part of the "safety net" that failed. Examples of reengineering the system to make it more "error proof" are presented. An error can be prevented.
However, the practice of medicine, pharmacy, and nursing in the hospital setting is very complicated, and so many steps occur from "pen to patient" that there is a lot to analyze. Implementing safer practices requires developing safer systems. Many errors occur as a result of poor oral or written communications. Enhanced communication skills and better interactions among members of the health care team and the patient are essential. The informed consent process should be used as a patient safety tool, and the patient should be warned about material and foreseeable serious side effects and be told what signs and symptoms should be immediately reported to the physician before the patient is forced to go to the emergency department for urgent or emergency care. Last, reducing medication errors is an ongoing process of quality improvement. Faulty systems must be redesigned, and seamless, computerized integrated medication delivery must be instituted by health care professionals adequately trained to use such technological advances. Sloppy handwritten prescriptions should be replaced by computerized physician order entry, a very effective technique for reducing prescribing/ordering errors, but another far less expensive yet effective change would involve writing all drug orders in plain English, rather than continuing to use the elitists' arcane Latin words and shorthand abbreviations that are subject to misinterpretation. After all, effective communication is best accomplished when it is clear and simple.
NASA Astrophysics Data System (ADS)
Amphawan, Angela; Ghazi, Alaan; Al-dawoodi, Aras
2017-11-01
A free-space optics mode-wavelength division multiplexing (MWDM) system using Laguerre-Gaussian (LG) modes is designed using decision feedback equalization for controlling mode coupling and combating inter symbol interference so as to increase channel diversity. In this paper, a data rate of 24 Gbps is achieved for a FSO MWDM channel of 2.6 km in length using feedback equalization. Simulation results show significant improvement in eye diagrams and bit-error rates before and after decision feedback equalization.
Design and Verification of a Digital Controller for a 2-Piece Hemispherical Resonator Gyroscope.
Lee, Jungshin; Yun, Sung Wook; Rhim, Jaewook
2016-04-20
A hemispherical resonator gyro (HRG) is a Coriolis vibratory gyro (CVG) that measures rotation angle or angular velocity using the Coriolis force acting on a vibrating mass. An HRG can be used as a rate gyro or an integrating gyro without structural modification, simply by changing the control scheme. In this paper, differential control algorithms are designed for a 2-piece HRG. To design a precision controller, the electromechanical modeling and signal processing must first be performed accurately. Therefore, the equations of motion of the HRG resonator under switched harmonic excitations are derived with the Duhamel integral method. Electromechanical modeling of the resonator, electric module, and charge amplifier is performed by considering the mode shape of a thin hemispherical shell. Signal processing and control algorithms are then designed. The multi-flexing scheme of sensing and driving cycles and x-, y-axis switching cycles is appropriate for high-precision, low-maneuverability systems. On the basis of these studies, the differential control scheme can easily reject the common mode errors of the x-, y-axis signals and switch to the rate-integrating mode. In the rate gyro mode, the controller is composed of phase-locked loop (PLL), amplitude, quadrature, and rate control loops. All controllers are designed on the basis of a digital PI controller. The signal processing and control algorithms are verified through Matlab/Simulink simulations. Finally, an FPGA and DSP board implementing these algorithms is verified through experiments.
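Each of the loops above is built on a discrete PI law. A minimal sketch of such a loop on a hypothetical first-order plant follows; the plant constants and gains are invented for illustration and are not the HRG's actual dynamics.

```python
def simulate(kp=0.8, ki=3.0, setpoint=1.0, steps=500, dt=0.01):
    """Discrete PI loop on a hypothetical first-order plant
    dx/dt = -a*x + b*u (illustrative, not the HRG's dynamics)."""
    a, b = 5.0, 5.0
    x, integ = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt            # integral state
        u = kp * err + ki * integ    # PI control law
        x += (-a * x + b * u) * dt   # forward-Euler plant update
    return x

print(simulate())   # settles at the setpoint
```

The integral term drives the steady-state error to zero, which is why the rate and amplitude loops above can hold their references despite constant disturbances.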
Mathematical Writing Errors in Expository Writings of College Mathematics Students
ERIC Educational Resources Information Center
Guce, Ivee K.
2017-01-01
Despite the efforts to confirm the effectiveness of writing in learning mathematics, analysis on common errors in mathematical writings has not received sufficient attention. This study aimed to provide an account of the students' procedural explanations in terms of their commonly committed errors in mathematical writing. Nine errors in…
Exponential error reduction in pretransfusion testing with automation.
South, Susan F; Casina, Tony S; Li, Lily
2012-08-01
Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps ranging from 22 to 39, while automated G&S methods only contained six to eight steps. Corresponding to the number of the process steps that required human interactions, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436 and also demonstrated a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.
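The failure modes and effects analysis behind these figures scores each failure mode for severity, occurrence, and detectability and multiplies them into a risk priority number (RPN). A sketch with entirely hypothetical scores (not the study's worksheets) shows how fewer human-touched steps collapse the total RPN:

```python
def rpn(severity, occurrence, detectability):
    # Risk priority number for one failure mode (1-10 scales each).
    return severity * occurrence * detectability

# Hypothetical failure modes and scores, for illustration only:
# fewer manual steps means fewer failure modes to sum over.
manual_steps = [("label tube", 7, 4, 6),
                ("pipette serum", 8, 3, 7),
                ("read agglutination", 9, 3, 8)]
automated_steps = [("load sample", 7, 2, 2),
                   ("review result", 6, 2, 2)]

total_manual = sum(rpn(s, o, d) for _, s, o, d in manual_steps)
total_auto = sum(rpn(s, o, d) for _, s, o, d in automated_steps)
print(total_manual, total_auto)   # 552 52
```

With these invented scores the automated path carries roughly a 90% lower total RPN, the same order of reduction the study reports.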
Thermodynamics of Anharmonic Systems: Uncoupled Mode Approximations for Molecules
Li, Yi-Pei; Bell, Alexis T.; Head-Gordon, Martin
2016-05-26
The partition functions, heat capacities, entropies, and enthalpies of selected molecules were calculated using uncoupled mode (UM) approximations, where the full-dimensional potential energy surface for internal motions was modeled as a sum of independent one-dimensional potentials for each mode. The computational cost of such approaches scales the same with molecular size as standard harmonic oscillator vibrational analysis using harmonic frequencies (HO hf). To compute thermodynamic properties, a computational protocol for obtaining the energy levels of each mode was established. The accuracy of the UM approximation depends strongly on how the one-dimensional potential of each mode is defined. If the potentials are determined by the energy as a function of displacement along each normal mode (UM-N), the accuracies of the calculated thermodynamic properties are not significantly improved over the HO hf model. Significant improvements can be achieved by constructing potentials for internal rotations and vibrations using the energy surfaces along the torsional coordinates and the remaining vibrational normal modes, respectively (UM-VT). For hydrogen peroxide and its isotopologs at 300 K, UM-VT captures more than 70% of the partition functions on average. By contrast, the HO hf model and UM-N can capture no more than 50%. For a selected test set of C2 to C8 linear and branched alkanes and species with different moieties, the enthalpies calculated using the HO hf model, UM-N, and UM-VT are all quite accurate compared with reference values, though the RMS errors of the HO hf model and UM-N are slightly higher than those of UM-VT. However, the accuracies of the entropy calculations differ significantly between these three models. For the same test set, the RMS error of the standard entropies calculated by UM-VT is 2.18 cal mol^-1 K^-1 at 1000 K. By contrast, the RMS errors obtained using the HO hf model and UM-N are 6.42 and 5.73 cal mol^-1 K^-1, respectively.
For a test set composed of nine alkanes ranging from C5 to C8, the heat capacities calculated with the UM-VT model agree with the experimental values to within an RMS error of 0.78 cal mol^-1 K^-1, which is less than one-third of the RMS error of the HO hf (2.69 cal mol^-1 K^-1) and UM-N (2.41 cal mol^-1 K^-1) models.
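Per mode, the UM machinery reduces to summing Boltzmann weights over one-dimensional energy levels and forming the standard thermodynamic functions from the resulting partition function. The sketch below validates that bookkeeping against the closed-form harmonic-oscillator result; the mode frequency and temperature are arbitrary illustrative values.

```python
import math

kB = 1.380649e-23     # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J*s

def mode_thermo(levels, T):
    """Partition function, mean energy, and entropy of one uncoupled mode
    from its 1-D energy levels (zero of energy at the ground level)."""
    beta = 1.0 / (kB * T)
    w = [math.exp(-beta * e) for e in levels]
    q = sum(w)
    U = sum(e * wi for e, wi in zip(levels, w)) / q
    S = U / T + kB * math.log(q)
    return q, U, S

# Harmonic levels E_n = n*h*nu must reproduce the closed-form HO result
# q = 1/(1 - exp(-h*nu/(kB*T))); the UM schemes feed the same machinery
# with levels of anharmonic 1-D potentials instead.
nu, T = 3.0e12, 300.0                        # illustrative frequency (Hz), K
levels = [n * h * nu for n in range(200)]
q, U, S = mode_thermo(levels, T)
print(q, 1.0 / (1.0 - math.exp(-h * nu / (kB * T))))
```

The total molecular partition function in a UM scheme is then the product of the per-mode factors, so the per-mode level definition (UM-N versus UM-VT) directly controls the accuracy of the entropies quoted above.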
Wagner, James; Schroeder, Heather M.; Piskorowski, Andrew; Ursano, Robert J.; Stein, Murray B.; Heeringa, Steven G.; Colpe, Lisa J.
2017-01-01
Mixed-mode surveys need to determine a number of design parameters that may have a strong influence on costs and errors. In a sequential mixed-mode design with web followed by telephone, one of these decisions is when to switch modes. The web mode is relatively inexpensive but produces lower response rates. The telephone mode complements the web mode in that it is relatively expensive but produces higher response rates. Among the potential negative consequences, delaying the switch from web to telephone may lead to lower response rates if the effectiveness of the prenotification contact materials is reduced by longer time lags, or if the additional e-mail reminders to complete the web survey annoy the sampled person. On the positive side, delaying the switch may decrease the costs of the survey. We evaluate these costs and errors by experimentally testing four different timings (1, 2, 3, or 4 weeks) for the mode switch in a web–telephone survey. This experiment was conducted on the fourth wave of a longitudinal study of the mental health of soldiers in the U.S. Army. We find that the different timings of the switch in the range of 1–4 weeks do not produce differences in final response rates or key estimates but longer delays before switching do lead to lower costs. PMID:28943717
Characteristics of pediatric chemotherapy medication errors in a national error reporting database.
Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R
2007-07-01
Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of the 310 pediatric chemotherapy error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation of an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding method is a development based on the breeding of growing modes technique.
Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates for the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
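The breed-rescale-orthogonalize cycle can be sketched on a toy chaotic system standing in for the mesoscale model. The Lorenz-63 equations here are purely illustrative, and the ensemble-transform step is replaced by a plain QR orthogonalization of the perturbations; all amplitudes and cycle counts are assumptions.

```python
import numpy as np

def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    # Toy chaotic system standing in for the forecast model.
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

def self_breed(x0, n_pert=2, amp=1e-3, cycles=50, steps=20, seed=1):
    rng = np.random.default_rng(seed)
    P = amp * rng.standard_normal((3, n_pert))   # initial perturbations
    x = x0.copy()
    for _ in range(cycles):
        # Propagate the control state and each perturbed state forward.
        xc = x.copy()
        for _ in range(steps):
            xc = lorenz_step(xc)
        grown = np.empty_like(P)
        for j in range(n_pert):
            xp = x + P[:, j]
            for _ in range(steps):
                xp = lorenz_step(xp)
            grown[:, j] = xp - xc                # grown perturbation
        # Orthogonalize in the ensemble subspace and rescale to the
        # initial amplitude before the next breeding cycle.
        Q, _ = np.linalg.qr(grown)
        P = amp * Q
        x = xc
    return x, P

x, P = self_breed(np.array([1.0, 1.0, 20.0]))
print(P.T @ P)    # approximately amp**2 times the identity
```

After many cycles the perturbation columns approximate the fastest-growing directions (local Lyapunov vectors) under the chosen norm, while the orthogonalization keeps them from collapsing onto the leading one.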
Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors
2017-05-01
Yarnall, Timothy M.; Geisler, David J.; Schieler, Curt M.
... Massachusetts Avenue, Cambridge, MA 02139, USA
Free-space coupling to photonic lanterns is more tolerant to tilt errors and F-number mismatch than ... these errors. Photonic lanterns provide a means for transitioning from the free-space regime to the single-mode fiber (SMF) regime by
De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets
NASA Astrophysics Data System (ADS)
Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.
2017-08-01
The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
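The de-biasing recipe (project both snapshot matrices onto the leading right singular subspace of the augmented matrix, then run standard DMD) can be sketched as follows. The noisy 2-D rotation is an illustrative synthetic test case, not the paper's data, and the noise level and snapshot count are assumptions.

```python
import numpy as np

def dmd(X, Y, r):
    # Standard projected DMD: fit Y ~ A X and eigendecompose A in the
    # rank-r POD basis of X.
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s
    return np.linalg.eigvals(A_tilde)

def tdmd(X, Y, r):
    # De-biased (total) DMD: project snapshots onto the leading right
    # singular subspace of the augmented matrix [X; Y], treating the
    # noise in X and Y symmetrically, then apply standard DMD.
    _, _, Vh = np.linalg.svd(np.vstack([X, Y]), full_matrices=False)
    proj = Vh[:r].conj().T @ Vh[:r]
    return dmd(X @ proj, Y @ proj, r)

# Synthetic check: a noisy 2-D rotation whose true eigenvalues lie on the
# unit circle; sensor noise biases standard DMD toward |lambda| < 1.
rng = np.random.default_rng(0)
th = 0.3
A = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
x = np.empty((2, 1001))
x[:, 0] = [1.0, 0.0]
for k in range(1000):
    x[:, k + 1] = A @ x[:, k]
xn = x + 0.1 * rng.standard_normal(x.shape)      # sensor noise
lam_d = dmd(xn[:, :-1], xn[:, 1:], 2)
lam_t = tdmd(xn[:, :-1], xn[:, 1:], 2)
print(np.abs(lam_d), np.abs(lam_t))
```

Because only X enters the projection in standard DMD, its noise is absorbed asymmetrically and shrinks the eigenvalue magnitudes; the augmented projection removes most of that shrinkage.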
Force Analysis and Energy Operation of Chaotic System of Permanent-Magnet Synchronous Motor
NASA Astrophysics Data System (ADS)
Qi, Guoyuan; Hu, Jianbing
2017-12-01
The disadvantage of a nondimensionalized model of a permanent-magnet synchronous motor (PMSM) is identified. The original PMSM model is transformed into a Kolmogorov system to aid dynamic force analysis. The vector field of the PMSM is analogous to a force field including four types of torque: inertial, internal, dissipative, and generalized external. Using the idea of feedback, the error torque between the external torque and the dissipative torque is identified. A pitchfork bifurcation analysis of the PMSM is performed. Four forms of energy are identified for the system: kinetic, potential, dissipative, and supplied. Physical interpretations of the decomposition of force and of the energy exchange are given. The Casimir energy is stored energy, and its rate of change is the error power between the dissipative energy and the energy supplied to the motor. Error torque and error power influence the different types of dynamic modes. The Hamiltonian energy and Casimir energy are compared to find the function of each in producing the dynamic modes. A supremum bound for the chaotic attractor is proposed using the error power and a Lagrange multiplier.
Faerber, Julia; Cummins, Gerard; Pavuluri, Sumanth Kumar; Record, Paul; Rodriguez, Adrian R Ayastuy; Lay, Holly S; McPhillips, Rachael; Cox, Benjamin F; Connor, Ciaran; Gregson, Rachael; Clutton, Richard Eddie; Khan, Sadeque Reza; Cochran, Sandy; Desmulliez, Marc P Y
2018-02-01
This paper describes the design, fabrication, packaging, and performance characterization of a conformal helix antenna created on the outside of a capsule endoscope designed to operate at a carrier frequency of 433 MHz within human tissue. Wireless data transfer was established between the integrated capsule system and an external receiver. The telemetry system was tested within a tissue phantom and in vivo porcine models. Two different types of transmission modes were tested. The first mode, replicating normal operating conditions, used data packets at a steady power level of 0 dBm while the capsule was being withdrawn at a steady rate from the small intestine. The second mode, replicating the worst-case clinical scenario of capsule retention within the small bowel, sent data with stepwise increasing power levels of -10, 0, 6, and 10 dBm, with the capsule fixed in position. The temperature of the tissue surrounding the external antenna was monitored at all times using thermistors embedded within the capsule shell to observe potential safety issues. The recorded data showed, for both modes of operation, low-error transmission with a 10^-3 packet error rate and a 10^-5 bit error rate, and no temperature increase of the tissue according to IEEE standards.
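The two reported error rates are mutually consistent under the usual independent-bit-error model, in which a packet fails if any of its bits does. The packet length below is an assumed round number for illustration, not a figure from the paper.

```python
# For independent bit errors, a packet survives only if every bit does:
# PER = 1 - (1 - BER)**n_bits.  A ~100-bit packet at BER 1e-5 gives a
# PER near 1e-3, the same ratio reported above.
def packet_error_rate(ber, n_bits):
    return 1.0 - (1.0 - ber) ** n_bits

per = packet_error_rate(1e-5, 100)
print(per)
```

For small BER this is well approximated by PER = n_bits * BER, which is a quick sanity check when reading telemetry error statistics.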
Reduction of ZTD outliers through improved GNSS data processing and screening strategies
NASA Astrophysics Data System (ADS)
Stepniak, Katarzyna; Bock, Olivier; Wielgosz, Pawel
2018-03-01
Though Global Navigation Satellite System (GNSS) data processing has been significantly improved over the years, it is still commonly observed that zenith tropospheric delay (ZTD) estimates contain many outliers which are detrimental to meteorological and climatological applications. In this paper, we show that ZTD outliers in double-difference processing are mostly caused by sub-daily data gaps at reference stations, which cause disconnections of clusters of stations from the reference network and common mode biases due to the strong correlation between stations in short baselines. They can reach a few centimetres in ZTD and usually coincide with a jump in formal errors. The magnitude and sign of these biases are impossible to predict because they depend on different errors in the observations and on the geometry of the baselines. We elaborate and test a new baseline strategy which solves this problem and significantly reduces the number of outliers compared to the standard strategy commonly used for positioning (e.g. determination of national reference frame) in which the pre-defined network is composed of a skeleton of reference stations to which secondary stations are connected in a star-like structure. The new strategy is also shown to perform better than the widely used strategy maximizing the number of observations available in many GNSS programs. The reason is that observations are maximized before processing, whereas the final number of used observations can be dramatically lower because of data rejection (screening) during the processing. The study relies on the analysis of 1 year of GPS (Global Positioning System) data from a regional network of 136 GNSS stations processed using Bernese GNSS Software v.5.2. A post-processing screening procedure is also proposed to detect and remove a few outliers which may still remain due to short data gaps. It is based on a combination of range checks and outlier checks of ZTD and formal errors. 
The accuracy of the final screened GPS ZTD estimates is assessed by comparison to ERA-Interim reanalysis.
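A screening of that kind can be sketched as a range check on the ZTD values and formal errors, followed by a robust outlier check against the series median. All thresholds here are illustrative stand-ins, not the paper's calibrated values.

```python
import numpy as np

def screen_ztd(ztd, sigma, zmin=1.0, zmax=3.0, smax=0.01, k=5.0):
    """Range check on ZTD values (metres) and formal errors, then a
    robust k-sigma outlier check against the series median.
    Thresholds are illustrative, not the paper's calibrated values."""
    ztd, sigma = np.asarray(ztd), np.asarray(sigma)
    ok = (ztd > zmin) & (ztd < zmax) & (sigma < smax)
    med = np.median(ztd[ok])
    mad = np.median(np.abs(ztd[ok] - med)) + 1e-9   # robust spread
    ok &= np.abs(ztd - med) < k * 1.4826 * mad
    return ok

ztd = [2.40, 2.41, 2.39, 2.47, 2.95, 2.42]          # metres
sig = [0.003, 0.003, 0.004, 0.003, 0.020, 0.003]    # formal errors
print(screen_ztd(ztd, sig))
```

Note how the one suspect epoch is caught twice: its formal error jumps above the cap, and its ZTD value sits far from the robust center, mirroring the observation above that outliers usually coincide with a jump in formal errors.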
Closed-loop focal plane wavefront control with the SCExAO instrument
NASA Astrophysics Data System (ADS)
Martinache, Frantz; Jovanovic, Nemanja; Guyon, Olivier
2016-09-01
Aims: This article describes the implementation of a focal-plane-based wavefront control loop on the high-contrast imaging instrument SCExAO (Subaru Coronagraphic Extreme Adaptive Optics). The sensor relies on the Fourier analysis of conventional focal-plane images acquired after an asymmetric mask is introduced in the pupil of the instrument. Methods: This absolute sensor is used here in a closed loop to compensate for the non-common path errors that normally affect any imaging system relying on an upstream adaptive optics system. This specific implementation was used to control low-order modes corresponding to eight Zernike modes (from focus to spherical). Results: This loop was successfully run on-sky at the Subaru Telescope and is used to offset the SCExAO deformable mirror shape used as a zero point by the high-order wavefront sensor. The paper details the range of errors within which this wavefront-sensing approach can operate and explores the impact of saturation of the data and how it can be bypassed, at a cost in performance. Conclusions: Beyond this application, because of its low hardware impact, the asymmetric pupil Fourier wavefront sensor (APF-WFS) can easily be ported to a wide variety of wavefront sensing contexts, for ground-based as well as space-borne telescopes, and for telescope pupils that can be continuous, segmented, or even sparse. The technique is powerful because it measures the wavefront where it really matters, at the level of the science detector.
ECG fiducial point extraction using switching Kalman filter.
Akhbari, Mahsa; Ghahjaverestan, Nasim Montazeri; Shamsollahi, Mohammad B; Jutten, Christian
2018-04-01
In this paper, we propose a novel method for extracting fiducial points (FPs) of beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, according to McSharry's model, ECG waveforms (P-wave, QRS complex, and T-wave) are modeled with Gaussian functions, and ECG baselines are modeled with first-order autoregressive models. In the proposed method, a discrete state variable called the "switch" is considered that affects only the observation equations. We denote by a mode a specific observation equation; the switch changes among 7 modes corresponding to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated which relates each part of the ECG signal to the mode with the maximum probability. ECG FPs are found from the estimated path. For performance evaluation, the Physionet QT database is used, and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS), and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e. less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained using the other methods. The proposed method achieves lower RMSE and smaller variability than the others. Copyright © 2018 Elsevier B.V. All rights reserved.
Yang, Yana; Hua, Changchun; Guan, Xinping
2016-03-01
Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of such teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; for example, telesurgery needs satisfactorily high speed and high-precision control to guarantee the patient's health status. To obtain satisfactory performance, error constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, several high performances, such as high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence (i.e., the synchronization errors converge to zero as time goes to infinity) can be achieved with error constrained control. It is clear that finite-time convergence is more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for teleoperation systems with constrained position errors. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are also presented to show the effectiveness of the proposed method.
New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction
NASA Astrophysics Data System (ADS)
Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.
2017-12-01
Estimates of mass change from GRACE spherical harmonic solutions have north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters such as decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (the JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the filtered GRACE spherical harmonics (SH) results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both the JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any postprocessing, the noise and errors in spherical harmonic solutions introduce very clear high-frequency components in the spatial domain. By removing these high-frequency components while preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes which are in good agreement with the leakage-corrected (forward modeled) SH results.
Common but unappreciated sources of error in one, two, and multiple-color pyrometry
NASA Technical Reports Server (NTRS)
Spjut, R. Erik
1988-01-01
The most common sources of error in optical pyrometry are examined. They can be classified as either noise and uncertainty errors, stray radiation errors, or speed-of-response errors. Through judicious choice of detectors and optical wavelengths the effect of noise errors can be minimized, but one should strive to determine as many of the system properties as possible. Careful consideration of the optical-collection system can minimize stray radiation errors. Careful consideration must also be given to the slowest elements in a pyrometer when measuring rapid phenomena.
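In two-color pyrometry, for instance, temperature follows from the ratio of intensities at two wavelengths under the Wien approximation, and the gray-body assumption on the emissivity ratio is exactly the kind of easily unappreciated error source discussed above. A round-trip sketch with illustrative wavelengths:

```python
import math

C2 = 1.4388e-2   # second radiation constant, m*K

def two_color_temperature(i1, i2, lam1, lam2, eps_ratio=1.0):
    """Temperature from the intensity ratio at two wavelengths under the
    Wien approximation I = eps * c1 * lam**-5 * exp(-C2/(lam*T)).
    eps_ratio = eps1/eps2; assuming it equals 1 (gray body) is a common
    hidden error source."""
    num = -C2 * (1.0 / lam1 - 1.0 / lam2)
    den = math.log(i1 / i2) - math.log(eps_ratio) - 5.0 * math.log(lam2 / lam1)
    return num / den

# Round trip: synthesize Wien intensities at 2000 K and recover T.
T_true, lam1, lam2 = 2000.0, 0.65e-6, 0.90e-6    # K, m (illustrative)
wien = lambda lam: lam ** -5 * math.exp(-C2 / (lam * T_true))
T_est = two_color_temperature(wien(lam1), wien(lam2), lam1, lam2)
print(T_est)
```

Passing an eps_ratio different from the true one into the same function quantifies the temperature bias introduced by the gray-body assumption.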
NASA Astrophysics Data System (ADS)
Brunsell, P. R.; Olofsson, K. E. J.; Frassinetti, L.; Drake, J. R.
2007-10-01
Experiments in the EXTRAP T2R reversed field pinch [P. R. Brunsell, H. Bergsåker, M. Cecconello et al., Plasma Phys. Control. Fusion 43, 1457 (2001)] on feedback control of m=1 resistive wall modes (RWMs) are compared with simulations using the cylindrical linear magnetohydrodynamic model, including the dynamics of the active coils and power amplifiers. Stabilization of the main RWMs (n=-11,-10,-9,-8,+5,+6) is shown using modest loop gains of the order G ~ 1. However, other marginally unstable RWMs (n=-2,-1,+1,+2) driven by external field errors are only partially canceled at these gains. The experimental system stability limit is confirmed by simulations showing that the latency of the digital controller (~50 μs) is degrading the system gain margin. The transient response is improved with a proportional-plus-derivative controller, and the steady-state error is improved with a proportional-plus-integral controller. Suppression of all modes is obtained at high gain G ~ 10 using a proportional-plus-integral-plus-derivative controller.
Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel
2014-01-01
Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
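The relative risks quoted above follow the standard count-based estimator with a confidence interval computed on the log scale. The group sizes below are hypothetical, chosen only to reproduce the quoted 43% and 28% rates; the abstract does not give the actual denominators.

```python
import math

def relative_risk(e1, n1, e2, n2):
    """RR of an event in group 1 vs group 2 (e events out of n), with a
    95% CI computed via the usual log-scale approximation."""
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical denominators reproducing the quoted 43% vs 28% rates.
rr, lo, hi = relative_risk(43, 100, 28, 100)
print(rr, lo, hi)
```

With the study's true group sizes the interval would differ, but the point estimate near 1.53 matches the reported RR for black versus white patients.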
NASA Technical Reports Server (NTRS)
Marshall, Paul; Carts, Marty; Campbell, Art; Reed, Robert; Ladbury, Ray; Seidleck, Christina; Currie, Steve; Riggs, Pam; Fritz, Karl; Randall, Barb
2004-01-01
A viewgraph presentation that reviews recent SiGe bit error test data for different commercially available high speed SiGe BiCMOS chips that were subjected to various levels of heavy ion and proton radiation. Results for the tested chips at different operating speeds are displayed in line graphs.
Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa
2013-01-01
To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
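The error-rate and relative-risk-reduction figures above follow directly from the stated definitions; a small sketch with hypothetical numbers, not the study's data:

```python
def error_rate(errors, opportunities):
    """Error rate as defined in the study: errors per 100 opportunities."""
    return 100.0 * errors / opportunities

def relative_risk_reduction(rate_before, rate_after):
    """RRR (%): fractional drop in the error rate after the intervention."""
    return 100.0 * (rate_before - rate_after) / rate_before
```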
[Epidemiology of refractive errors].
Wolfram, C
2017-07-01
Refractive errors are very common and can lead to severe pathological changes in the eye. This article analyzes the epidemiology of refractive errors in the general population in Germany and worldwide, describes common definitions for refractive errors, and outlines the clinical characteristics of pathological changes. Refractive errors differ between age groups, both because refraction changes over the lifetime and because of generation-specific factors. Current research on the etiology of refractive errors has strengthened the evidence for environmental influences, which has led to new strategies for the prevention of refractive pathologies.
Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L
2010-02-01
This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
El-Banna, Adel I.; Naeem, Marwa A.
2016-01-01
This research work aimed at making use of Machine Translation to help students avoid some syntactic, semantic and pragmatic common errors in translation from English into Arabic. Participants were a hundred and five freshmen who studied the "Translation Common Errors Remedial Program" prepared by the researchers. A testing kit that…
NASA Technical Reports Server (NTRS)
Li, Yue (Inventor); Bruck, Jehoshua (Inventor)
2018-01-01
A data device includes a memory having a plurality of memory cells configured to store data values in accordance with an optional predetermined rank modulation scheme, and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycles count.
General Aviation Avionics Statistics.
1980-12-01
The survey was designed to produce standard errors on these variables at levels specified by the FAA; no controls were placed on the standard errors of the non-design variables. Avionics covered include the transponder (with encoding requirement and Mode C automatic altitude reporting capability), two-way radio, and VOR or TACAN receiver.
Schulz, Christian M; Burden, Amanda; Posner, Karen L; Mincer, Shawn L; Steadman, Randolph; Wagner, Klaus J; Domino, Karen B
2017-08-01
Situational awareness errors may play an important role in the genesis of patient harm. The authors examined closed anesthesia malpractice claims for death or brain damage to determine the frequency and type of situational awareness errors. Surgical and procedural anesthesia death and brain damage claims in the Anesthesia Closed Claims Project database were analyzed. Situational awareness error was defined as failure to perceive relevant clinical information, failure to comprehend the meaning of available information, or failure to project, anticipate, or plan. Patient and case characteristics, primary damaging events, and anesthesia payments in claims with situational awareness errors were compared to other death and brain damage claims from 2002 to 2013. Anesthesiologist situational awareness errors contributed to death or brain damage in 198 of 266 claims (74%). Respiratory system damaging events were more common in claims with situational awareness errors (56%) than other claims (21%, P < 0.001). The most common specific respiratory events in error claims were inadequate oxygenation or ventilation (24%), difficult intubation (11%), and aspiration (10%). Payments were made in 85% of situational awareness error claims compared to 46% in other claims (P = 0.001), with no significant difference in payment size. Among 198 claims with anesthesia situational awareness error, perception errors were most common (42%), whereas comprehension errors (29%) and projection errors (29%) were relatively less common. Situational awareness error definitions were operationalized for reliable application to real-world anesthesia cases. Situational awareness errors may have contributed to catastrophic outcomes in three quarters of recent anesthesia malpractice claims. Situational awareness errors resulting in death or brain damage remain prevalent causes of malpractice claims in the 21st century.
Medication errors in anesthesia: unacceptable or unavoidable?
Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra
Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, the issue needs attention on a priority basis since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not succeed until a change in the existing protocols and system is incorporated. Often drug errors that occur cannot be reversed; the best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or a preconception of the dose, pump misuse or dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems such as VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.
Mi, Chris; Li, Siqi
2017-01-31
A bidirectional AC-DC converter is presented with reduced passive component size and common mode electro-magnetic interference. The converter includes an improved input stage formed by two coupled differential inductors, two coupled common and differential inductors, one differential capacitor and two common mode capacitors. With this input structure, the volume, weight and cost of the input stage can be reduced greatly. Additionally, the input current ripple and common mode electro-magnetic interference can be greatly attenuated, so lower switching frequency can be adopted to achieve higher efficiency.
Security evaluation of the quantum key distribution system with two-mode squeezed states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osaki, M.; Ban, M.
2003-08-01
The quantum key distribution (QKD) system with two-mode squeezed states has been demonstrated by Pereira et al. [Phys. Rev. A 62, 042311 (2000)]. They evaluate the security of the system based on the signal-to-noise ratio attained by a homodyne detector. In this paper, we discuss its security based on the error probability when it is attacked individually by an eavesdropper using unambiguous or error-optimum detection. The influence of energy loss in the transmission channels is also taken into account. It is shown that the QKD system is secure under these conditions.
Sliding mode control for Mars entry based on extended state observer
NASA Astrophysics Data System (ADS)
Lu, Kunfeng; Xia, Yuanqing; Shen, Ganghui; Yu, Chunmei; Zhou, Liuyu; Zhang, Lijun
2017-11-01
This paper addresses a high-precision Mars entry guidance and control approach via sliding mode control (SMC) and an extended state observer (ESO). First, a differential flatness (DF) approach is applied to the dynamic equations of the entry vehicle to represent the state variables more conveniently. Then, the presented SMC law guarantees finite-time convergence of the tracking error while requiring no information on the high uncertainties, which are estimated by the ESO; a rigorous proof of tracking-error convergence is given. Finally, Monte Carlo simulation results demonstrate the effectiveness of the suggested approach.
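As a rough, self-contained illustration of the sliding-mode property used here (finite-time convergence of the tracking error despite a bounded, unknown disturbance), the toy double-integrator below is an invented example; its gains, horizon, and disturbance are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal sketch of sliding mode control for a double integrator
# e'' = u + d(t) with an unknown bounded disturbance d.
def simulate(T=5.0, dt=1e-3, lam=2.0, K=3.0):
    e, de = 1.0, 0.0                      # tracking error and its rate
    for k in range(int(T / dt)):
        d = 0.5 * np.sin(10 * k * dt)     # bounded disturbance, |d| <= 0.5
        s = de + lam * e                  # sliding surface s = e' + lam*e
        u = -lam * de - K * np.sign(s)    # SMC law drives s to 0 (needs K > |d|)
        de += (u + d) * dt                # integrate the error dynamics
        e += de * dt
    return e, de
```

Once the state reaches the surface s = 0, the error decays exponentially at rate lam regardless of the disturbance, which is the SMC robustness property the paper exploits.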
NASA Astrophysics Data System (ADS)
Young, A. J.; Kuiken, T. A.; Hargrove, L. J.
2014-10-01
Objective. The purpose of this study was to determine the contribution of electromyography (EMG) data, in combination with a diverse array of mechanical sensors, to locomotion mode intent recognition in transfemoral amputees using powered prostheses. Additionally, we determined the effect of adding time history information using a dynamic Bayesian network (DBN) for both the mechanical and EMG sensors. Approach. EMG signals from the residual limbs of amputees have been proposed to enhance pattern recognition-based intent recognition systems for powered lower limb prostheses, but mechanical sensors on the prosthesis—such as inertial measurement units, position and velocity sensors, and load cells—may be just as useful. EMG and mechanical sensor data were collected from 8 transfemoral amputees using a powered knee/ankle prosthesis over basic locomotion modes such as walking, slopes and stairs. An offline study was conducted to determine the benefit of different sensor sets for predicting intent. Main results. EMG information was not as accurate alone as mechanical sensor information (p < 0.05) for any classification strategy. However, EMG in combination with the mechanical sensor data did significantly reduce intent recognition errors (p < 0.05) both for transitions between locomotion modes and steady-state locomotion. The sensor time history (DBN) classifier significantly reduced error rates compared to a linear discriminant classifier for steady-state steps, without increasing the transitional error, for both EMG and mechanical sensors. Combining EMG and mechanical sensor data with sensor time history reduced the average transitional error from 18.4% to 12.2% and the average steady-state error from 3.8% to 1.0% when classifying level-ground walking, ramps, and stairs in eight transfemoral amputee subjects. Significance. 
These results suggest that a neural interface in combination with time history methods for locomotion mode classification can enhance intent recognition performance; this strategy should be considered for future real-time experiments.
Zeraatchi, Alireza; Talebian, Mohammad-Taghi; Nejati, Amir; Dashti-Khavidaki, Simin
2013-07-01
Emergency departments (EDs) are characterized by the simultaneous care of multiple patients with various medical conditions. Because of the large number of patients with complex diseases, the speed and complexity of medication use, and work in an understaffed and crowded environment, medication errors are commonly perpetrated by emergency care providers. This study was designed to evaluate the incidence of medication errors among patients attending an ED in a teaching hospital in Iran. In this cross-sectional study, a total of 500 patients attending the ED were randomly assessed for the incidence and types of medication errors. Factors related to medication errors, such as working shift, weekday and the schedule of the trainees' educational program, were also evaluated. Nearly 22% of patients experienced at least one medication error. The rates of medication errors were 0.41 errors per patient and 0.16 errors per ordered medication. The frequency of medication errors was higher in men, middle-aged patients, the first weekdays, night-time work schedules and the first semester of the educational year of new junior emergency medicine residents. More than 60% of errors were prescription errors by physicians, and the remainder were transcription or administration errors by nurses. More than 35% of the prescribing errors happened during the selection of drug dose and frequency. The most common medication errors by nurses during administration were omission errors (16.2%) followed by unauthorized drug (6.4%). Most of the medication errors happened with anticoagulants and thrombolytics (41.2%), followed by antimicrobial agents (37.7%) and insulin (7.4%). In this study, at least one-fifth of the patients attending the ED experienced medication errors resulting from multiple factors. The more common prescription errors happened during the ordering of drug dose and frequency; the more common administration errors were drug omission or unauthorized drug administration.
Kaus, Joseph W; Harder, Edward; Lin, Teng; Abel, Robert; McCammon, J Andrew; Wang, Lingle
2015-06-09
Recent advances in improved force fields and sampling methods have made it possible for the accurate calculation of protein–ligand binding free energies. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. 
An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the ligands. This improved the root-mean-square error (RMSE) for the predicted binding free energy from 1.9 kcal/mol with the original partial charges to 1.3 kcal/mol with the corrected partial charges.
PMID:26085821
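The correction for ligands with multiple binding modes combines the per-pose free energies into a single value; under the usual Boltzmann-weighting interpretation of that combination, a sketch with hypothetical pose energies (in kcal/mol) looks like:

```python
import math

KT = 0.596  # kcal/mol at ~300 K

def combined_free_energy(dg_per_pose):
    """Corrected free energy of a ligand with several binding modes:
    a Boltzmann-weighted combination of the individual pose free energies."""
    return -KT * math.log(sum(math.exp(-dg / KT) for dg in dg_per_pose))

def dominant_pose(dg_per_pose):
    """The predicted dominant binding mode is the lowest-free-energy pose."""
    return min(range(len(dg_per_pose)), key=lambda i: dg_per_pose[i])
```

The combined value is always at or below the best single-pose value, reflecting the extra configurational freedom of multiple accessible modes.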
A method to estimate the effect of deformable image registration uncertainties on daily dose mapping
Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin
2012-01-01
Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
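The sampling procedure described (PCA of measured DVF error maps, independent sampling of the decorrelated modes, reconstruction of synthetic error maps) can be sketched as follows; the map count, voxel count, and data are illustrative stand-ins for measured DVF errors:

```python
import numpy as np

rng = np.random.default_rng(0)
maps = rng.standard_normal((20, 500))      # 20 observed error maps, 500 voxels

mean = maps.mean(axis=0)
centered = maps - mean
# Principal component modes of the observed error maps
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
std = S / np.sqrt(maps.shape[0] - 1)       # per-mode standard deviations

def synthetic_map():
    """Sample each PCA mode independently, then reconstruct an error map
    with the same spatial correlation structure as the observations."""
    coeffs = rng.standard_normal(len(std)) * std
    return mean + coeffs @ Vt
```

Because the modes are sampled independently, the synthetic maps preserve the spatial correlations captured by PCA while remaining statistically fresh draws.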
A Current-Mode Common-Mode Feedback Circuit (CMFB) with Rail-to-Rail Operation
NASA Astrophysics Data System (ADS)
Suadet, Apirak; Kasemsuwan, Varakorn
2011-03-01
This paper presents a current-mode common-mode feedback (CMFB) circuit with rail-to-rail operation. The CMFB is a stand-alone circuit, which can be connected to any low voltage transconductor without changing or upsetting the existing circuit. The proposed CMFB employs current mirrors, operating as common-mode detector and current amplifier to enhance the loop gain of the CMFB. The circuit employs positive feedback to enhance the output impedance and gain. The circuit has been designed using a 0.18
ERIC Educational Resources Information Center
Clyde, Jerremie; Wilkinson, Glenn R.
2012-01-01
The gamic mode is an innovative way of authoring scholarly history that goes beyond the printed text or digital simulations by using digital game technologies to allow the reader to interact with a scholarly argument through meaningful choice and trial and error. The gamic mode makes the way in which the past is constructed as history explicit by…
Clutch pressure estimation for a power-split hybrid transmission using nonlinear robust observer
NASA Astrophysics Data System (ADS)
Zhou, Bin; Zhang, Jianwu; Gao, Ji; Yu, Haisheng; Liu, Dong
2018-06-01
For a power-split hybrid transmission, using the brake clutch to realize the transition from electric drive mode to hybrid drive mode is an available strategy. Since pressure information for the brake clutch is essential for mode transition control, this research designs a nonlinear robust reduced-order observer to estimate the brake clutch pressure. Model uncertainties or disturbances are considered as additional inputs, so the observer is designed such that the error dynamics is input-to-state stable. The nonlinear characteristics of the system are expressed as lookup tables in the observer. Moreover, the gain matrix of the observer is solved by two optimization procedures under linear matrix inequality constraints. The proposed observer is validated by offline simulation and online testing; the results show that the observer achieves significant performance during the mode transition, as the estimation error stays within a reasonable range and, more importantly, is asymptotically stable.
Random noise effects in pulse-mode digital multilayer neural networks.
Kim, Y C; Shanblatt, M A
1995-01-01
A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
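The core stochastic-computing primitive the DMNN relies on (values encoded as pulse-occurrence probabilities, with multiplication realized as an AND of independent pulse streams) can be sketched directly; the stream length below is an arbitrary illustrative choice:

```python
import random

def stochastic_multiply(a, b, n=100_000, seed=0):
    """Multiply two values in [0, 1] via stochastic computing: encode each
    as a Bernoulli pulse stream, AND the streams, and read the result back
    as the output pulse-occurrence rate."""
    rng = random.Random(seed)
    sa = [rng.random() < a for _ in range(n)]   # pulse stream encoding a
    sb = [rng.random() < b for _ in range(n)]   # pulse stream encoding b
    # AND-gate output: a pulse occurs with probability a * b
    return sum(x and y for x, y in zip(sa, sb)) / n
```

The estimate's variance shrinks as 1/n, which is why the abstract's statistical model expresses accuracy in terms of mean and variance of the pulse-rate estimates.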
Brokaw, Elizabeth B; Holley, Rahsaan J; Lum, Peter S
2013-09-01
We have developed a novel robotic modality called Time Independent Functional Training (TIFT) that provides focused retraining of interjoint coordination after stroke. TIFT was implemented on the ARMin III exoskeleton and provides joint space walls that resist movement patterns that are inconsistent with the targeted interjoint coordination pattern. In a single test session, ten moderate to severely impaired individuals with chronic stroke practiced synchronous shoulder abduction and elbow extension in TIFT and also in a comparison mode commonly used in robotic therapy called end point tunnel training (EPTT). In EPTT, error is limited by forces applied to the hand that are normal to the targeted end point trajectory. The completion percentage of the movements was comparable between modes, but the coordination patterns used by subjects differed between modes. In TIFT, subjects performed the targeted pattern of synchronous shoulder abduction and elbow extension, while in EPTT, movements were completed with compensatory strategies that incorporated the flexor synergy (shoulder abduction with elbow flexion) or the extensor synergy (shoulder adduction with elbow extension). There were immediate effects on free movements, with TIFT resulting in larger improvements in interjoint coordination than EPTT. TIFT's ability to elicit normal coordination patterns merits further investigation into the effects of longer duration training.
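A TIFT-style joint-space wall can be sketched as a restoring force that leaves motion along the targeted interjoint coordination direction free while resisting deviation from it; the target direction and stiffness below are illustrative assumptions, not values from the study:

```python
import numpy as np

target = np.array([1.0, 1.0]) / np.sqrt(2)  # synchronous shoulder abd. + elbow ext.
K_WALL = 50.0                               # illustrative wall stiffness (N*m/rad)

def wall_torque(q, q0):
    """Torque opposing the joint-space component of movement that violates
    the targeted coordination pattern; on-pattern motion is unresisted."""
    dq = q - q0                             # joint movement since start
    off_axis = dq - (dq @ target) * target  # component off the target direction
    return -K_WALL * off_axis
```

This contrasts with EPTT, where the corrective force acts on the end point and therefore cannot distinguish a compensatory joint strategy from the targeted one.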
Magnetic Fluctuation-Driven Intrinsic Flow in a Toroidal Plasma
NASA Astrophysics Data System (ADS)
Brower, D. L.; Ding, W. X.; Lin, L.; Almagri, A. F.; den Hartog, D. J.; Sarff, J. S.
2012-10-01
Magnetic fluctuations have long been observed in various magnetic confinement configurations. These perturbations may arise naturally from plasma instabilities such as tearing modes and energetic-particle-driven modes, but they can also be externally imposed by error fields or external magnetic coils. It is commonly observed that large MHD modes lead to plasma locking (no rotation) due to the torque produced by eddy currents on the wall, and it is predicted that a stochastic field induces flow damping in which the radial electric field is reduced. Flow generation is of great importance to fusion plasma research, especially for low-torque devices like ITER, as it can act to improve performance. Here we describe new measurements in the MST reversed field pinch (RFP) showing that the coherent interaction of magnetic and particle density fluctuations can produce a turbulent fluctuation-induced kinetic force, which acts to drive intrinsic plasma rotation. Key observations include: (1) the average kinetic force resulting from density fluctuations, ˜0.5 N/m^3, is comparable to the intrinsic flow acceleration, and (2) between sawtooth crashes, the spatial distribution of the kinetic force is directed to create a sheared parallel flow profile that is consistent with the measured flow profile in direction and amplitude, suggesting the kinetic force is responsible for intrinsic plasma rotation.
Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.
Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D
2018-04-07
We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. This requires both accurate delineation of the major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system; therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. These included mis-segmentation of an object of interest or a nearby object, and miscalibration of the relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in a homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within a 10 mm diameter change and 9.1% within a 5 mm shift, respectively. Modest errors in the assumed transducer separation produced the largest SOS error among the miscalibrations (57.3% within a 5 mm shift); still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in specifying B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
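The outsized effect of transducer-separation miscalibration is easy to see in the straight-ray limit, where SOS = distance / time of flight, so a fractional separation error maps one-to-one into a fractional SOS error; a small sketch with an assumed soft-tissue SOS:

```python
def sos_error_pct(true_sep_mm, sep_error_mm):
    """Percent SOS error when the assumed transducer separation is off by
    sep_error_mm, for a straight ray through homogeneous tissue."""
    c_true = 1540.0                         # m/s, typical soft-tissue SOS
    tof = (true_sep_mm / 1000.0) / c_true   # true time of flight
    c_est = ((true_sep_mm + sep_error_mm) / 1000.0) / tof
    return 100.0 * (c_est - c_true) / c_true
```

In the bent-ray case the mapping is more complex, which is why the simulated miscalibration errors above exceed this simple proportional bound.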
A radiation tolerant Data link board for the ATLAS Tile Cal upgrade
NASA Astrophysics Data System (ADS)
Åkerstedt, H.; Bohm, C.; Muschter, S.; Silverstein, S.; Valdes, E.
2016-01-01
This paper describes the latest, full-functionality revision of the high-speed data link board developed for the Phase-2 upgrade of the ATLAS hadronic Tile Calorimeter. The link board design is highly redundant, with digital functionality implemented in two Xilinx Kintex-7 FPGAs and two Molex QSFP+ electro-optic modules whose uplinks run at 10 Gbps. The FPGAs are remotely configured through two radiation-hard CERN GBTx deserialisers, which also provide the LHC-synchronous system clock. The redundant design eliminates virtually all single-point error modes, and a combination of triple-mode redundancy (TMR) and internal and external scrubbing will provide adequate protection against radiation-induced errors. The small portion of the FPGA design that cannot be protected by TMR will be the dominant source of radiation-induced errors, even though that area is small.
A Formal Methods Approach to the Analysis of Mode Confusion
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Miller, Steven P.; Potts, James N.; Carreno, Victor A.
2004-01-01
The goal of the new NASA Aviation Safety Program (AvSP) is to reduce the civil aviation fatal accident rate by 80% in ten years and 90% in twenty years. This program is being driven by the accident data with a focus on the most recent history. Pilot error is the most commonly cited cause for fatal accidents (up to 70%) and obviously must be given major consideration in this program. While the greatest source of pilot error is the loss of situation awareness, mode confusion is increasingly becoming a major contributor as well. The January 30, 1995 issue of Aviation Week lists 184 incidents and accidents involving mode awareness, including the Bangalore A320 crash 2/14/90, the Strasbourg A320 crash 1/20/92, the Mulhouse-Habsheim A320 crash 6/26/88, and the Toulouse A330 crash 6/30/94. These incidents and accidents reveal that pilots sometimes become confused about what the cockpit automation is doing. Consequently, human factors research is an obvious investment area. However, even a cursory look at the accident data reveals that the mode confusion problem is much deeper than just training deficiencies and a lack of human-oriented design. This is readily acknowledged by human factors experts. It seems that further progress in human factors must come through a deeper scrutiny of the internals of the automation. It is in this arena that formal methods can contribute. Formal methods refers to the use of techniques from logic and discrete mathematics in the specification, design, and verification of computer systems, both hardware and software. The fundamental goal of formal methods is to capture requirements, designs and implementations in a mathematically based model that can be analyzed in a rigorous manner. Research in formal methods is aimed at automating this analysis as much as possible. By capturing the internal behavior of a flight deck in a rigorous and detailed formal model, the dark corners of a design can be analyzed.
This paper will explore how formal models and analyses can be used to help eliminate mode confusion from flight deck designs and at the same time increase our confidence in the safety of the implementation. The paper is based upon interim results from a new project involving NASA Langley and Rockwell Collins in applying formal methods to a realistic business jet Flight Guidance System (FGS).
Sliding Mode Control (SMC) of Robot Manipulator via Intelligent Controllers
NASA Astrophysics Data System (ADS)
Kapoor, Neha; Ohri, Jyoti
2017-02-01
In spite of much research, a key technical problem, namely the chattering of the conventional, simple and robust SMC, is still a challenge to researchers and hence limits its practical application. However, newly developed soft computing based techniques can provide a solution. In order to combine the advantages of conventional and heuristic soft computing based control techniques, in this paper various commonly used intelligent techniques, neural networks, fuzzy logic and the adaptive neuro-fuzzy inference system (ANFIS), have been combined with the sliding mode controller (SMC). For validation, the proposed hybrid control schemes have been implemented for tracking a predefined trajectory by a robotic manipulator, incorporating structured and unstructured uncertainties in the system. After reviewing numerous papers, all the commonly occurring uncertainties, continuous disturbance, uniform random white noise, static friction (Coulomb friction), viscous friction, and dynamic friction (Dahl friction and LuGre friction), have been inserted into the system. Various performance indices, such as the norm of the tracking error, chattering in the control input, norm of the input torque, disturbance rejection and chattering rejection, have been used. Comparative results show that, with chattering almost eliminated, the intelligent SMC controllers are more efficient than simple SMC. It has also been observed from the results that the ANFIS based controller has the best tracking performance with a reduced burden on the system. No paper in the literature was found to include all of these structured and unstructured uncertainties together for motion control of a robotic manipulator.
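The chattering problem, and the classical boundary-layer way of softening it, can be illustrated with a toy simulation (the scalar plant, gains and disturbance below are invented for illustration, not taken from the paper):

```python
import math

def simulate(switch, k=5.0, d=1.0, dt=1e-3, steps=2000):
    """Euler simulation of a scalar plant x' = u + d driven toward
    x = 0 by a sliding mode law u = -k * switch(s), with s = x.
    Also accumulates the total variation of u as a chattering metric."""
    x, u_prev, chatter = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = -k * switch(x)
        chatter += abs(u - u_prev)   # large for a rapidly switching input
        u_prev = u
        x += dt * (u + d)            # d is a constant matched disturbance
    return x, chatter

phi = 0.05                                            # boundary-layer width
x_sign, ch_sign = simulate(lambda s: math.copysign(1.0, s))
x_tanh, ch_tanh = simulate(lambda s: math.tanh(s / phi))
# Both laws regulate x near zero; the smoothed law chatters far less.
```

The soft computing hybrids in the paper go further than this fixed smoothing by adapting the switching action online, but the metric being improved is the same.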
Kartush, J M
1996-11-01
Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.
Common Cause Failures and Ultra Reliability
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2012-01-01
A common cause failure occurs when several failures have the same origin. Common cause failures are either common event failures, where the cause is a single external event, or common mode failures, where two systems fail in the same way for the same reason. Common mode failures can occur at different times because of a design defect or a repeated external event. Common event failures reduce the reliability of on-line redundant systems but not of systems using off-line spare parts. Common mode failures reduce the dependability of systems using off-line spare parts and on-line redundancy.
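One standard way to quantify the redundancy-defeating effect described above is the beta-factor model, in which a fraction of each unit's failure probability is attributed to a common cause that takes out both units at once. This sketch is illustrative, not from the source:

```python
def redundant_pair_reliability(r: float, beta: float) -> float:
    """Reliability of two on-line redundant units under the beta-factor
    model: a fraction `beta` of each unit's failure probability is a
    common cause failing both units together.

    The independent part is survived if either unit survives; the
    common cause must not occur at all.
    """
    q = 1.0 - r                 # total failure probability of one unit
    q_cc = beta * q             # common cause share
    q_ind = (1.0 - beta) * q    # independent share
    return (1.0 - q_ind ** 2) * (1.0 - q_cc)

# With no common cause the pair is far better than a single unit;
# even a small beta caps the achievable redundancy gain.
r_single = 0.99
ideal = redundant_pair_reliability(r_single, 0.0)    # ≈ 0.9999
with_cc = redundant_pair_reliability(r_single, 0.1)  # ≈ 0.9989
```

The gap between `ideal` and `with_cc` is why ultra-reliable designs must attack common causes directly rather than simply adding more redundancy.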
SU-E-T-635: Process Mapping of Eye Plaque Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huynh, J; Kim, Y
Purpose: To apply a risk-based assessment and analysis technique (AAPM TG-100) to eye plaque brachytherapy treatment of ocular melanoma. Methods: The roles and responsibilities of the personnel involved in eye plaque brachytherapy were defined for the retinal specialist, radiation oncologist, nurse, and medical physicist. The entire procedure was examined carefully: major processes were identified first, and then details for each major process were followed. Results: Seventy-one total potential modes were identified across eight major processes (with the number of detailed modes in parentheses): patient consultation (2 modes), pretreatment tumor localization (11), treatment planning (13), seed ordering and calibration (10), eye plaque assembly (10), implantation (11), removal (11), and deconstruction (3). Half of the total modes (36) involve the physicist, although the physicist is not involved in steps such as the actual procedure of suturing and removing the plaque. Conclusion: Failure modes can arise not only from physicist-related procedures such as treatment planning and source activity calibration, but also from more clinical procedures performed by other medical staff. Improving the accuracy of communication in non-physicist-related clinical procedures could potentially prevent human errors, and more rigorous physics double checks would reduce errors in physicist-related procedures. Eventually, based on this detailed process map, failure mode and effects analysis (FMEA) will identify the top tiers of modes by ranking all possible modes with a risk priority number (RPN). For those high-risk modes, fault tree analysis (FTA) will provide possible preventive action plans.
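The RPN ranking that the abstract anticipates for the follow-on FMEA can be sketched as follows (the scores and failure-mode names are hypothetical examples, not the study's data):

```python
def rpn(severity: int, occurrence: int, detectability: int) -> int:
    """Risk priority number as used in TG-100-style FMEA: each factor
    is scored on a 1-10 scale and the three scores are multiplied."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are conventionally 1-10")
    return severity * occurrence * detectability

# Rank hypothetical failure modes and surface the top tier for FTA.
modes = {
    "wrong seed strength entered": rpn(9, 3, 6),
    "plaque assembled with seed gap": rpn(7, 2, 4),
    "patient chart mix-up": rpn(10, 1, 8),
}
top = sorted(modes.items(), key=lambda kv: kv[1], reverse=True)
```

In practice each team member scores independently and the scores are averaged before ranking, as the radiotherapy FMEA studies in this collection describe.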
Daverio, Marco; Fino, Giuliana; Luca, Brugnaro; Zaggia, Cristina; Pettenazzo, Andrea; Parpaiola, Antonella; Lago, Paola; Amigoni, Angela
2015-12-01
Errors are estimated to occur with an incidence of 3.7-16.6% in hospitalized patients. The application of systems for the detection of adverse events is becoming a widespread reality in healthcare. Incident reporting (IR) and failure mode and effects analysis (FMEA) are strategies widely used to detect errors, but no studies have combined them in the setting of a pediatric intensive care unit (PICU). The aim of our study was to describe the trend of IR in a PICU and evaluate the effect of FMEA application on the number and severity of the errors detected. With this prospective observational study, we evaluated the frequency of IRs documented on standard IR forms completed from January 2009 to December 2012 in the PICU of the Woman's and Child's Health Department of Padova. On the basis of their severity, errors were classified as: without outcome (55%), with minor outcome (16%), with moderate outcome (10%), and with major outcome (3%); 16% of reported incidents were 'near misses'. We compared the data before and after the introduction of FMEA. Sixty-nine errors were registered, 59 (86%) concerning drug therapy (83% during prescription). Compared to 2009-2010, in 2011-2012 we noted an increase in reported errors (43 vs 26) with a reduction in their severity (21% vs 8% 'near misses' and 65% vs 38% errors with no outcome). With the introduction of FMEA, we obtained increased awareness in error reporting. Application of these systems will improve the quality of healthcare services. © 2015 John Wiley & Sons Ltd.
Syntactic and semantic errors in radiology reports associated with speech recognition software.
Ringler, Michael D; Goss, Brian C; Bartholmai, Brian J
2017-03-01
Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 different radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialty, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3992 (1.9%) were material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). Proportion of errors and fraction of material errors varied significantly among radiologists and between imaging subspecialties (p < .001). Errors were more common in cross-sectional reports, reports reinterpreting results of outside examinations, and procedural studies (all p < .001). Error rate decreased over time (p < .001), which suggests that a quality control program with regular feedback may reduce errors.
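Comparisons of error proportions between groups or periods, like those reported above, are commonly made with a two-proportion z-test; a minimal sketch with illustrative counts (not the study's data):

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic for H0: the two error proportions are equal,
    using the pooled-variance form of the two-proportion test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Did the report error rate drop between two periods? (toy counts)
z = two_proportion_z(1200, 10000, 950, 10000)
# |z| > 1.96 rejects equal rates at the 5% level (two-sided)
```

For tables larger than 2x2, such as rates across many radiologists or subspecialties, the analogous check is a chi-square test of homogeneity.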
Impact of toroidal and poloidal mode spectra on the control of non-axisymmetric fields in tokamaks
NASA Astrophysics Data System (ADS)
Lanctot, Matthew J.
2016-10-01
In several tokamaks, non-axisymmetric magnetic field studies show applied n=2 fields can lead to disruptive n=1 locked modes, suggesting nonlinear mode coupling. A multimode plasma response to n=2 fields can be observed in H-mode plasmas, in contrast to the single-mode response found in Ohmic plasmas. These effects highlight a role for n > 1 error field correction in disruption avoidance, and identify additional degrees of freedom for 3D field optimization at high plasma pressure. In COMPASS, EAST, and DIII-D Ohmic plasmas, n=2 magnetic reconnection thresholds in otherwise stable discharges are readily accessed at edge safety factors q ≈ 3 and low density. Similar to previous studies, the thresholds are correlated with the ``overlap'' field for the dominant linear ideal MHD plasma mode calculated with the IPEC code. The overlap field measures the plasma-mediated coupling of the external field to the resonant field. Remarkably, the critical overlap fields are similar for n=1 and 2 fields, with m > nq fields dominating the drive for resonant fields. Complementary experiments in RFX-Mod show fields with m
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, J; Wang, J; P, J
2016-06-15
Purpose: To optimize the clinical processes of radiotherapy and to reduce radiotherapy risks by implementing the powerful risk management tools of failure mode and effects analysis (FMEA) and PDCA (plan-do-check-act). Methods: A multidisciplinary QA (quality assurance) team from our department, consisting of oncologists, physicists, dosimetrists, therapists and an administrator, was established, and an entire-workflow QA process management using the FMEA and PDCA tools was implemented for the whole treatment process. After the primary process tree was created, the failure modes and risk priority numbers (RPNs) were determined by each member, and the RPNs were then averaged after team discussion. Results: In the first PDCA cycle, 3 of 9 failure modes with RPNs above 100 were identified in practice and further analyzed: patient registration error, prescription error, and treating the wrong patient. New process controls reduced the occurrence or detectability scores of these top three failure modes. Two important corrective actions reduced the highest RPNs from 300 to 50, and the error rate of radiotherapy decreased remarkably. Conclusion: FMEA and PDCA are helpful in identifying potential problems in the radiotherapy process and were proven to improve the safety, quality and efficiency of radiation therapy in our department. The implementation of the FMEA approach may improve understanding of the overall process of radiotherapy while identifying potential flaws in the whole process. Furthermore, repeating the PDCA cycle brings us closer to the goal: safer and more accurate radiotherapy.
AfterQC: automatic filtering, trimming, error removing and quality control for fastq data.
Chen, Shifu; Huang, Tanxiao; Zhou, Yanqing; Han, Yue; Xu, Mingyan; Gu, Jia
2017-03-14
Some applications, especially clinical applications requiring high accuracy of sequencing data, must contend with the troubles caused by unavoidable sequencing errors. Several tools have been proposed to profile sequencing quality, but few of them can quantify or correct sequencing errors. This unmet requirement motivated us to develop AfterQC, a tool with functions to profile sequencing errors and correct most of them, plus highly automated quality control and data filtering features. Different from most tools, AfterQC analyses the overlapping of paired sequences for pair-end sequencing data. Based on overlapping analysis, AfterQC can detect and cut adapters, and furthermore it provides a novel function to correct wrong bases in the overlapping regions. Another new feature is to detect and visualise sequencing bubbles, which are commonly found on the flowcell lanes and may cause sequencing errors. Besides normal per-cycle quality and base content plotting, AfterQC also provides features like polyX (a long sub-sequence of the same base X) filtering, automatic trimming and k-mer-based strand bias profiling. For each single or pair of FastQ files, AfterQC filters out bad reads, detects and eliminates the sequencer's bubble effects, trims reads at front and tail, detects sequencing errors and corrects part of them, and finally outputs clean data and generates HTML reports with interactive figures. AfterQC can run in batch mode with multiprocess support: it can run with a single FastQ file, a single pair of FastQ files (for pair-end sequencing), or a folder of FastQ files to be processed automatically. Based on overlapping analysis, AfterQC can estimate the sequencing error rate and profile the error transform distribution. The results of our error profiling tests show that the error distribution is highly platform dependent.
Much more than just another new quality control (QC) tool, AfterQC is able to perform quality control, data filtering, error profiling and base correction automatically. Experimental results show that AfterQC can help to eliminate the sequencing errors for pair-end sequencing data to provide much cleaner outputs, and consequently help to reduce the false-positive variants, especially for the low-frequency somatic mutations. While providing rich configurable options, AfterQC can detect and set all the options automatically and require no argument in most cases.
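A minimal sketch of the overlap-correction idea, assuming the mate has already been reverse-complemented into the same orientation and qualities are Phred integers (this simplifies AfterQC's actual algorithm considerably):

```python
def find_overlap(r1: str, r2: str, min_len: int = 8, max_mm: int = 2) -> int:
    """Longest suffix of r1 aligning to a prefix of r2 with at most
    `max_mm` mismatches; returns 0 if no acceptable overlap exists."""
    for length in range(min(len(r1), len(r2)), min_len - 1, -1):
        mm = sum(a != b for a, b in zip(r1[-length:], r2[:length]))
        if mm <= max_mm:
            return length
    return 0

def correct_overlap(r1, q1, r2, q2, length):
    """Within the overlap, replace each disagreeing base with the
    higher-quality call from the other read."""
    o1, o2 = list(r1[-length:]), list(r2[:length])
    for i, (a, b) in enumerate(zip(o1, o2)):
        if a != b:
            if q1[len(r1) - length + i] >= q2[i]:
                o2[i] = a
            else:
                o1[i] = b
    return r1[:-length] + "".join(o1), "".join(o2) + r2[length:]

# Toy pair: one sequencing error in the overlap is corrected toward
# the higher-quality base.
L = find_overlap("AAAACGTT", "CGATGGGG", min_len=4)
r1_fixed, r2_fixed = correct_overlap("AAAACGTT", [30] * 8, "CGATGGGG", [10] * 8, L)
```

The same disagreement census, divided by the number of overlapping bases, gives the per-platform error-rate estimate the abstract mentions.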
GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA
In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatistics provides an alternate model in which several of Gy's error components are combined in a continuous mode...
High-Frame-Rate Speckle-Tracking Echocardiography.
Joos, Philippe; Poree, Jonathan; Liebgott, Herve; Vray, Didier; Baudet, Mathilde; Faurie, Julia; Tournoux, Francois; Cloutier, Guy; Nicolas, Barbara; Garcia, Damien
2018-05-01
Conventional echocardiography is the leading modality for noninvasive cardiac imaging. It has been recently illustrated that high-frame-rate echocardiography using diverging waves could improve cardiac assessment. The spatial resolution and contrast associated with this method are commonly improved by coherent compounding of steered beams. However, owing to fast tissue velocities in the myocardium, the summation process of successive diverging waves can lead to destructive interferences if motion compensation (MoCo) is not considered. Coherent compounding methods based on MoCo have demonstrated their potential to provide high-contrast B-mode cardiac images. Ultrafast speckle-tracking echocardiography (STE) based on common speckle-tracking algorithms could substantially benefit from this original approach. In this paper, we applied STE to high-frame-rate B-mode images obtained with a specific MoCo technique to quantify the 2-D motion and tissue velocities of the left ventricle. The method was first validated in vitro and then evaluated in vivo in the four-chamber view of 10 volunteers. High-contrast high-resolution B-mode images were constructed at 500 frames/s. The sequences were generated with a Verasonics scanner and a 2.5-MHz phased array. The 2-D motion was estimated with standard cross correlation combined with three different subpixel adjustment techniques. The estimated in vitro velocity vectors derived from STE were consistent with the expected values, with normalized errors ranging from 4% to 12% in the radial direction and from 10% to 20% in the cross-range direction. Global longitudinal strain of the left ventricle was also obtained from STE in 10 subjects and compared to the results provided by a clinical scanner: group means were not statistically different (p = 0.33). The in vitro and in vivo results showed that MoCo enables preservation of the myocardial speckles and in turn allows high-frame-rate STE.
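One common subpixel adjustment technique of the kind evaluated here is a three-point parabolic fit around the integer cross-correlation peak; a 1-D sketch with synthetic data (the signals and shift below are illustrative, not the paper's phantom):

```python
import math

def xcorr_peak_subpixel(ref, cur, max_lag):
    """Shift of `cur` relative to `ref`: maximize the cross-correlation
    over integer lags, then refine with a three-point parabolic
    interpolation of the correlation peak."""
    def score(lag):
        return sum(r * cur[i + lag]
                   for i, r in enumerate(ref)
                   if 0 <= i + lag < len(cur))

    scores = {lag: score(lag) for lag in range(-max_lag, max_lag + 1)}
    k = max(scores, key=scores.get)
    if -max_lag < k < max_lag:
        ym, y0, yp = scores[k - 1], scores[k], scores[k + 1]
        denom = ym - 2 * y0 + yp
        if denom != 0:                       # parabola vertex offset
            return k + 0.5 * (ym - yp) / denom
    return float(k)

# Synthetic "speckle" line shifted by 3.4 samples between two frames.
ref = [math.exp(-((i - 10.0) / 3.0) ** 2) for i in range(32)]
cur = [math.exp(-((i - 13.4) / 3.0) ** 2) for i in range(32)]
lag = xcorr_peak_subpixel(ref, cur, max_lag=8)
```

In 2-D speckle tracking the same fit is applied independently along each axis of the correlation surface.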
ERIC Educational Resources Information Center
El-khateeb, Mahmoud M. A.
2016-01-01
This study aims to investigate the classes of errors made by the preparatory-year students at King Saud University, through analysis of student responses to the items of the study test, and to identify the varieties of common errors and the ratios of common errors that occurred in solving inequalities. In the collection of the data,…
Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S
2017-09-01
Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, which comprised the third-highest category of events behind cardiac and respiratory related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) reported medication errors involved medications prepared as infusions as opposed to one-time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient and more than half of these events caused patient harm. Fifteen events (5%) required a life sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable.
Our findings characterize the most common types of medication errors in pediatric anesthesia practice and provide guidance on future preventative strategies. Many of these errors will be almost entirely preventable with the use of prefilled medication syringes to avoid accidental ampule swap, bar-coding at the point of medication administration to prevent syringe swap and to confirm the proper dose, and 2-person checking of medication infusions for accuracy.
Optimization of removal function in computer controlled optical surfacing
NASA Astrophysics Data System (ADS)
Chen, Xi; Guo, Peiji; Ren, Jianfeng
2010-10-01
The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimization method, time-sharing synthesis of the removal function, is proposed to solve two problems encountered in the planet-motion or translation-rotation modes: a removal function far from Gaussian type and slow convergence of the removal function error. A detailed time-sharing synthesis using six removal functions is discussed. For a given region on the workpiece, six positions are selected as the centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each center to complete a cycle in proper order. The overall removal function obtained by the time-sharing process is the ratio of the total material removal in the six cycles to the time duration of the six cycles, which depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under two different modes of motion, i.e., planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and distribution of the time-sharing synthesis removal functions are obtained. The evaluation function for the optimization is determined by an approaching factor, defined as the ratio of the material removal within the area of half of the polishing tool coverage from the polishing center to the total material removal within the full polishing tool coverage area. After optimization, it is found that the optimized removal function obtained by time-sharing synthesis is closer to the ideal Gaussian type removal function than those obtained by the traditional methods.
The time-sharing synthesis method of the removal function provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces, and to reduce the intermediate- and high-frequency error.
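The overall removal function of time-sharing synthesis, total removal over all cycles divided by total dwell time, can be sketched as follows (the Gaussian-like tool profile and the ring of six centers are illustrative assumptions, not the paper's optimized layout):

```python
import math

def removal(r, p=2.0, w=1.0):
    """Single-cycle removal rate vs distance r from the tool center
    (an illustrative Gaussian-like profile, not a measured tool)."""
    return math.exp(-(r / w) ** p)

def synthesized(x, y, centers, cycle_time=1.0):
    """Overall removal rate of time-sharing synthesis: total material
    removed over all cycles divided by the total dwell time."""
    total = sum(removal(math.hypot(x - cx, y - cy)) * cycle_time
                for cx, cy in centers)
    return total / (cycle_time * len(centers))

# Six centers on a small ring around the target point, echoing the
# six-position scheme (ring radius chosen arbitrarily here).
centers = [(0.5 * math.cos(2 * math.pi * k / 6),
            0.5 * math.sin(2 * math.pi * k / 6)) for k in range(6)]
peak = synthesized(0.0, 0.0, centers)
```

Averaging the six shifted profiles broadens the effective footprint, which is what lets the optimizer shape the overall function toward the ideal Gaussian.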
Design and Verification of a Digital Controller for a 2-Piece Hemispherical Resonator Gyroscope
Lee, Jungshin; Yun, Sung Wook; Rhim, Jaewook
2016-01-01
A Hemispherical Resonator Gyro (HRG) is a Coriolis Vibratory Gyro (CVG) that measures rotation angle or angular velocity using the Coriolis force acting on the vibrating mass. An HRG can be used as a rate gyro or an integrating gyro without structural modification by simply changing the control scheme. In this paper, differential control algorithms are designed for a 2-piece HRG. To design a precision controller, the electromechanical modelling and signal processing must first be performed accurately. Therefore, the equations of motion for the HRG resonator with switched harmonic excitations are derived with the Duhamel integral method. Electromechanical modeling of the resonator, electric module and charge amplifier is performed by considering the mode shape of a thin hemispherical shell. Further, signal processing and control algorithms are designed. The multi-flexing scheme of sensing and driving cycles and x-, y-axis switching cycles is appropriate for high-precision, low-maneuverability systems. On the basis of these studies, the differential control scheme is easily capable of rejecting the common mode errors of the x-, y-axis signals and of switching to the rate-integrating mode. In the rate gyro mode the controller is composed of phase-locked loop (PLL), amplitude, quadrature and rate control loops. All controllers are designed on the basis of a digital PI controller. The signal processing and control algorithms are verified through Matlab/Simulink simulations. Finally, an FPGA and DSP board with these algorithms is verified through experiments. PMID:27104539
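A minimal discrete PI controller of the kind the loops above are built on might look like this (the gains, sample time and first-order plant are illustrative assumptions, not the paper's design):

```python
class DigitalPI:
    """Discrete-time PI controller with output limits and a crude
    clamping anti-windup on the integral state."""

    def __init__(self, kp, ki, ts, u_min=-1e9, u_max=1e9):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max = u_min, u_max
        self.acc = 0.0                     # integral state

    def update(self, error):
        self.acc += self.ki * self.ts * error
        # keep the integrator within the output limits
        self.acc = min(max(self.acc, self.u_min), self.u_max)
        return min(max(self.kp * error + self.acc, self.u_min), self.u_max)

# Regulate a toy first-order amplitude-like plant a' = -a + u
# to a setpoint, as an amplitude control loop would.
pi_ctrl, a, setpoint, dt = DigitalPI(2.0, 20.0, 1e-3), 0.0, 1.0, 1e-3
for _ in range(5000):
    a += dt * (-a + pi_ctrl.update(setpoint - a))
```

The integral term is what removes steady-state error; in a real gyro loop the same structure acts on demodulated amplitude, quadrature or phase signals.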
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbee, D; McCarthy, A; Galavis, P
Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruptions. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The number of potential failure modes the PlanCheck script is currently capable of checking for is as follows: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in a reduction in error rates and improved efficiency.
Future work includes: initiating a full FMEA for the planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.
Probabilistic segmentation and intensity estimation for microarray images.
Gottardo, Raphael; Besag, Julian; Stephens, Matthew; Murua, Alejandro
2006-01-01
We describe a probabilistic approach to simultaneous image segmentation and intensity estimation for complementary DNA microarray experiments. The approach overcomes several limitations of existing methods. In particular, it (a) uses a flexible Markov random field approach to segmentation that allows for a wider range of spot shapes than existing methods, including relatively common 'doughnut-shaped' spots; (b) models the image directly as background plus hybridization intensity, and estimates the two quantities simultaneously, avoiding the common logical error that estimates of foreground may be less than those of the corresponding background if the two are estimated separately; and (c) uses a probabilistic modeling approach to simultaneously perform segmentation and intensity estimation, and to compute spot quality measures. We describe two approaches to parameter estimation: a fast algorithm, based on the expectation-maximization and the iterated conditional modes algorithms, and a fully Bayesian framework. These approaches produce comparable results, and both appear to offer some advantages over other methods. We use an HIV experiment to compare our approach to two commercial software products: Spot and Arrayvision.
Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie
2014-01-01
This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).
Mind the Mode: Differences in Paper vs. Web-Based Survey Modes Among Women With Cancer.
Hagan, Teresa L; Belcher, Sarah M; Donovan, Heidi S
2017-09-01
Researchers administering surveys seek to balance data quality, sources of error, and practical concerns when selecting an administration mode. Rarely are decisions about survey administration based on the background of study participants, although socio-demographic characteristics like age, education, and race may contribute to participants' (non)responses. In this study, we describe differences in paper- and web-based surveys administered in a national cancer survivor study of women with a history of cancer to compare the ability of each survey administrative mode to provide quality, generalizable data. We compared paper- and web-based survey data by socio-demographic characteristics of respondents, missing data rates, scores on primary outcome measure, and administrative costs and time using descriptive statistics, tests of mean group differences, and linear regression. Our findings indicate that more potentially vulnerable patients preferred paper questionnaires and that data quality, responses, and costs significantly varied by mode and participants' demographic information. We provide targeted suggestions for researchers conducting survey research to reduce survey error and increase generalizability of study results to the patient population of interest. Researchers must carefully weigh the pros and cons of survey administration modes to ensure a representative sample and high-quality data. Copyright © 2017 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Active Control of Fan-Generated Tone Noise
NASA Technical Reports Server (NTRS)
Gerhold, Carl H.
1995-01-01
This paper reports on an experiment to control the noise radiated from the inlet of a ducted fan using a time domain active adaptive system. The control sound source consists of loudspeakers arranged in a ring around the fan duct; the error sensor is located in the fan duct. The purpose of this experiment is to demonstrate that the in-duct error sensor reduces mode spillover in the far field, thereby increasing the efficiency of the control system. The control system is found to reduce the blade passage frequency tone significantly in the acoustic far field when the mode orders of the noise source and of the control source are the same and the dominant wave in the duct is a plane wave. The presence of higher order modes in the duct reduces the noise reduction efficiency, particularly near the mode cut-on where the standing wave component is strong, but the control system converges stably. The control system is also stable and converges when the first circumferential mode is generated in the duct. The control system is found to reduce the fan noise in the far field on an arc around the fan inlet by as much as 20 dB with none of the sound amplification associated with mode spillover.
Application of failure mode and effect analysis in an assisted reproduction technology laboratory.
Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola
2016-08-01
Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis led to the identification of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. The critical failure modes in sample processing were improved by 50% in the RPN by focusing on staff training. Three indicators of FMEA success, based on technical skill, competence and traceability, have been evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
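The RPN scoring used in the analysis is the product of three ordinal scores. A minimal sketch follows; the failure mode and individual scores are illustrative assumptions, and only the reported reduction from RPN 25 to 5 comes from the abstract:

```python
# Minimal sketch of FMEA risk scoring: the risk priority number (RPN) is
# the product of severity, occurrence, and detectability scores. The
# individual scores below are illustrative assumptions; only the reported
# reduction from RPN 25 to 5 comes from the study.

def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

# Hypothetical "patient mismatch" scoring before and after introducing
# witnessing by a second operator (occurrence assumed to drop from 5 to 1).
print(rpn(severity=5, occurrence=5, detectability=1))  # 25
print(rpn(severity=5, occurrence=1, detectability=1))  # 5
```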
Novel spot size converter for coupling standard single mode fibers to SOI waveguides
NASA Astrophysics Data System (ADS)
Sisto, Marco Michele; Fisette, Bruno; Paultre, Jacques-Edmond; Paquet, Alex; Desroches, Yan
2016-03-01
We have designed and numerically simulated a novel spot size converter for coupling standard single mode fibers with 10.4μm mode field diameter to 500nm × 220nm SOI waveguides. Simulations based on the eigenmode expansion method show a coupling loss of 0.4dB at 1550nm for the TE mode at perfect alignment. The alignment tolerance on the plane normal to the fiber axis is evaluated at +/-2.2μm for <=1dB excess loss, which is comparable to the alignment tolerance between two butt-coupled standard single mode fibers. The converter is based on a cross-like arrangement of SiOxNy waveguides immersed in a 12μm-thick SiO2 cladding region deposited on top of the SOI chip. The waveguides are designed to collectively support a single degenerate mode for TE and TM polarizations. This guided mode features a large overlap with the LP01 mode of standard telecom fibers. Along the spot size converter length (450μm), the mode is first gradually confined in a single SiOxNy waveguide by tapering its width. Then, the mode is adiabatically coupled to a SOI waveguide underneath the structure through a SOI inverted taper. The shapes of the SiOxNy and SOI tapers are optimized to minimize coupling loss and structure length, and to ensure adiabatic mode evolution along the structure, thus improving the design robustness to fabrication process errors. A tolerance analysis based on conservative microfabrication capabilities suggests that the coupling loss penalty from fabrication errors can be maintained below 0.3dB. The proposed spot size converter is fully compliant with industry standard microfabrication processes available at INO.
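Why matching the converter's mode to the fiber's 10.4μm mode field diameter matters can be illustrated with the textbook overlap-integral result for two aligned Gaussian modes. This is not the paper's eigenmode-expansion simulation, just a hedged back-of-the-envelope sketch:

```python
import math

# Power coupling efficiency between two aligned Gaussian modes of mode
# field diameters MFD1 and MFD2 (standard overlap-integral result).
# Illustrative only; the paper's converter is simulated with the
# eigenmode expansion method, not this formula.

def gaussian_coupling_loss_db(mfd1_um, mfd2_um):
    w1, w2 = mfd1_um / 2.0, mfd2_um / 2.0            # waist radius = MFD / 2
    eta = (2.0 * w1 * w2 / (w1 ** 2 + w2 ** 2)) ** 2  # overlap efficiency
    return -10.0 * math.log10(eta)

loss_matched = gaussian_coupling_loss_db(10.4, 10.4)  # 0 dB: perfect match
loss_mismatch = gaussian_coupling_loss_db(10.4, 8.0)  # ~0.3 dB penalty
print(loss_matched, loss_mismatch)
```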
Sensitivity of STIS First-Order Medium Resolution Modes
NASA Astrophysics Data System (ADS)
Proffitt, Charles R.
2006-07-01
The sensitivities for STIS first-order medium resolution modes were redetermined using on-orbit observations of the standard DA white dwarfs G 191-B2B, GD 71, and GD 153. We review the procedures and assumptions used to derive the adopted throughputs, and discuss the remaining errors and uncertainties.
On the application of frequency selective common mode feedback for multifrequency EIT.
Langlois, Peter J; Wu, Yu; Bayford, Richard H; Demosthenous, Andreas
2015-06-01
Common mode voltages are frequently a problem in electrical impedance tomography (EIT) and other bioimpedance applications. To reduce their amplitude, common mode feedback is employed. Formalised analyses of both current and voltage feedback are presented in this paper for current drives. Common mode effects due to imbalances caused by the current drives, the electrode connections to the body load, and the introduction of the body impedance to ground are considered. Frequency selective narrowband common mode feedback, previously proposed to provide feedback stability, is examined. As a step towards multifrequency applications, the use of narrowband feedback is experimentally demonstrated for two simultaneous current drives. Measured results using standard available components show a reduction of 62 dB for current feedback and 31 dB for voltage feedback. Frequencies ranged from 50 kHz to 1 MHz.
Khodaei, Kazem; Mohammadi, Abbas; Badri, Neda
2017-10-01
The purpose of this study was to compare the effects of assisted, resisted and common plyometric training modes on sprint and agility performance. Thirty active young males (age 20.67±1.12, height 174.83±4.69, weight 63.45±7.51) volunteered to participate in this study, of whom 24 completed testing. The participants were randomly assigned to assisted, resisted and common plyometric exercise groups. Plyometric training involved three sessions per week for 4 weeks, with the volume load equated between the groups. The posttest was performed 48 hours after the last training session. Between-group differences were analyzed with ANCOVA and LSD post-hoc tests, and within-group differences were analyzed by a paired t-test. The findings of the present study indicated that the 0-10-m and 20-30-m sprint times and the Illinois Agility Test time decreased significantly in the assisted and resisted plyometric modes compared to the common plyometric training mode (P≤0.05). Also, the 0-10-m and 0-30-m sprint times and the agility T-test time were significantly reduced in the resisted plyometric mode compared to the assisted and common plyometric modes (P≤0.05). There was no significant difference in the 10-20-m sprint time among the three plyometric training modes. These results demonstrate that assisted and resisted plyometric training with elastic bands is more effective than common plyometric training for improving sprint and agility performance in active males, and that the resisted mode is superior to the assisted mode for improving sprint and agility tasks.
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.; Remington, Paul J.; Walker, Bruce E.
2003-01-01
A test program to demonstrate simplification of Active Noise Control (ANC) systems relative to standard techniques was performed on the NASA Glenn Active Noise Control Fan from May through September 2001. The target mode was the m = 2 circumferential mode generated by the rotor-stator interaction at 2BPF. Seven radials (combined inlet and exhaust) were present at this condition. Several error-sensing strategies were implemented, and their integration with passive treatment was investigated: (i) an in-duct linear axial array, (ii) an in-duct steering array, (iii) a pylon-mounted array, and (iv) a near-field boom array. The effects of incorporating passive treatment and of reducing the actuator count were investigated. These simplified systems were compared to a fully specified ANC system. Modal data acquired using the Rotating Rake are presented for a range of corrected fan rpm. Simplified control has been demonstrated to be possible but requires a well-known and dominant mode signature. The results documented herein are part III of a three-part series of reports with the same base title. Parts I and II document the control system and error-sensing design and implementation.
Simulations of a PSD Plastic Neutron Collar for Assaying Fresh Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hausladen, Paul; Newby, Jason; McElroy, Robert Dennis
The potential performance of a notional active coincidence collar for assaying uranium fuel based on segmented detectors constructed from the new PSD plastic fast organic scintillator with pulse shape discrimination capability was investigated in simulation. Like the International Atomic Energy Agency's present Uranium Neutron Collar for LEU (UNCL), the PSD plastic collar would also function by stimulating fission in the 235U content of the fuel with a moderated 241Am/Li neutron source and detecting instances of induced fission via neutron coincidence counting. In contrast to the moderated detectors of the UNCL, the fast time scale of detection in the scintillator eliminates statistical errors due to accidental coincidences that limit the performance of the UNCL. However, the potential to detect a single neutron multiple times historically has been one of the properties of organic scintillator detectors that has prevented their adoption for international safeguards applications. Consequently, as part of the analysis of simulated data, a method was developed by which true neutron-neutron coincidences can be distinguished from inter-detector scatter, taking advantage of the position and timing resolution of segmented detectors. Then, the performance of the notional simulated coincidence collar was evaluated for assaying a variety of fresh fuels, including some containing burnable poisons and partial defects. In these simulations, particular attention was paid to the analysis of fast mode measurements. In fast mode, a Cd liner is placed inside the collar to shield the fuel from the interrogating source and detector moderators, thereby eliminating the thermal neutron flux that is most sensitive to the presence of burnable poisons that are ubiquitous in modern nuclear fuels. The simulations indicate that the predicted precision of fast mode measurements is similar to what can be achieved by the present UNCL in thermal mode.
For example, the statistical accuracy of a ten-minute measurement of fission coincidences collected in fast mode will be approximately 1% for most fuels of interest, yielding a ~1.4% error after subtraction of a five-minute measurement of the spontaneous fissions from 238U in the fuel, a ~2% error in analyzed linear density after accounting for the slope of the calibration curve, and a ~2.9% total error after addition of an assumed systematic error of 2%.
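The error budget above combines independent contributions in quadrature (root-sum-square), the standard assumption for independent error sources. A short check of the quoted numbers:

```python
import math

# Quadrature (root-sum-square) combination of independent errors, as in
# the error budget quoted above. Individual percentages are from the text.

def rss(*errors):
    return math.sqrt(sum(e * e for e in errors))

# ~1% statistics for the ten-minute active run combined with ~1% for the
# five-minute spontaneous-fission subtraction gives the quoted ~1.4%:
print(round(rss(1.0, 1.0), 1))  # 1.4

# ~2% in analyzed linear density combined with the assumed 2% systematic
# error gives a total close to the quoted ~2.9%:
print(round(rss(2.0, 2.0), 1))  # 2.8
```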
Integrated five-port non-blocking optical router based on mode-selective property
NASA Astrophysics Data System (ADS)
Jia, Hao; Zhou, Ting; Fu, Xin; Ding, Jianfeng; Zhang, Lei; Yang, Lin
2018-05-01
In this paper, we propose and demonstrate a five-port optical router based on mode-selective property. It utilizes different combinations of four spatial modes at input and output ports as labels to distinguish its 20 routing paths. It can direct signals from the source port to the destination port intelligently, without power consumption and additional switching time, to realize various path steering. The proposed architecture is constructed by asymmetric directional coupler based mode-multiplexers/de-multiplexers, multimode interference based waveguide crossings and single-mode interconnect waveguides. The broad optical bandwidths of these constituents make the device suitable for combining with wavelength division multiplexing signal transmission, which can effectively increase the data throughput. Measurement results show that the insertion losses of its 20 routing paths are lower than 8.5 dB and the optical signal-to-noise ratios are larger than 16.3 dB at 1525-1565 nm. To characterize its routing functionality, a 40-Gbps data transmission with bit-error-rate (BER) measurement is implemented. The power penalties for error-free switching (BER<10-9) are 1.0 dB and 0.8 dB at 1545 nm and 1565 nm, respectively.
On decentralized adaptive full-order sliding mode control of multiple UAVs.
Xiang, Xianbo; Liu, Chao; Su, Housheng; Zhang, Qin
2017-11-01
In this study, a novel decentralized adaptive full-order sliding mode control framework is proposed for the robust synchronized formation motion of multiple unmanned aerial vehicles (UAVs) subject to system uncertainty. First, a full-order sliding mode surface is designed in a decentralized manner to incorporate both the individual position tracking error and the synchronized formation error while the UAV group is engaged in building a desired geometric pattern in three-dimensional space. Second, a decentralized virtual plant controller is constructed which allows the embedded low-pass filter to attain the chattering free property of the sliding mode controller. In addition, a robust adaptive technique is integrated in the decentralized chattering free sliding control design to handle unknown bounded uncertainties, without requiring a priori knowledge of bounds on the system uncertainties, as is assumed in conventional chattering free control methods. Subsequently, the robustness and stability of the decentralized full-order sliding mode control of multiple UAVs are synthesized. Numerical simulation results illustrate the effectiveness of the proposed control framework in achieving robust 3D formation flight of the multi-UAV system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
A Generalized Framework for Reduced-Order Modeling of a Wind Turbine Wake
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, Nicholas; Viggiano, Bianca; Calaf, Marc
A reduced-order model for a wind turbine wake is sought from large eddy simulation data. Fluctuating velocity fields are combined in the correlation tensor to form the kernel of the proper orthogonal decomposition (POD). The POD modes resulting from the decomposition represent the spatially coherent turbulence structures in the wind turbine wake; eigenvalues delineate the relative amount of turbulent kinetic energy associated with each mode. Back-projecting the POD modes onto the velocity snapshots produces dynamic coefficients that express the amplitude of each mode in time. A reduced-order model of the wind turbine wake (wakeROM) is defined through a series of polynomial parameters that quantify mode interaction and the evolution of each POD mode coefficient. The resulting system of ordinary differential equations models the wind turbine wake composed only of the large-scale turbulent dynamics identified by the POD. Tikhonov regularization is used to recalibrate the dynamical system by adding constraints to the minimization that seeks the polynomial parameters, reducing error in the modeled mode coefficients. The wakeROM is periodically reinitialized with new initial conditions found by relating the incoming turbulent velocity to the POD mode coefficients through a series of open-loop transfer functions. The wakeROM reproduces mode coefficients to within 25.2%, quantified through the normalized root-mean-square error. A high-level view of the modeling approach is provided as a platform to discuss promising research directions, alternate processes that could benefit stability and efficiency, and desired extensions of the wakeROM.
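The POD step described above (snapshots decomposed into spatial modes, modal energies, and time coefficients) can be sketched with an SVD on synthetic data standing in for the LES snapshots:

```python
import numpy as np

# Minimal POD sketch: fluctuating velocity snapshots form the columns of a
# matrix whose SVD yields spatial modes, modal energies, and time-varying
# mode coefficients. Synthetic data stand in for the LES snapshots.

rng = np.random.default_rng(0)
n_points, n_snapshots = 200, 50
snapshots = rng.standard_normal((n_points, n_snapshots))
snapshots -= snapshots.mean(axis=1, keepdims=True)  # keep fluctuating part

# Economy SVD: columns of U are spatial POD modes; s**2 gives the relative
# turbulent kinetic energy of each mode; s * Vt are the mode coefficients.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy_fraction = s ** 2 / np.sum(s ** 2)
coeffs = s[:, None] * Vt  # a_k(t), one row per mode

# Back-projection check: modes times coefficients reproduce the snapshots.
print(np.allclose(U @ coeffs, snapshots))  # True
```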
NASA Astrophysics Data System (ADS)
Tsai, Nan-Chyuan; Sue, Chung-Yang
2010-02-01
Owing to the imposed but undesired accelerations such as quadrature error and cross-axis perturbation, the micro-machined gyroscope is not unconditionally retained at its resonant mode. Once the preset resonance is not sustained, the performance of the micro-gyroscope is accordingly degraded. In this article, a direct model reference adaptive control loop integrated with a modified disturbance estimating observer (MDEO) is proposed to guarantee the resonant oscillations at drive mode and counterbalance the undesired disturbance mainly caused by quadrature error and cross-axis perturbation. The parameters of the controller are updated on-line from the dynamic error between the MDEO output and the expected response. In addition, Lyapunov stability theory is employed to examine the stability of the closed-loop control system. Finally, the efficacy of the approach for the exerted time-varying angular rate, which is to be detected and measured by the gyroscope, is verified by intensive simulations.
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
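The adaptive step-size idea (a large step for fast initial convergence, then a small step for low steady-state error) can be illustrated with a far simpler scalar LMS equalizer. Everything below (toy channel, tap count, step schedule) is an illustrative assumption, not the paper's 3-mode dual-polarization MIMO system:

```python
import numpy as np

# Scalar LMS equalizer with a scheduled step size, as a hedged stand-in
# for the paper's MIMO MMSE equalizer. Channel, delay, and step schedule
# are illustrative assumptions.

rng = np.random.default_rng(1)
n, taps = 5000, 5
channel = np.array([1.0, 0.4, 0.2])        # assumed toy channel response
symbols = rng.choice([-1.0, 1.0], size=n)  # BPSK training symbols
received = np.convolve(symbols, channel)[:n]
received += 0.01 * rng.standard_normal(n)

w = np.zeros(taps)
sq_errors = []
for i in range(taps - 1, n):
    x = received[i - taps + 1:i + 1][::-1]  # equalizer input (newest first)
    e = symbols[i - 2] - w @ x              # error vs. delayed symbol
    mu = 0.1 if i < 1000 else 0.01          # large step first, then small
    w += mu * e * x                         # LMS update
    sq_errors.append(e * e)

# The equalizer converges: late squared error is far below the early error.
print(np.mean(sq_errors[-500:]) < np.mean(sq_errors[:500]))  # True
```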
Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina
2017-07-01
A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
NASA Technical Reports Server (NTRS)
Belcastro, C. M.
1984-01-01
Advanced composite aircraft designs include fault-tolerant computer-based digital control systems with high reliability requirements for adverse as well as optimum operating environments. Since aircraft penetrate intense electromagnetic fields during thunderstorms, onboard computer systems may be subjected to field-induced transient voltages and currents resulting in functional error modes which are collectively referred to as digital system upset. A methodology was developed for assessing the upset susceptibility of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied via tests which involved the random input of analog transients, which model lightning-induced signals, onto interface lines of an 8080-based microcomputer from which upset error data were recorded. The application of Markov modeling to upset susceptibility estimation is discussed and a stochastic model is developed.
NASA Astrophysics Data System (ADS)
Guchhait, Shyamal; Banerjee, Biswanath
2018-04-01
In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Complying with this idea, an identification procedure is framed as an optimization problem where the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, wherein solving a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty based approach is followed to incorporate measured data. The penalization parameter not only helps in incorporating corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.
High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link
NASA Technical Reports Server (NTRS)
Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli
2016-01-01
We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, including a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration at a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10 (exp -15) with 10 second averaging time. Ranging and range-rate as a function of the bit error rate of the communication link are reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10 (exp -15) with 10 second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the performance of both operating modes.
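A stability number like the quoted Allan deviation is computed from a time series of phase (range) measurements. The sketch below implements the simpler overlapping Allan deviation as an illustration; the modified Allan deviation quoted in the abstract adds a further phase average that is not shown here:

```python
import numpy as np

# Overlapping Allan deviation from phase data x sampled at interval tau0,
# at averaging time tau = m * tau0. Hedged illustration only: the paper
# quotes the *modified* Allan deviation, which additionally averages the
# phase over each window.

def overlapping_adev(x, tau0, m):
    x = np.asarray(x, dtype=float)
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second phase differences
    avar = np.sum(d * d) / (2.0 * (m * tau0) ** 2 * d.size)
    return np.sqrt(avar)

# Sanity check: a purely linear phase ramp (a constant frequency offset)
# has zero Allan deviation, since its second differences vanish.
x_ramp = 0.5 * np.arange(1000.0)
print(overlapping_adev(x_ramp, tau0=1.0, m=10))  # 0.0
```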
Study on the stability and reliability of Clinotron at Y-band
NASA Astrophysics Data System (ADS)
Li, Shuang; Wang, Jianguo; Chen, Zaigao; Wang, Guangqiang; Wang, Dongyang; Teng, Yan
2017-11-01
To improve the stability and reliability of the Clinotron at the Y-band, some key issues are researched, such as the synchronous operating mode, heat accumulation on the slow-wave structure, and errors in micro-fabrication. By analyzing the dispersion relationship, the working mode is determined to be the TM10 mode. The problem of heat dissipation on the comb is researched to trade off the choice of suitable working conditions, ensuring that the safety and efficiency of the device are guaranteed simultaneously. The effect of tolerance on the device's performance is also studied to determine the acceptable error during micro-fabrication, taking both the validity of the device and the cost of fabrication into consideration. Finally, the performance of the Clinotron under the optimized conditions demonstrates that it can work steadily at 315.89 GHz with an output power of about 12 W, showing advanced stability and reliability.
Sayler, Elaine; Eldredge-Hindy, Harriet; Dinome, Jessie; Lockamy, Virginia; Harrison, Amy S
2015-01-01
The planning procedure for Valencia and Leipzig surface applicators (VLSAs) (Nucletron, Veenendaal, The Netherlands) differs substantially from CT-based planning; the unfamiliarity could lead to significant errors. This study applies failure modes and effects analysis (FMEA) to high-dose-rate (HDR) skin brachytherapy using VLSAs to ensure safety and quality. A multidisciplinary team created a protocol for HDR VLSA skin treatments and applied FMEA. Failure modes were identified and scored by severity, occurrence, and detectability. The clinical procedure was then revised to address high-scoring process nodes. Several key components were added to the protocol to minimize risk probability numbers. (1) Diagnosis, prescription, applicator selection, and setup are reviewed at weekly quality assurance rounds. Peer review reduces the likelihood of an inappropriate treatment regime. (2) A template for HDR skin treatments was established in the clinic's electronic medical record system to standardize treatment instructions. This reduces the chances of miscommunication between the physician and planner as well as increases the detectability of an error. (3) A screen check was implemented during the second check to increase detectability of an error. (4) To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display, facilitating data entry and verification. (5) VLSAs are color coded and labeled to match the electronic medical record prescriptions, simplifying in-room selection and verification. Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLSA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Muroi, Maki; Shen, Jay J; Angosta, Alona
2017-02-01
Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%); among this class, anticoagulants had the most errors (11.3%). Antimicrobials were the second most common drug class associated with errors (19.1%), and vancomycin was the most common antimicrobial that caused errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than in any other hospital units. Ten percent of MEs reached the patients with harm and 11% reached the patients with increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.
Tsukayama, Hiroshi
2008-01-01
An evidence-based approach to the safety of acupuncture had been lagging both in the West and the East, but reliable data based on several prospective surveys were published after the late 1990s. In the present article, focusing on ‘Japanese acupuncture’, we review relevant case reports and prospective surveys on adverse events in Japan, assess the safety of acupuncture practice in this country, and suggest a strategy for reducing therapist errors. Based on the prospective surveys, it seems reasonable to suppose that serious adverse events are rare in standard practice by adequately trained acupuncturists, regardless of country or mode of practice. Almost all adverse reactions commonly seen in acupuncture practice, such as fatigue, drowsiness, aggravation, minor bleeding, pain on insertion and subcutaneous hemorrhage, are mild and transient, although we should be cautious of secondary injury following drowsiness and needle fainting. Having demonstrated that acupuncture is inherently safe, we have been focusing on how to reduce the risk of negligence in Japan, as well as educating acupuncturists more about safe depth of insertion and infection control. An incident reporting and feedback system is a useful strategy for reducing therapist errors such as forgotten needles. For the benefit of acupuncture patients in Japan, it is important to establish mandatory postgraduate clinical training and a continuing education system. PMID:18955234
Algorithm for ion beam figuring of low-gradient mirrors.
Jiao, Changjun; Li, Shengyi; Xie, Xuhui
2009-07-20
Ion beam figuring technology for low-gradient mirrors is discussed. Ion beam figuring is a noncontact machining technique in which a beam of high-energy ions is directed toward a target workpiece to remove material in a predetermined and controlled fashion. Owing to this noncontact mode of material removal, problems associated with tool wear and edge effects, which are common in conventional contact polishing processes, are avoided. Based on the Bayesian principle, an iterative dwell time algorithm for planar mirrors is deduced from the computer-controlled optical surfacing (CCOS) principle. Given the properties of the removal function, the shaping process for low-gradient mirrors can be approximated by the linear model for planar mirrors. On this basis, an error surface figuring technology for low-gradient mirrors with a linear path is established. Owing to the near-Gaussian property of the removal function, the figuring process with a spiral path can be described by the conventional linear CCOS principle, and a Bayesian-based iterative algorithm can be used to deconvolve the dwell time. Moreover, a selection criterion for the spiral parameter is given. Ion beam figuring with a spiral scan path based on these methods can be used to figure mirrors with non-axis-symmetrical errors. Experiments on SiC chemical vapor deposition planar and Zerodur paraboloid samples were performed, and the final surface errors are all below 1/100 lambda.
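A Bayesian iterative dwell-time deconvolution of the kind described above can be sketched in one dimension with a Richardson-Lucy-style multiplicative update: the desired removal map is the convolution of the beam removal function with the dwell time, and the multiplicative update keeps the dwell time non-negative. The removal function, target map, and iteration count below are illustrative assumptions; the paper's 2-D spiral and linear path details are omitted.

```python
import numpy as np

# 1-D Richardson-Lucy-style dwell-time iteration (hedged sketch):
# desired removal e = removal function r convolved with dwell time t.
# The multiplicative update preserves non-negativity of t.

def dwell_time_rl(e, r, iters=300):
    t = np.full_like(e, max(e.mean(), 1e-6) / r.sum())  # flat positive start
    for _ in range(iters):
        model = np.convolve(t, r, mode="same")
        ratio = e / np.maximum(model, 1e-12)
        t *= np.convolve(ratio, r[::-1], mode="same") / r.sum()
    return t

# Consistent toy problem: a known dwell-time profile blurred by a Gaussian
# removal function gives the target removal map (all values illustrative).
xs = np.linspace(-3.0, 3.0, 101)
r = np.exp(-4.0 * np.linspace(-0.6, 0.6, 21) ** 2)  # beam footprint
t_true = 0.1 * (1.0 + 0.5 * np.sin(xs))
e = np.convolve(t_true, r, mode="same")

t_est = dwell_time_rl(e, r)
residual = e - np.convolve(t_est, r, mode="same")
print(float(np.max(np.abs(residual))))  # small residual removal error
```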
Feedforward Equalizers for MDM-WDM in Multimode Fiber Interconnects
NASA Astrophysics Data System (ADS)
Masunda, Tendai; Amphawan, Angela
2018-04-01
In this paper, we present new tap configurations of a feedforward equalizer to mitigate mode coupling in a 60-Gbps 18-channel mode-wavelength division multiplexing system over a 2.5-km-long multimode fiber. The performance of the equalization is assessed through analyses of eye diagrams, power coupling coefficients and bit-error rates.
47 CFR 80.1125 - Search and rescue coordinating communications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... station involved may impose silence on stations which interfere with that traffic. This instruction may be... “silence, m'aider”; (2) In narrow-band direct-printing telegraphy normally using forward-error correcting mode, the signal SILENCE MAYDAY. However, the ARQ mode may be used when it is advantageous to do so. (f...
Methods and circuitry for reconfigurable SEU/SET tolerance
NASA Technical Reports Server (NTRS)
Shuler, Jr., Robert L. (Inventor)
2010-01-01
A device is disclosed in one embodiment that has multiple identical sets of programmable functional elements, programmable routing resources, and majority voters that correct errors. The voters accept a mode input for a redundancy mode and a split mode. In the redundancy mode, the programmable functional elements are identical and are programmed identically so the voters produce an output corresponding to the majority of inputs that agree. In a split mode, each voter selects a particular programmable functional element output as the output of the voter. Therefore, in the split mode, the programmable functional elements can perform different functions, operate independently, and/or be connected together to process different parts of the same problem.
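A minimal sketch of the voter behaviour described above, assuming word-wide bitwise majority in the redundancy mode and a simple pass-through in the split mode (the interface is illustrative, not the patent's circuitry):

```python
def voter(inputs, mode, select=0):
    """Majority voter with a redundancy mode and a split mode.

    In redundancy mode, output the value on which a majority of the three
    (identically programmed) inputs agree, bit by bit; in split mode, pass
    through the selected input so each functional element can operate
    independently.
    """
    if mode == "split":
        return inputs[select]
    a, b, c = inputs
    return (a & b) | (a & c) | (b & c)   # bitwise 2-of-3 majority
```

The bitwise form means a single-event upset flipping one bit in one copy is masked in redundancy mode, while split mode sacrifices that masking for extra independent capacity.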
Frequency encoded auditory display of the critical tracking task
NASA Technical Reports Server (NTRS)
Stevenson, J.
1984-01-01
The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. At asymptote, performance on the critical tracking task with the combined display was slightly, but significantly, better than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of that using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement grew with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability.
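The auditory encoding described above can be sketched as a log-frequency mapping over a six-octave range centred at 1 kHz; normalizing the tracking error to [-1, 1] is an assumption made for illustration.

```python
def error_to_frequency(error, f_center=1000.0, octaves=6.0):
    """Map a normalized tracking error in [-1, 1] to a tone frequency
    spanning `octaves` octaves centred on f_center, so zero error sits
    at the (notch-marked) center frequency."""
    error = max(-1.0, min(1.0, error))           # clamp to display range
    return f_center * 2.0 ** (error * octaves / 2.0)
```

With these defaults, full-scale errors map to 125 Hz and 8 kHz, three octaves either side of the 1 kHz center.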
Heralded creation of photonic qudits from parametric down-conversion using linear optics
NASA Astrophysics Data System (ADS)
Yoshikawa, Jun-ichi; Bergmann, Marcel; van Loock, Peter; Fuwa, Maria; Okada, Masanori; Takase, Kan; Toyama, Takeshi; Makino, Kenzo; Takeda, Shuntaro; Furusawa, Akira
2018-05-01
We propose an experimental scheme to generate, in a heralded fashion, arbitrary quantum superpositions of two-mode optical states with a fixed total photon number n based on weakly squeezed two-mode squeezed state resources (obtained via weak parametric down-conversion), linear optics, and photon detection. Arbitrary d-level (qudit) states can be created this way, where d = n + 1. Furthermore, we experimentally demonstrate our scheme for n = 2. The resulting qutrit states are characterized via optical homodyne tomography. We also discuss possible extensions to more than two modes, concluding that, in general, our approach ceases to work in this case. For illustration and with regard to possible applications, we explicitly calculate a few examples such as NOON states and logical qubit states for quantum error correction. In particular, our approach enables one to construct bosonic qubit error-correction codes against amplitude damping (photon loss) with a typical suppression of √n − 1 losses and spanned by two logical codewords that each correspond to an n-photon superposition for two bosonic modes.
Methods for multiple-telescope beam imaging and guiding in the near-infrared
NASA Astrophysics Data System (ADS)
Anugu, N.; Amorim, A.; Gordo, P.; Eisenhauer, F.; Pfuhl, O.; Haug, M.; Wieprecht, E.; Wiezorrek, E.; Lima, J.; Perrin, G.; Brandner, W.; Straubmeier, C.; Le Bouquin, J.-B.; Garcia, P. J. V.
2018-05-01
Atmospheric turbulence and precise measurement of the astrometric baseline vector between any two telescopes are two major challenges in implementing phase-referenced interferometric astrometry and imaging. They limit the performance of a fibre-fed interferometer by degrading the instrument sensitivity and the precision of astrometric measurements and by introducing image reconstruction errors due to inaccurate phases. A multiple-beam acquisition and guiding camera was built to meet these challenges for a recently commissioned four-beam combiner instrument, GRAVITY, at the European Southern Observatory Very Large Telescope Interferometer. For each telescope beam, it measures (a) field tip-tilts by imaging stars in the sky, (b) telescope pupil shifts by imaging pupil reference laser beacons installed on each telescope using a 2 × 2 lenslet and (c) higher-order aberrations using a 9 × 9 Shack-Hartmann. The telescope pupils are imaged to provide visual monitoring while observing. These measurements enable active field and pupil guiding by actuating a train of tip-tilt mirrors placed in the pupil and field planes, respectively. The Shack-Hartmann measured quasi-static aberrations are used to focus the auxiliary telescopes and allow the possibility of correcting the non-common path errors between the adaptive optics systems of the unit telescopes and GRAVITY. The guiding stabilizes the light injection into single-mode fibres, increasing sensitivity and reducing the astrometric and image reconstruction errors. The beam guiding enables us to achieve an astrometric error of less than 50 μas. Here, we report on the data reduction methods and laboratory tests of the multiple-beam acquisition and guiding camera and its performance on-sky.
Prediction of human errors by maladaptive changes in event-related brain networks.
Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus
2008-04-22
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.
Development of a Low-Noise High Common-Mode-Rejection Instrumentation Amplifier. M.S. Thesis
NASA Technical Reports Server (NTRS)
Rush, Kenneth; Blalock, T. V.; Kennedy, E. J.
1975-01-01
Several previously used instrumentation amplifier circuits were examined to find limitations and possibilities for improvement. One general configuration is analyzed in detail, and methods for improvement are enumerated. An improved amplifier circuit is described and analyzed with respect to common mode rejection and noise. Experimental data are presented showing good agreement between calculated and measured common mode rejection ratio and equivalent noise resistance. The amplifier is shown to be capable of common mode rejection in excess of 140 db for a trimmed circuit at frequencies below 100 Hz and equivalent white noise below 3.0 nv/square root of Hz above 1000 Hz.
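The rejection figure quoted above can be related to gains through the standard definition of the common-mode rejection ratio; for instance, a differential gain 10^7 times the common-mode gain corresponds to 140 dB. This is a generic textbook relation, not the thesis's circuit analysis.

```python
import math

def cmrr_db(a_diff, a_cm):
    """Common-mode rejection ratio in dB from the differential gain
    a_diff and the common-mode gain a_cm."""
    return 20.0 * math.log10(abs(a_diff / a_cm))
```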
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
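As a back-of-the-envelope check of the link distances quoted above (an assumption-laden sketch: loss is taken to accrue linearly in dB with range, Beer-Lambert style, with geometric spreading ignored), a clear-ocean attenuation near 0.66 dB/m at 517 nm reproduces the ~148 m figure from a 97.1 dB budget.

```python
def max_link_distance(max_loss_db, attenuation_db_per_m):
    """Greatest range a link can close if attenuation accrues linearly
    in dB with distance (simple Beer-Lambert sketch)."""
    return max_loss_db / attenuation_db_per_m
```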
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis, correctly partitions the error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
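Fisher's method, named above as one way to combine single-phenomenology tests, multiplies independent p-values and refers -2 Σ ln p_i to a chi-squared distribution with 2k degrees of freedom. A stdlib-only sketch follows; the closed-form survival function used below is valid because the degrees of freedom are even.

```python
import math

def fisher_combined_p(p_values):
    """Fisher's method: combine k independent p-values via the statistic
    -2 * sum(ln p_i), chi-squared with 2k degrees of freedom under H0."""
    k = len(p_values)
    stat = -2.0 * sum(math.log(p) for p in p_values)
    # Chi-squared survival function with 2k dof: for even dof this is the
    # finite sum exp(-x) * sum_{i=0}^{k-1} x**i / i!  with x = stat / 2.
    x = stat / 2.0
    term, s = 1.0, 1.0
    for i in range(1, k):
        term *= x / i
        s += term
    return math.exp(-x) * s
```

For a single test the method reduces to the original p-value; combining two moderately significant tests yields a smaller joint p-value than either alone.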
NASA Technical Reports Server (NTRS)
Spector, E.; LeBlanc, A.; Shackelford, L.
1995-01-01
This study reports on the short-term in vivo precision and absolute measurements of three combinations of whole-body scan modes and analysis software using a Hologic QDR 2000 dual-energy X-ray densitometer. A group of 21 normal, healthy volunteers (11 male and 10 female) were scanned six times, receiving one pencil-beam and one array whole-body scan on three occasions approximately 1 week apart. The following combinations of scan modes and analysis software were used: pencil-beam scans analyzed with Hologic's standard whole-body software (PB scans); the same pencil-beam scans analyzed with Hologic's newer "enhanced" software (EPB scans); and array scans analyzed with the enhanced software (EA scans). Precision values (% coefficient of variation, %CV) were calculated for whole-body and regional bone mineral content (BMC), bone mineral density (BMD), fat mass, lean mass, %fat and total mass. In general, there was no significant difference among the three scan types with respect to short-term precision of BMD and only slight differences in the precision of BMC. Precision of BMC and BMD for all three scan types was excellent: < 1% CV for whole-body values, with most regional values in the 1%-2% range. Pencil-beam scans demonstrated significantly better soft tissue precision than did array scans. Precision errors for whole-body lean mass were: 0.9% (PB), 1.1% (EPB) and 1.9% (EA). Precision errors for whole-body fat mass were: 1.7% (PB), 2.4% (EPB) and 5.6% (EA). EPB precision errors were slightly higher than PB precision errors for lean, fat and %fat measurements of all regions except the head, although these differences were significant only for the fat and %fat of the arms and legs. In addition, EPB precision values exhibited greater individual variability than PB precision values. Finally, absolute values of bone and soft tissue were compared among the three combinations of scan and analysis modes.
BMC, BMD, fat mass, %fat and lean mass were significantly different between PB scans and either of the EPB or EA scans. Differences were as large as 20%-25% for certain regional fat and BMD measurements. Additional work may be needed to examine the relative accuracy of the scan mode/software combinations and to identify reasons for the differences in soft tissue precision with the array whole-body scan mode.
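The short-term precision figures above can be computed from repeat scans as a percent coefficient of variation. One common convention, assumed here for illustration, is the root-mean-square of per-subject CVs across the group:

```python
import numpy as np

def precision_cv_percent(repeat_scans):
    """Short-term precision as %CV: per-subject SD over repeat scans
    divided by the subject mean, RMS-averaged across subjects."""
    scans = np.asarray(repeat_scans, dtype=float)   # shape: subjects x repeats
    cv = scans.std(axis=1, ddof=1) / scans.mean(axis=1)
    return 100.0 * np.sqrt(np.mean(cv ** 2))
```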
Critical error fields for locked mode instability in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Haye, R.J.; Fitzpatrick, R.; Hender, T.C.
1992-07-01
Otherwise stable discharges can become nonlinearly unstable to disruptive locked modes when subjected to a resonant m=2, n=1 error field from irregular poloidal field coils, as in DIII-D [Nucl. Fusion 31, 875 (1991)], or from resonant magnetic perturbation coils, as in COMPASS-C [Proceedings of the 18th European Conference on Controlled Fusion and Plasma Physics, Berlin (EPS, Petit-Lancy, Switzerland, 1991), Vol. 15C, Part II, p. 61]. Experiments in Ohmically heated deuterium discharges with q ≈ 3.5, n̄ ≈ 2 × 10^19 m^-3, and B_T ≈ 1.2 T show that a much larger relative error field (B_r21/B_T ≈ 1 × 10^-3) is required to produce a locked mode in the small, rapidly rotating plasma of COMPASS-C (R_0 = 0.56 m, f ≈ 13 kHz) than in the medium-sized plasmas of DIII-D (R_0 = 1.67 m, f ≈ 1.6 kHz), where the critical relative error field is B_r21/B_T ≈ 2 × 10^-4. This dependence of the instability threshold is explained by a nonlinear tearing theory of the interaction of resonant magnetic perturbations with rotating plasmas, which predicts that the critical error field scales as (f R_0/B_T)^(4/3) n̄^(2/3). Extrapolating from existing devices, the predicted critical field for locked modes in Ohmic discharges on the International Thermonuclear Experimental Reactor (ITER) [Nucl. Fusion 30, 1183 (1990)] (f = 0.17 kHz, R_0 = 6.0 m, B_T = 4.9 T, n̄ = 2 × 10^19 m^-3) is B_r21/B_T ≈ 2 × 10^-5.
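The quoted scaling law can be encoded directly; since the theory's prefactor is device-specific and omitted here, only ratios between devices are meaningful in this sketch.

```python
def critical_field_scaling(f_hz, r0_m, b_t, n_e):
    """Relative scaling of the critical error field for mode locking,
    B_r21/B_T ∝ (f * R0 / B_T)**(4/3) * n**(2/3); prefactor omitted."""
    return (f_hz * r0_m / b_t) ** (4.0 / 3.0) * n_e ** (2.0 / 3.0)
```

Consistent with the experiments above, the faster-rotating COMPASS-C plasma tolerates a larger relative error field than DIII-D at similar density and field.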
Finite grid instability and spectral fidelity of the electrostatic Particle-In-Cell algorithm
Huang, C. -K.; Zeng, Y.; Wang, Y.; ...
2016-10-01
The origin of the Finite Grid Instability (FGI) is studied by resolving the dynamics in the 1D electrostatic Particle-In-Cell (PIC) model in the spectral domain at the single particle level and at the collective motion level. The spectral fidelity of the PIC model is contrasted with the underlying physical system or the gridless model. The systematic spectral phase and amplitude errors from the charge deposition and field interpolation are quantified for common particle shapes used in PIC models. Lastly, it is shown through such analysis and in simulations that the lack of spectral fidelity relative to the physical system, due to the existence of aliased spatial modes, is the major cause of the FGI in the PIC model.
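One of the common particle shapes whose deposition errors the paper quantifies is the first-order cloud-in-cell (CIC) shape; a minimal 1-D periodic deposition sketch (illustrative, not the paper's code):

```python
import numpy as np

def deposit_cic(positions, grid_n, dx=1.0, charge=1.0):
    """First-order (cloud-in-cell) charge deposition on a periodic 1-D
    grid: each particle's charge is split linearly between its two
    nearest grid nodes, conserving total charge."""
    rho = np.zeros(grid_n)
    for x in positions:
        s = x / dx
        j = int(np.floor(s))
        w = s - j                        # fractional distance past node j
        rho[j % grid_n] += charge * (1.0 - w)
        rho[(j + 1) % grid_n] += charge * w
    return rho
```

The linear split is exactly the triangular particle shape whose Fourier transform sets the aliased-mode amplitudes analysed in the paper.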
Differential Optical Synthetic Aperture Radar
Stappaerts, Eddy A.
2005-04-12
A new differential technique for forming optical images using a synthetic aperture is introduced. This differential technique utilizes a single aperture to obtain unique (N) phases that can be processed to produce a synthetic aperture image at points along a trajectory. This is accomplished by dividing the aperture into two equal "subapertures", each having a width that is less than the actual aperture, along the direction of flight. As the platform flies along a given trajectory, a source illuminates objects and the two subapertures are configured to collect return signals. The technique of the invention is designed to cancel common-mode errors, trajectory deviations from a straight line, and laser phase noise to provide the set of resultant (N) phases that can produce an image having a spatial resolution corresponding to a synthetic aperture.
Dynamically tuned vibratory micromechanical gyroscope accelerometer
NASA Astrophysics Data System (ADS)
Lee, Byeungleul; Oh, Yong-Soo; Park, Kyu-Yeon; Ha, Byeoungju; Ko, Younil; Kim, Jeong-gon; Kang, Seokjin; Choi, Sangon; Song, Ci M.
1997-11-01
A comb-driven vibratory micro-gyroscope, which utilizes dynamically tunable resonant modes for higher rate sensitivity without acceleration error, has been developed and analyzed. Surface micromachining technology is used to fabricate the gyroscope, which has a vibrating part of 400 × 600 micrometers, with a 6-mask process; the polysilicon structural layer is deposited by LPCVD at 625 degrees C. The gyroscope and the interface electronics are housed in a hermetically sealed vacuum package for a low vibrational damping condition. Built on a folded-beam structure, the gyroscope is designed to be driven parallel to the substrate by electrostatic forces and to respond to Coriolis forces vertically. In this scheme, the resonant frequency of the driving mode lies below that of the sensing mode, so the sensing mode can be adjusted through a negative-stiffness effect by applying an inter-plate voltage to tune the vibration modes for higher rate sensitivity. Unfortunately, such a micromechanical vibratory gyroscope is also sensitive to vertical acceleration, especially when the vibrating structure has low stiffness in order to detect a very small Coriolis force. In this study, we distinguished the rate output from the acceleration error with a phase-sensitive synchronous demodulator and devised a feedback loop that maintains the resonant frequency of the vertical sensing mode by varying the inter-plate tuning voltage according to the acceleration output. Therefore, this gyroscope has high rate sensitivity without acceleration error and can also be used as a resonant accelerometer. The gyroscope was tested on a rotational rate table with the resonant frequencies separated by 50 Hz via the dynamic-tuning feedback loop. A self-sustained oscillating loop applies a DC 2 V + AC 30 mVpk driving voltage to the drive electrodes. The gyroscope achieved 0.1 deg/sec resolution, 50 Hz bandwidth, and 1.3 mV/deg/sec sensitivity.
NASA Astrophysics Data System (ADS)
Akbarashrafi, F.; Al-Attar, D.; Deuss, A.; Trampert, J.; Valentine, A. P.
2018-04-01
Seismic free oscillations, or normal modes, provide a convenient tool to calculate low-frequency seismograms in heterogeneous Earth models. A procedure called `full mode coupling' allows the seismic response of the Earth to be computed. However, in order to be theoretically exact, such calculations must involve an infinite set of modes. In practice, only a finite subset of modes can be used, introducing an error into the seismograms. By systematically increasing the number of modes beyond the highest frequency of interest in the seismograms, we investigate the convergence of full-coupling calculations. As a rule-of-thumb, it is necessary to couple modes 1-2 mHz above the highest frequency of interest, although results depend upon the details of the Earth model. This is significantly higher than has previously been assumed. Observations of free oscillations also provide important constraints on the heterogeneous structure of the Earth. Historically, this inference problem has been addressed by the measurement and interpretation of splitting functions. These can be seen as secondary data extracted from low frequency seismograms. The measurement step necessitates the calculation of synthetic seismograms, but current implementations rely on approximations referred to as self- or group-coupling and do not use fully accurate seismograms. We therefore also investigate whether a systematic error might be present in currently published splitting functions. We find no evidence for any systematic bias, but published uncertainties must be doubled to properly account for the errors due to theoretical omissions and regularization in the measurement process. Correspondingly, uncertainties in results derived from splitting functions must also be increased. As is well known, density has only a weak signal in low-frequency seismograms. Our results suggest this signal is of similar scale to the true uncertainties associated with currently published splitting functions. 
Thus, it seems that great care must be taken in any attempt to robustly infer details of Earth's density structure using current splitting functions.
Doytchev, Doytchin E; Szwillus, Gerd
2009-11-01
Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian Hydro power plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the afore-mentioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.
Atom-counting in High Resolution Electron Microscopy: TEM or STEM - That's the question.
Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S
2017-03-01
In this work, a recently developed quantitative approach based on the principles of detection theory is used in order to determine the possibilities and limitations of High Resolution Scanning Transmission Electron Microscopy (HR STEM) and HR TEM for atom-counting. So far, HR STEM has been shown to be an appropriate imaging mode to count the number of atoms in a projected atomic column. Recently, it has been demonstrated that HR TEM, when using negative spherical aberration imaging, is suitable for atom-counting as well. The capabilities of both imaging techniques are investigated and compared using the probability of error as a criterion. It is shown that for the same incoming electron dose, HR STEM outperforms HR TEM under common practice standards, i.e. when the decision is based on the probability function of the peak intensities in HR TEM and of the scattering cross-sections in HR STEM. If the atom-counting decision is based on the joint probability function of the image pixel values, the dependence of all image pixel intensities as a function of thickness should be known accurately. Under this assumption, the probability of error may decrease significantly for atom-counting in HR TEM and may, in theory, become lower as compared to HR STEM under the predicted optimal experimental settings. However, the commonly used standard for atom-counting in HR STEM leads to a high performance and has been shown to work in practice. Copyright © 2017 Elsevier B.V. All rights reserved.
Wang, Zhangjun; Liu, Zhishen; Liu, Liping; Wu, Songhua; Liu, Bingyi; Li, Zhigang; Chu, Xinzhao
2010-12-20
An incoherent Doppler wind lidar based on iodine edge filters has been developed at the Ocean University of China for remote measurements of atmospheric wind fields. The lidar is compact enough to fit in a minivan for mobile deployment. With its sophisticated and user-friendly data acquisition and analysis system (DAAS), this lidar has made a variety of line-of-sight (LOS) wind measurements in different operational modes. Through carefully developed data retrieval procedures, various wind products are provided by the lidar, including wind profile, LOS wind velocities in plan position indicator (PPI) and range height indicator (RHI) modes, and sea surface wind. Data are processed and displayed in real time, and continuous wind measurements have been demonstrated for as many as 16 days. Full-azimuth-scanned wind measurements in PPI mode and full-elevation-scanned wind measurements in RHI mode have been achieved with this lidar. The detection range of LOS wind velocity PPI and RHI reaches 8-10 km at night and 6-8 km during daytime with range resolution of 10 m and temporal resolution of 3 min. In this paper, we introduce the DAAS architecture and describe the data retrieval methods for various operation modes. We present the measurement procedures and results of LOS wind velocities in PPI and RHI scans along with wind profiles obtained by Doppler beam swing. The sea surface wind measured for the sailing competition during the 2008 Beijing Olympics is also presented. The precision and accuracy of wind measurements are estimated through analysis of the random errors associated with photon noise and the systematic errors introduced by the assumptions made in data retrieval. The three assumptions of horizontal homogeneity of atmosphere, close-to-zero vertical wind, and uniform sensitivity are made in order to experimentally determine the zero wind ratio and the measurement sensitivity, which are important factors in LOS wind retrieval. 
Deviations may occur under certain meteorological conditions, leading to bias in these situations. Based on the error analyses and measurement results, we point out the application ranges of this Doppler lidar and propose several paths for future improvement.
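The Doppler-beam-swing retrieval mentioned above inverts line-of-sight velocities for the horizontal wind under the same assumptions the paper states (horizontal homogeneity, near-zero vertical wind). A two-beam sketch with east- and north-pointing beams at a common elevation (the beam geometry is an assumption for illustration):

```python
import math

def dbs_wind(v_los_east, v_los_north, elevation_deg):
    """Horizontal wind (u eastward, v northward) from two line-of-sight
    velocities measured toward east and north at a common elevation
    angle, assuming horizontal homogeneity and zero vertical wind."""
    c = math.cos(math.radians(elevation_deg))
    return v_los_east / c, v_los_north / c
```

The 1/cos(elevation) factor shows why steep beams amplify LOS measurement noise in the retrieved horizontal wind.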
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nair, Ranjith
2011-09-15
We consider the problem of distinguishing, with minimum probability of error, two optical beam-splitter channels with unequal complex-valued reflectivities using general quantum probe states entangled over M signal and M' idler mode pairs, of which the signal modes are bounced off the beam splitter while the idler modes are retained losslessly. We obtain a lower bound on the output state fidelity valid for any pure input state. We define number-diagonal signal (NDS) states to be input states whose density operator in the signal modes is diagonal in the multimode number basis. For such input states, we derive series formulas for the optimal error probability, the output state fidelity, and the Chernoff-type upper bounds on the error probability. For the special cases of quantum reading of a classical digital memory and target detection (for which the reflectivities are real valued), we show that for a given input signal photon probability distribution, the fidelity is minimized by the NDS states with that distribution and that, for a given average total signal energy N_s, the fidelity is minimized by any multimode Fock state with N_s total signal photons. For reading of an ideal memory, it is shown that Fock state inputs minimize the Chernoff bound. For target detection under high-loss conditions, a no-go result showing the lack of appreciable quantum advantage over coherent-state transmitters is derived. A comparison of the error probability performance for quantum reading of number-state and two-mode squeezed vacuum state (or EPR state) transmitters relative to coherent-state transmitters is presented for various values of the reflectances. While the nonclassical states in general perform better than the coherent state, the quantitative performance gains differ depending on the values of the reflectances. The experimental outlook for realizing nonclassical gains from number-state transmitters with current technology at moderate to high values of the reflectances is argued to be good.
A tight Cramér-Rao bound for joint parameter estimation with a pure two-mode squeezed probe
NASA Astrophysics Data System (ADS)
Bradshaw, Mark; Assad, Syed M.; Lam, Ping Koy
2017-08-01
We calculate the Holevo Cramér-Rao bound for estimation of the displacement experienced by one mode of a two-mode squeezed vacuum state with squeezing r and find that it is equal to 4 exp(-2r). This equals the sum of the mean squared errors obtained from a dual homodyne measurement, indicating that the bound is tight and that the dual homodyne measurement is optimal.
Free vibration of multiwall carbon nanotubes
NASA Astrophysics Data System (ADS)
Wang, C. Y.; Ru, C. Q.; Mioduchowski, A.
2005-06-01
A multiple-elastic shell model is applied to systematically study free vibration of multiwall carbon nanotubes (MWNTs). Using Flugge [Stresses in Shells (Springer, Berlin, 1960)] equations of elastic shells, vibrational frequencies and associated modes are calculated for MWNTs of innermost radii 5 and 0.65 nm, respectively. The emphasis is placed on the effect of interlayer van der Waals (vdW) interaction on free vibration of MWNTs. Our results show that the interlayer vdW interaction has a crucial effect on radial (R) modes of large-radius MWNTs (e.g., of the innermost radius 5 nm), but is less pronounced for R modes of small-radius MWNTs (e.g., of the innermost radius 0.65 nm), and usually negligible for torsional (T) and longitudinal (L) modes of MWNTs. This is attributed to the fact that the interlayer vdW interaction, characterized by a radius-independent vdW interaction coefficient, depends on radial deflections only, and is dominant only for large-radius MWNTs of lower radial rigidity but less pronounced for small-radius MWNTs of much higher radial rigidity. As a result, the R modes of large-radius MWNTs are typically collective motions of almost all nested tubes, and the R modes of small-radius MWNTs, as well as the T and L modes of MWNTs, are basically vibrations of individual tubes. In particular, an approximate single-shell model is suggested to replace the multiple-shell model in calculating the lowest frequency of R mode of thin MWNTs (defined by the innermost radius-to-thickness ratio not less than 4) with relative errors less than 10%. In addition, the simplified Flugge single equation is adopted to substitute the exact Flugge equations in determining the R-mode frequencies of MWNTs with relative errors less than 10%.
Evolutionary Model and Oscillation Frequencies for α Ursae Majoris: A Comparison with Observations
NASA Astrophysics Data System (ADS)
Guenther, D. B.; Demarque, P.; Buzasi, D.; Catanzarite, J.; Laher, R.; Conrow, T.; Kreidl, T.
2000-02-01
Inspired by the observations of low-amplitude oscillations of α Ursae Majoris A by Buzasi et al. using the WIRE satellite, a grid of stellar evolutionary tracks has been constructed to derive physically consistent interior models for the nearby red giant. The pulsation properties of these models were then calculated and compared with the observations. It is found that, by adopting the correct metallicity and for a normal helium abundance, only models in the mass range of 4.0-4.5 Msolar fall within the observational error box for α UMa A. This mass range is compatible, within the uncertainties, with the mass derived from the astrometric mass function. Analysis of the pulsation spectra of the models indicates that the observed α UMa oscillations can be most simply interpreted as radial (i.e., l=0) p-mode oscillations of low radial order n. The lowest frequencies observed by Buzasi et al. are compatible, within the observational errors, with model frequencies of radial orders n=0, 1, and 2 for models in the mass range of 4.0-4.5 Msolar. The higher frequencies observed can also be tentatively interpreted as higher n-valued radial p-modes, if we allow that some n-values are not presently observed. The theoretical l=1, 2, and 3 modes in the observed frequency range are g-modes with a mixed mode character, that is, with p-mode-like characteristics near the surface and g-mode-like characteristics in the interior. The calculated radial p-mode frequencies are nearly equally spaced, separated by 2-3 μHz. The nonradial modes are very densely packed throughout the observed frequency range and, even if excited to significant amplitudes at the surface, are unlikely to be resolved by the present observations.
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
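A minimal sketch of the flow-down idea, with invented subsystem names and allocations (not the paper's): a top-level requirement is split into component allowances, and independent zero-mean error terms are rolled back up by root-sum-square (RSS), a common budgeting heuristic.

```python
import math

# Illustrative two-level error budget: subsystem allocations (invented
# names and numbers) are combined by root-sum-square, the usual rule for
# independent zero-mean error sources, and compared to the requirement.

budget = {
    "attitude_knowledge": 2.0,   # arcsec, allocated allowance
    "thermal_distortion": 1.5,
    "sensor_noise":       1.0,
    "alignment_residual": 0.5,
}

def rss(allocations):
    """Root-sum-square rollup of the individual allocations."""
    return math.sqrt(sum(v * v for v in allocations.values()))

requirement = 3.0  # arcsec, invented top-level requirement
total = rss(budget)
print(f"RSS total = {total:.3f} arcsec (requirement {requirement} arcsec)")
```

The same rollup applies when the "allocations" are model errors rather than hardware errors, which is the shift in perspective the paper advocates.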
The CHIC Model: A Global Model for Coupled Binary Data
ERIC Educational Resources Information Center
Wilderjans, Tom; Ceulemans, Eva; Van Mechelen, Iven
2008-01-01
Often problems result in the collection of coupled data, which consist of different N-way N-mode data blocks that have one or more modes in common. To reveal the structure underlying such data, an integrated modeling strategy, with a single set of parameters for the common mode(s), that is estimated based on the information in all data blocks, may…
FDDI network test adaptor error injection circuit
NASA Technical Reports Server (NTRS)
Eckenrode, Thomas (Inventor); Stauffer, David R. (Inventor); Stempski, Rebecca (Inventor)
1994-01-01
An apparatus for injecting errors into a FDDI token ring network is disclosed. The error injection scheme operates by fooling a FORMAC into thinking it sent a real frame of data. This is done by using two RAM buffers. The RAM buffer normally accessed by the RBC/DPC becomes a SHADOW RAM during error injection operation. A dummy frame is loaded into the shadow RAM in order to fool the FORMAC. This data is just like the data that would be used if sending a normal frame, with the restriction that it must be shorter than the error injection data. The other buffer, the error injection RAM, contains the error injection frame. The error injection data is sent out to the media by switching a multiplexor. When the FORMAC is done transmitting the data, the multiplexor is switched back to the normal mode. Thus, the FORMAC is unaware of what happened and the token ring remains operational.
Characterization of Mode 1 and Mode 2 delamination growth and thresholds in graphite/peek composites
NASA Technical Reports Server (NTRS)
Martin, Roderick H.; Murri, Gretchen B.
1988-01-01
Composite materials often fail by delamination. The onset and growth of delamination in AS4/PEEK, a tough thermoplastic matrix composite, were characterized for mode 1 and mode 2 loadings using the Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) test specimens. Delamination growth per fatigue cycle, da/dN, was related to the strain energy release rate, G, by means of a power law. However, the exponents of these power laws were too large for them to be used reliably as a life prediction tool: a small error in the estimated applied loads could lead to large errors in the predicted delamination growth rates. Hence, strain energy release rate thresholds, G_th, below which no delamination would occur, were also measured. Mode 1 and mode 2 threshold G values for no delamination growth were found by monitoring the number of cycles to delamination onset in the DCB and ENF specimens. The maximum applied G for which no delamination growth had occurred after at least 1,000,000 cycles was taken as the threshold strain energy release rate. Comments are given on how testing effects, such as facial interference or delamination front damage, may invalidate the experimental determination of the constants in the power-law expression.
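The load-sensitivity argument can be illustrated with a hedged numerical sketch (C and n below are invented, not the paper's fitted constants): with da/dN = C·G^n and G proportional to the square of the applied load, a small load error is amplified by roughly a factor of 2n in the predicted growth rate.

```python
# Sketch of power-law error amplification. With da/dN = C * G**n and
# G scaling as load squared, the relative error in da/dN is roughly
# 2*n times the relative load error, which is why large exponents make
# the power law unreliable for life prediction.

def growth_rate(G, C=1e-10, n=8.0):  # illustrative constants, not fitted
    return C * G ** n

G_true = 100.0                      # J/m^2, illustrative
load_error = 0.05                   # a 5% error in the applied load
G_meas = G_true * (1.0 + load_error) ** 2   # G scales with load squared

rate_true = growth_rate(G_true)
rate_meas = growth_rate(G_meas)
print(f"5% load error -> {rate_meas / rate_true - 1.0:.0%} error in da/dN")
```

Here a 5% load error more than doubles the predicted growth rate, which motivates the threshold (G_th) approach described in the abstract.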
Characterization of Mode I and Mode II delamination growth and thresholds in AS4/PEEK composites
NASA Technical Reports Server (NTRS)
Martin, Roderick H.; Murri, Gretchen Bostaph
1990-01-01
Composite materials often fail by delamination. The onset and growth of delamination in AS4/PEEK, a tough thermoplastic matrix composite, were characterized for mode 1 and mode 2 loadings using the Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) test specimens. Delamination growth per fatigue cycle, da/dN, was related to the strain energy release rate, G, by means of a power law. However, the exponents of these power laws were too large for them to be used reliably as a life prediction tool: a small error in the estimated applied loads could lead to large errors in the predicted delamination growth rates. Hence, strain energy release rate thresholds, G_th, below which no delamination would occur, were also measured. Mode 1 and mode 2 threshold G values for no delamination growth were found by monitoring the number of cycles to delamination onset in the DCB and ENF specimens. The maximum applied G for which no delamination growth had occurred after at least 1,000,000 cycles was taken as the threshold strain energy release rate. Comments are given on how testing effects, such as facial interference or delamination front damage, may invalidate the experimental determination of the constants in the power-law expression.
Regenbogen, Scott E; Greenberg, Caprice C; Studdert, David M; Lipsitz, Stuart R; Zinner, Michael J; Gawande, Atul A
2007-11-01
To identify the most prevalent patterns of technical errors in surgery, and evaluate commonly recommended interventions in light of these patterns. The majority of surgical adverse events involve technical errors, but little is known about the nature and causes of these events. We examined characteristics of technical errors and common contributing factors among closed surgical malpractice claims. Surgeon reviewers analyzed 444 randomly sampled surgical malpractice claims from four liability insurers. Among 258 claims in which injuries due to error were detected, 52% (n = 133) involved technical errors. These technical errors were further analyzed with a structured review instrument designed by qualitative content analysis. Forty-nine percent of the technical errors caused permanent disability; an additional 16% resulted in death. Two-thirds (65%) of the technical errors were linked to manual error, 9% to errors in judgment, and 26% to both manual and judgment error. A minority of technical errors involved advanced procedures requiring special training ("index operations"; 16%), surgeons inexperienced with the task (14%), or poorly supervised residents (9%). The majority involved experienced surgeons (73%), and occurred in routine, rather than index, operations (84%). Patient-related complexities (including emergencies, difficult or unexpected anatomy, and previous surgery) contributed to 61% of technical errors, and technology or systems failures contributed to 21%. Most technical errors occur in routine operations with experienced surgeons, under conditions of increased patient complexity or systems failure. Commonly recommended interventions, including restricting high-complexity operations to experienced surgeons, additional training for inexperienced surgeons, and stricter supervision of trainees, are likely to address only a minority of technical errors.
Surgical safety research should instead focus on improving decision-making and performance in routine operations for complex patients and circumstances.
ERIC Educational Resources Information Center
Polo, Blanca J.
2013-01-01
Much research has been done in regards to student programming errors, online education and studio-based learning (SBL) in computer science education. This study furthers this area by bringing together this knowledge and applying it to proactively help students overcome impasses caused by common student programming errors. This project proposes a…
Preserving flying qubit in single-mode fiber with Knill Dynamical Decoupling (KDD)
NASA Astrophysics Data System (ADS)
Gupta, Manish; Navarro, Erik; Moulder, Todd; Mueller, Jason; Balouchi, Ashkan; Brown, Katherine; Lee, Hwang; Dowling, Jonathan
2015-03-01
The implementation of information-theoretically secure cryptographic protocols is limited by decoherence caused by the birefringence of a single-mode fiber. We propose the Knill dynamical decoupling scheme, implemented using half-wave plates, to minimize decoherence, and show that a fidelity greater than 96% can be achieved even in the presence of rotation errors.
Intelligent complementary sliding-mode control for LUSMS-based X-Y-theta motion control stage.
Lin, Faa-Jeng; Chen, Syuan-Yi; Shyu, Kuo-Kai; Liu, Yen-Hung
2010-07-01
An intelligent complementary sliding-mode control (ICSMC) system using a recurrent wavelet-based Elman neural network (RWENN) estimator is proposed in this study to control the mover position of an X-Y-theta motion control stage driven by linear ultrasonic motors (LUSMs) for the tracking of various contours. By the addition of a complementary generalized error transformation, the complementary sliding-mode control (CSMC) can efficiently reduce the guaranteed ultimate bound of the tracking error by half compared with sliding-mode control (SMC) while using the saturation function. To estimate a lumped uncertainty online and replace the hitting control of the CSMC directly, the RWENN estimator is adopted in the proposed ICSMC system. In the RWENN, each hidden neuron employs a different wavelet function as an activation function to improve both the convergence precision and the convergence time compared with the conventional Elman neural network (ENN). The estimation laws of the RWENN are derived using the Lyapunov stability theorem to train the network parameters online. A robust compensator is also proposed to counter the uncertainties, including the approximation error, optimal parameter vectors, and higher-order terms in the Taylor series. Finally, experimental results for the tracking of various contours show that the tracking performance of the ICSMC system is significantly improved compared with the SMC and CSMC systems.
Halim, Dunant; Cheng, Li; Su, Zhongqing
2011-04-01
This work proposed an optimization approach for structural sensor placement to improve the performance of a vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was the additive sum of the modal virtual sensing error energy and the measurement noise energy; (ii) each modal virtual sensing error depended on the modal observability levels for both the structural sensing and the target acoustic virtual sensing; and (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis of a panel-cavity system demonstrated the importance of structural sensor placement for virtual sensing and active noise control performance, particularly for cavity-controlled modes.
Evaluation of the EGNOS service for topographic profiling in field geosciences
NASA Astrophysics Data System (ADS)
Kromuszczyńska, Olga; Mège, Daniel; Castaldo, Luigi; Gurgurewicz, Joanna; Makowska, Magdalena; Dębniak, Krzysztof; Jelínek, Róbert
2016-09-01
Consumer grade Global Positioning System (GPS) receivers are commonly used as a tool for data collection in many fields, including geosciences. One method for improving the GPS signal is provided by the Wide Area Differential GPS (WADGPS), which uses geostationary satellites to correct errors affecting the signal in real time. This study presents the results of three experiments aiming to determine whether the precision of field measurements made by such a receiver (Garmin GPSMAP 62s), operating in either the non-differential or the WADGPS differential mode, is suitable for characterizing geomorphological objects or landforms. The study assumes a typical fieldwork situation, in which time cannot be devoted in the field to long periods of stationary GPS measurements and the precision of the topographic profile is at least as important as, if not more important than, the positioning of individual points. The results show that, provided some rules are followed, the achieved precision may meet the nominal precision. The repeatability (coherence) of topographic profiles conducted at low speed (0.5 m/s) in mountain terrain is good, and vertical precision is improved in the WADGPS mode. Horizontal precision is equivalent in both modes. The GPS receiver should be operating at least 30 min prior to measuring and should not be turned off between measurements that the user would like to compare. If the GPS receiver needs to be reset between profiles to be compared, the measurement precision is higher in the non-differential GPS mode. Following these rules may improve measurement quality by 20% to 80%.
Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.
Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente
2014-07-15
Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
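The scoring step described above is commonly summarized as a Risk Priority Number (RPN), the product of the occurrence, severity, and detectability ratings. A hedged sketch with invented failure modes and scores (not the study's data):

```python
# Illustrative FMEA ranking: each failure mode is scored on 1-10 scales
# for occurrence, severity, and detectability; the product (the Risk
# Priority Number, RPN) ranks which failures to address first. The
# descriptions and scores below are invented for illustration.

failure_modes = [
    # (description, occurrence, severity, detectability)
    ("transcription error in medical order", 6, 9, 5),
    ("wrong electrolyte dose in formulation", 3, 10, 6),
    ("mislabeled PN bag at dispensing",       2, 8, 3),
]

def rpn(occurrence, severity, detectability):
    return occurrence * severity * detectability

ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for desc, o, s, d in ranked:
    print(f"RPN {rpn(o, s, d):4d}  {desc}")
```

The highest-RPN modes are the "high risk" failures that, in the study, drove the design of the checklist.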
Remediating Common Math Errors.
ERIC Educational Resources Information Center
Wagner, Rudolph F.
1981-01-01
Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)
Effect of Bypass Capacitor in Common-mode Noise Reduction Technique for Automobile PCB
NASA Astrophysics Data System (ADS)
Uno, Takanori; Ichikawa, Kouji; Mabuchi, Yuichi; Nakamura, Atushi
In this letter, we studied a common-mode noise reduction technique for in-vehicle electronic equipment, each unit comprising a large-scale integrated circuit (LSI), a printed circuit board (PCB), wiring harnesses, and a ground plane. We have improved the model circuit of the common-mode noise that flows into the wire harness to include the effect of bypass capacitors located near an LSI.
Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.
Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan
2015-01-01
Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment, and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for the presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation), and the most frequently treated tooth was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Of these, 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was the right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should take greater care to maintain the accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors, and special care should be taken when working on molars.
Han, Ming; Wang, Anbo
2006-05-01
Theoretical and experimental results have shown that mode power distribution (MPD) variations can significantly vary the phase of spectral fringes from multimode fiber extrinsic Fabry-Perot interferometric (MMF-EFPI) sensor systems, because different modes introduce different extra phase shifts when the modes reflected at the second surface couple back into the lead-in fiber end. This dependence of the fringe pattern on MPD could cause measurement errors in signal demodulation methods for white-light MMF-EFPI sensors that use the phase information of the fringes.
Exploring Common Misconceptions and Errors about Fractions among College Students in Saudi Arabia
ERIC Educational Resources Information Center
Alghazo, Yazan M.; Alghazo, Runna
2017-01-01
The purpose of this study was to investigate what common errors and misconceptions about fractions exist among Saudi Arabian college students. Moreover, the study aimed at investigating the possible explanations for the existence of such misconceptions among students. A researcher developed mathematical test aimed at identifying common errors…
Time synchronization of new-generation BDS satellites using inter-satellite link measurements
NASA Astrophysics Data System (ADS)
Pan, Junyang; Hu, Xiaogong; Zhou, Shanshi; Tang, Chengpan; Guo, Rui; Zhu, Lingfeng; Tang, Guifeng; Hu, Guangming
2018-01-01
Autonomous satellite navigation is based on the ability of a Global Navigation Satellite System (GNSS), such as Beidou, to estimate orbits and clock parameters onboard satellites using Inter-Satellite Link (ISL) measurements instead of tracking data from a ground monitoring network. This paper focuses on the time synchronization of new-generation Beidou Navigation Satellite System (BDS) satellites equipped with an ISL payload. Two modes of Ka-band ISL measurements, Time Division Multiple Access (TDMA) mode and the continuous link mode, were used onboard these BDS satellites. From a mathematical formulation of each measurement mode, the satellite clock offsets and geometric ranges were derived from the dual one-way measurements. Pseudoranges and clock offsets were then evaluated for the new-generation BDS satellites. The evaluation shows that the ranging accuracies of TDMA ISL and the continuous link are approximately 4 cm and 1 cm (root mean square, RMS), respectively. Both lead to ISL clock offset residuals of less than 0.3 ns (RMS). For further validation, time synchronization between these satellites and a ground control station maintaining the BDS system time (BDT) was conducted using L-band Two-way Satellite Time Frequency Transfer (TWSTFT). System errors in the ISL measurements were calibrated by comparing the derived clock offsets with the TWSTFT. The standard deviations of the estimated ISL system errors are less than 0.3 ns, and the calibrated ISL clock parameters are consistent with those of the L-band TWSTFT. For the regional BDS network, the addition of ISL measurements for medium orbit (MEO) BDS satellites increased the clock tracking coverage by more than 40% for each orbital revolution. As a result, the clock prediction error for satellite M1S improved from 3.59 to 0.86 ns (RMS), and that for satellite M2S improved from 1.94 to 0.57 ns (RMS), a significant improvement by a factor of 3-4.
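The dual one-way principle behind these clock offsets can be sketched as follows (the pseudorange values are invented for illustration; the real processing also handles transmit/receive hardware delays and light-time corrections):

```python
# Sketch of dual one-way ranging: two satellites exchange pseudoranges
# over the ISL. The half-sum of the two measurements isolates the
# geometric range, and the half-difference isolates the relative clock
# offset, which is the quantity used for time synchronization.

C = 299_792_458.0  # speed of light, m/s

def dual_one_way(rho_ab, rho_ba):
    """Return (geometric_range_m, clock_offset_s) from the two one-way pseudoranges."""
    geometric_range = 0.5 * (rho_ab + rho_ba)
    clock_offset = 0.5 * (rho_ab - rho_ba) / C
    return geometric_range, clock_offset

# Invented example: true range 36,000 km, relative clock offset 10 ns,
# which adds +/- c*dt to the two legs of the exchange.
true_range, dt = 36_000_000.0, 10e-9
rho_ab = true_range + C * dt
rho_ba = true_range - C * dt
print(dual_one_way(rho_ab, rho_ba))
```

This separation is what lets the ISL contribute clock estimates independently of orbit errors to first order.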
Low Power Operation of Temperature-Modulated Metal Oxide Semiconductor Gas Sensors.
Burgués, Javier; Marco, Santiago
2018-01-25
Mobile applications based on gas sensing present new opportunities for low-cost air quality monitoring, safety, and healthcare. Metal oxide semiconductor (MOX) gas sensors represent the most prominent technology for integration into portable devices, such as smartphones and wearables. Traditionally, MOX sensors have been continuously powered to increase the stability of the sensing layer. However, continuous power is not feasible in many battery-operated applications due to power consumption limitations or the intended intermittent device operation. This work benchmarks two low-power modes, duty-cycling and on-demand, against continuous powering. The duty-cycling mode periodically turns the sensors on and off and represents a trade-off between power consumption and stability. On-demand operation achieves the lowest power consumption by powering the sensors only while taking a measurement. Twelve thermally modulated SB-500-12 (FIS Inc. Jacksonville, FL, USA) sensors were exposed to low concentrations of carbon monoxide (0-9 ppm) with environmental conditions, such as ambient humidity (15-75% relative humidity) and temperature (21-27 °C), varying within the indicated ranges. Partial Least Squares (PLS) models were built using calibration data, and the prediction error in external validation samples was evaluated during the two weeks following calibration. We found that on-demand operation produced a deformation of the sensor conductance patterns, which led to an increase in the prediction error by almost a factor of 5 as compared to continuous operation (2.2 versus 0.45 ppm). Applying a 10% duty-cycling operation with 10-min periods reduced this prediction error to a factor of 2 (0.9 versus 0.45 ppm). The proposed duty-cycling powering scheme saved up to 90% energy as compared to the continuous operating mode.
This low-power mode may be advantageous for applications that do not require continuous and periodic measurements, and which can tolerate slightly higher prediction errors.
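The reported 90% saving follows directly from the duty cycle. A back-of-envelope sketch (the heater power below is an invented placeholder, not the SB-500-12 datasheet value):

```python
# Back-of-envelope energy comparison: a 10% duty cycle with 10-min
# periods powers the sensor heater 1 minute out of every 10, cutting
# heater energy by ~90% relative to continuous operation. The power
# figure is an invented placeholder for illustration.

def heater_energy_wh(power_w, hours, duty_cycle=1.0):
    return power_w * hours * duty_cycle

continuous = heater_energy_wh(0.9, 24.0)          # always on
duty_cycled = heater_energy_wh(0.9, 24.0, 0.10)   # 10% duty cycle
saving = 1.0 - duty_cycled / continuous
print(f"{saving:.0%} energy saved over 24 h")
```

The trade-off, as the study quantifies, is the roughly doubled prediction error relative to continuous powering.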
Corlett, P R; Canavan, S V; Nahum, L; Appah, F; Morgan, P T
2014-01-01
Dreams might represent a window on altered states of consciousness with relevance to psychotic experiences, where reality monitoring is impaired. We examined reality monitoring in healthy, non-psychotic individuals with varying degrees of dream awareness using a task designed to assess confabulatory memory errors - a confusion regarding reality whereby information from the past feels falsely familiar and does not constrain current perception appropriately. Confabulatory errors are common following damage to the ventromedial prefrontal cortex (vmPFC). Ventromedial function has previously been implicated in dreaming and dream awareness. In a hospital research setting, physically and mentally healthy individuals with high (n = 18) and low (n = 13) self-reported dream awareness completed a computerised cognitive task that involved reality monitoring based on familiarity across a series of task runs. Signal detection theory analysis revealed a more liberal acceptance bias in those with high dream awareness, consistent with the notion of overlap in the perception of dreams, imagination and reality. We discuss the implications of these results for models of reality monitoring and psychosis with a particular focus on the role of vmPFC in default-mode brain function, model-based reinforcement learning and the phenomenology of dreaming and waking consciousness.
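The signal detection theory analysis mentioned above can be sketched numerically: sensitivity d' and criterion c are computed from hit and false alarm rates, with a negative c indicating a liberal acceptance bias. The rates below are invented for illustration, not the study's data.

```python
# Sketch of a standard signal detection theory (SDT) computation:
# d' = z(hit rate) - z(false alarm rate) measures sensitivity, and
# c = -(z(hit rate) + z(false alarm rate)) / 2 measures response bias
# (negative c = liberal acceptance, as reported for high dream awareness).

from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def dprime_and_criterion(hit_rate, fa_rate):
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

d_high, c_high = dprime_and_criterion(0.85, 0.30)  # liberal responder (invented)
d_low, c_low = dprime_and_criterion(0.75, 0.15)    # conservative responder (invented)
print(f"high awareness: d'={d_high:.2f}, c={c_high:.2f}")
print(f"low awareness:  d'={d_low:.2f}, c={c_low:.2f}")
```

A group difference in c with comparable d' is the pattern consistent with a shifted acceptance bias rather than a change in memory sensitivity.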
An IMM-Aided ZUPT Methodology for an INS/DVL Integrated Navigation System.
Yao, Yiqing; Xu, Xiaosu; Xu, Xiang
2017-09-05
Inertial navigation system (INS)/Doppler velocity log (DVL) integration is the most common navigation solution for underwater vehicles. Due to the complex underwater environment, the velocity information provided by DVL always contains some errors. To improve navigation accuracy, zero velocity update (ZUPT) technology is considered, which is an effective algorithm for land vehicles to mitigate the navigation error during the pure INS mode. However, in contrast to ground vehicles, the ZUPT solution cannot be used directly for underwater vehicles because of the existence of the water current. In order to leverage the strengths of the ZUPT method and the INS/DVL solution, an interactive multiple model (IMM)-aided ZUPT methodology for the INS/DVL-integrated underwater navigation system is proposed. Both the INS/DVL and INS/ZUPT models are constructed and operated in parallel, with weights calculated according to their innovations and innovation covariance matrices. Simulations are conducted to evaluate the proposed algorithm. The results indicate that the IMM-aided ZUPT solution outperforms both the INS/DVL solution and the INS/ZUPT solution in the underwater environment, which can properly distinguish between the ZUPT and non-ZUPT conditions. In addition, during DVL outage, the effectiveness of the proposed algorithm is also verified.
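The innovation-based weighting idea can be sketched in scalar form (a simplification: the real IMM filter uses vector innovations, covariance matrices, and Markov mixing between models):

```python
import math

# Scalar sketch of IMM model weighting: each model's probability is
# updated by the Gaussian likelihood of its innovation given its
# innovation variance, then renormalized. A model whose prediction fits
# the measurement (small innovation) gains weight.

def gaussian_likelihood(innovation, innovation_var):
    return math.exp(-0.5 * innovation ** 2 / innovation_var) / math.sqrt(
        2.0 * math.pi * innovation_var)

def imm_weights(priors, innovations, innovation_vars):
    likes = [gaussian_likelihood(v, s) for v, s in zip(innovations, innovation_vars)]
    unnorm = [p * l for p, l in zip(priors, likes)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Invented numbers: the INS/DVL model fits the data well (small
# innovation) while the INS/ZUPT model does not, so the filter leans
# on INS/DVL for this update.
w_dvl, w_zupt = imm_weights([0.5, 0.5], [0.1, 2.0], [0.25, 0.25])
print(f"INS/DVL weight {w_dvl:.3f}, INS/ZUPT weight {w_zupt:.3f}")
```

When the vehicle is nearly stationary relative to the current, the situation reverses and the INS/ZUPT model's weight grows, which is how the method distinguishes ZUPT from non-ZUPT conditions.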
InSAR time series analysis of ALOS-2 ScanSAR data and its implications for NISAR
NASA Astrophysics Data System (ADS)
Liang, C.; Liu, Z.; Fielding, E. J.; Huang, M. H.; Burgmann, R.
2017-12-01
JAXA's ALOS-2 mission was launched on May 24, 2014. It operates at L-band and can acquire data in multiple modes. ScanSAR is the main operational mode and has a 350 km swath, somewhat larger than the 250 km swath of the SweepSAR mode planned for the NASA-ISRO SAR (NISAR) mission. ALOS-2 has been acquiring a wealth of L-band InSAR data. These data are of particular value in areas of dense vegetation and high relief. The InSAR technical development for ALOS-2 also enables preparation for the upcoming NISAR mission. We have been developing advanced InSAR processing techniques for ALOS-2 over the past two years. Here, we report on important issues in InSAR time series analysis using ALOS-2 ScanSAR data. First, we present ionospheric correction techniques for both regular ScanSAR InSAR and MAI (multiple aperture InSAR) ScanSAR InSAR. We demonstrate the large-scale ionospheric signals in the ScanSAR interferograms, which can be well mitigated by the correction techniques. Second, based on our technical development of burst-by-burst InSAR processing for ALOS-2 ScanSAR data, we find that the azimuth Frequency Modulation (FM) rate error is an important issue not only for MAI, but also for regular InSAR time series analysis. We identify phase errors caused by azimuth FM rate errors during the focusing process of ALOS-2 products. The consequence is mostly a range ramp in the InSAR time series result. This error exists in all of the time series results we have processed. We present correction techniques for this error following a theoretical analysis. After corrections, we present high quality ALOS-2 ScanSAR InSAR time series results in a number of areas. The development for ALOS-2 provides important implications for the NISAR mission. For example, we find that in most cases the relative azimuth shift caused by the ionosphere can be as large as 4 m in a large area imaged by ScanSAR. This azimuth shift is half of the 8 m azimuth resolution of the SweepSAR mode planned for NISAR, which implies that a good coregistration strategy for NISAR's SweepSAR mode is geometrical coregistration followed by MAI or spectral diversity analysis. In addition, our development provides implications for the processing and system parameter requirements of NISAR, such as the accuracy requirement of the azimuth FM rate and range timing.
Magnetic control of magnetohydrodynamic instabilities in tokamaks
Strait, Edward J.
2014-11-24
Externally applied, non-axisymmetric magnetic fields form the basis of several relatively simple and direct methods to control magnetohydrodynamic (MHD) instabilities in a tokamak, and most present and planned tokamaks now include a set of non-axisymmetric control coils for application of fields with low toroidal mode numbers. Non-axisymmetric applied fields are routinely used to compensate small asymmetries (δB/B ~ 10^-3 to 10^-4) of the nominally axisymmetric field, which otherwise can lead to instabilities through braking of plasma rotation and through direct stimulus of tearing modes or kink modes. This compensation may be feedback-controlled, based on the magnetic response of the plasma to the external fields. Non-axisymmetric fields are used for direct magnetic stabilization of the resistive wall mode — a kink instability with a growth rate slow enough that feedback control is practical. Saturated magnetic islands are also manipulated directly with non-axisymmetric fields, in order to unlock them from the wall and spin them to aid stabilization, or position them for suppression by localized current drive. Several recent scientific advances form the foundation of these developments in the control of instabilities. Most fundamental is the understanding that stable kink modes play a crucial role in the coupling of non-axisymmetric fields to the plasma, determining which field configurations couple most strongly, how the coupling depends on plasma conditions, and whether external asymmetries are amplified by the plasma. A major advance for the physics of high-beta plasmas (β = plasma pressure/magnetic field pressure) has been the understanding that drift-kinetic resonances can stabilize the resistive wall mode at pressures well above the ideal-MHD stability limit, but also that such discharges can be very sensitive to external asymmetries.
The common physics of stable kink modes has brought significant unification to the topics of static error fields at low beta and resistive wall modes at high beta. Furthermore, these and other scientific advances, and their application to control of MHD instabilities, will be reviewed with emphasis on the most recent results and their applicability to ITER.
Concurrent remote entanglement with quantum error correction against photon losses
NASA Astrophysics Data System (ADS)
Roy, Ananda; Stone, A. Douglas; Jiang, Liang
2016-09-01
Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.
Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment
ERIC Educational Resources Information Center
Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc
2014-01-01
In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three…
Radiation Tests on 2Gb NAND Flash Memories
NASA Technical Reports Server (NTRS)
Nguyen, Duc N.; Guertin, Steven M.; Patterson, J. D.
2006-01-01
We report on SEE and TID tests of highly scaled Samsung 2-Gbit flash memories. Both in-situ and biased interval irradiations were used to characterize the total accumulated dose failure response. The radiation-induced failures can be categorized as follows: single event upset (SEU) read errors in biased and unbiased modes, write errors, and single-event functional interrupt (SEFI) failures.
Waffle mode error in the AEOS adaptive optics point-spread function
NASA Astrophysics Data System (ADS)
Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.
2003-02-01
Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade, and have the potential to revolutionize the kinds of science done with 4-5m class ground-based telescopes. However, provided sufficient detailed study and analysis, existing AO systems can be improved beyond their original specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront sensing (WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance are already in existence. However, in terms of AO system performance improvements, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems, and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I-band, and show that the magnitude of the `waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error, and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially-illuminated WFS subapertures.
The Language of Scholarship: How to Rapidly Locate and Avoid Common APA Errors.
Freysteinson, Wyona M; Krepper, Rebecca; Mellott, Susan
2015-10-01
This article is relevant for nurses and nursing students who are writing scholarly documents for work, school, or publication and who have a basic understanding of American Psychological Association (APA) style. Common APA errors on the reference list and in citations within the text are reviewed. Methods to quickly find and reduce those errors are shared. Copyright 2015, SLACK Incorporated.
ERIC Educational Resources Information Center
Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki
2013-01-01
In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…
A Framework for Modeling Human-Machine Interactions
NASA Technical Reports Server (NTRS)
Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)
1996-01-01
Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.
A study of attitude control concepts for precision-pointing non-rigid spacecraft
NASA Technical Reports Server (NTRS)
Likins, P. W.
1975-01-01
Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and external disturbance effects. A hybrid coordinate formulation using assumed mode shapes, rather than the usual finite element approach, is also provided.
NASA Astrophysics Data System (ADS)
Gao, Cheng-Yan; Wang, Guan-Yu; Zhang, Hao; Deng, Fu-Guo
2017-01-01
We present a self-error-correction spatial-polarization hyperentanglement distribution scheme for N-photon systems in a hyperentangled Greenberger-Horne-Zeilinger state over arbitrary collective-noise channels. In our scheme, the errors of spatial entanglement can be first averted by encoding the spatial-polarization hyperentanglement into the time-bin entanglement with identical polarization and defined spatial modes before it is transmitted over the fiber channels. After transmission over the noisy channels, the polarization errors introduced by the depolarizing noise can be corrected resorting to the time-bin entanglement. Finally, the parties in quantum communication can in principle share maximally hyperentangled states with a success probability of 100%.
Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error
Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee
2017-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146
Ozone Profile Retrievals from the OMPS on Suomi NPP
NASA Astrophysics Data System (ADS)
Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.
2017-12-01
We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals using an optimal estimation based inversion method in the 302.5-340 nm fitting window. The evaluation of available slit functions demonstrates that preflight-measured slit functions represent OMPS measurements better than derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength and cross-track dependent biases, which produce serious cross-track striping errors in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply "soft calibration" to OMPS radiances. With the soft calibration the amplitude of fitting residuals decreases from 1% to 0.2% over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes where the amplitude of fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from standard deviations of the fitting residuals. The derived error in the Huggins band (~0.1%) is 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals, maximizing measurement information and stabilizing our fitting residuals. The final precision of the fitting residuals is less than 0.1% in the low/mid latitudes, with about 1 degree of freedom for signal for tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals.
To assess whether the quality of OMPS ozone retrievals is acceptable for scientific use, we will characterize OMPS ozone profile retrievals, present an error analysis, and validate retrievals using a reference dataset. From OMPS NM measurements alone, useful information on the vertical distribution of ozone is limited to below 40 km due to the absence of Hartley-band ozone wavelengths. This shortcoming will be addressed by joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250 to 310 nm range.
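The soft calibration and common mode correction described above both amount to estimating the systematic component of the fitting residuals and removing it. A minimal toy sketch of that idea follows; the residuals here are synthetic, and the real correction is wavelength- and cross-track-dependent rather than this simple scene average.

```python
import numpy as np

rng = np.random.default_rng(5)
n_scenes, n_wl = 400, 50
# Synthetic residuals: a ~1% wavelength-dependent systematic bias
# plus 0.2% random noise per scene (both invented for illustration)
systematic = 0.01 * np.sin(np.linspace(0.0, 6.0, n_wl))
residuals = systematic + 0.002 * rng.standard_normal((n_scenes, n_wl))

# Averaging over many scenes isolates the systematic part; subtracting it
# leaves only the random noise floor in the corrected residuals
common_mode = residuals.mean(axis=0)
corrected = residuals - common_mode
```

After the correction, the standard deviation of the residuals drops to roughly the random noise level, mirroring the 1% to 0.2% reduction reported in the abstract.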
NASA Astrophysics Data System (ADS)
Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.
2017-11-01
We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L ) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤2 as well as l ≤3 harmonic modes. Using the l ≤3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.
Bandpass mismatch error for satellite CMB experiments I: estimating the spurious signal
NASA Astrophysics Data System (ADS)
Thuong Hoang, Duc; Patanchon, Guillaume; Bucher, Martin; Matsumura, Tomotake; Banerji, Ranajoy; Ishino, Hirokazu; Hazumi, Masashi; Delabrouille, Jacques
2017-12-01
Future Cosmic Microwave Background (CMB) satellite missions aim to use the B mode polarization to measure the tensor-to-scalar ratio r with a sensitivity σ_r ≲ 10^-3. Achieving this goal will not only require sufficient detector array sensitivity but also unprecedented control of all systematic errors inherent in CMB polarization measurements. Since polarization measurements derive from differences between observations at different times and from different sensors, detector response mismatches introduce leakages from intensity to polarization and thus lead to a spurious B mode signal. Because the expected primordial B mode polarization signal is dwarfed by the known unpolarized intensity signal, such leakages could contribute substantially to the final error budget for measuring r. Using simulations we estimate the magnitude and angular spectrum of the spurious B mode signal resulting from bandpass mismatch between different detectors. It is assumed here that the detectors are calibrated, for example using the CMB dipole, so that their sensitivity to the primordial CMB signal has been perfectly matched. Consequently the mismatch in the frequency bandpass shape between detectors introduces differences in the relative calibration of galactic emission components. We simulate this effect using a range of scanning patterns being considered for future satellite missions. We find that the spurious contribution to r from the reionization bump on large angular scales (l < 10) is ≈ 10^-3 assuming large detector arrays and 20 percent of the sky masked. We show how the amplitude of the leakage depends on the nonuniformity of the angular coverage in each pixel that results from the scan pattern.
Antonova, A A; Absatova, K A; Korneev, A A; Kurgansky, A V
2015-01-01
The production of drawing movements was studied in 29 right-handed children of 9 to 11 years old. The movements were sequences of horizontal and vertical linear strokes conjoined at right angles (open polygonal chains), referred to throughout the paper as trajectories. The length of a trajectory varied from 4 to 6 segments. The trajectories were presented visually to a subject in static (line drawing) and dynamic (moving cursor that leaves no trace) modes. The subjects were asked to draw (copy) a trajectory in response to a delayed go-signal (short click) as fast as possible without lifting the pen. The production latency time, the average movement duration along a trajectory segment, and the overall number of errors committed by a subject during trajectory production were analyzed. A comparison of children's data with similar data in adults (16 subjects) shows the following. First, a substantial reduction in error rate is observed in the age range between 9 and 11 years old for both static and dynamic modes of trajectory presentation, with children of 11 still committing more errors than adults. Second, the average movement duration shortens with age while the latency time tends to increase. Third, unlike the adults, the children of 9-11 do not show any difference in latency time between static and dynamic modes of visual presentation of trajectories. The difference in trajectory production between adults and children is attributed to the predominant involvement of on-line programming in children and pre-programming in adults.
Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna
2016-08-24
Many studies address the prevalence of medication errors but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on the nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors was made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common in less experienced nurses.
The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate access to guidelines or unclear organisational routines". Medication errors regarded as malpractice in Sweden were of the same character as medication errors worldwide. A complex interplay between individual and system factors often contributed to the errors.
First order error corrections in common introductory physics experiments
NASA Astrophysics Data System (ADS)
Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team
As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students and not much thought is paid to their sources. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.
An alternative to Guyan reduction of finite-element models
NASA Technical Reports Server (NTRS)
Lin, Jiguan Gene
1988-01-01
Structural modeling is a key part of structural system identification for large space structures. Finite-element structural models are commonly used in practice because of their general applicability and availability. The initial models generated by using a standard computer program such as NASTRAN, ANSYS, SUPERB, STARDYNE, STRUDL, etc., generally contain tens of thousands of degrees of freedom. The models must be reduced for purposes of identification. Not only does the magnitude of the identification effort grow exponentially as a function of the number of degrees of freedom, but numerical procedures may also break down because of accumulated round-off errors. Guyan reduction is usually applied after a static condensation. Misapplication of Guyan reduction can lead to serious modeling errors. This is unfortunate, since the accuracy of the original detailed finite-element model, which one tries very hard to achieve, is lost in the reduction. First, why and how Guyan reduction always causes loss of accuracy is examined. An alternative approach is then introduced. The alternative can be thought of as an improvement of Guyan reduction, the Rayleigh-Ritz method, and in particular the recent algorithm of Wilson, Yuan, and Dickens. Unlike Guyan reduction, the use of the alternative does not need any special insight, experience, or skill for partitioning the structural degrees of freedom. In addition to model condensation, this alternative approach can also be used for predicting analytically, quickly, and economically which structural modes are excitable by a force actuator at a given trial location. That is, in the excitation of the structural modes for identification, it can be used for guiding the placement of the force actuators.
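For readers unfamiliar with the reduction being criticized, Guyan (static) condensation can be sketched in a few lines. This is a generic illustration with hypothetical matrices, not the alternative algorithm the abstract proposes; note that the slave-DOF inertia is simply discarded by the transformation, which is precisely the source of the accuracy loss discussed above.

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Condense stiffness K and mass M onto the listed 'master' DOFs."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    # Transformation in the original DOF ordering: masters map to
    # themselves; slaves follow quasi-statically via -Kss^-1 Ksm.
    Phi = np.zeros((n, len(master)))
    Phi[master, np.arange(len(master))] = 1.0
    Phi[np.ix_(slave, range(len(master)))] = -np.linalg.solve(Kss, Ksm)
    # The reduced stiffness is exact for static loads on the masters;
    # the reduced mass is only approximate (slave inertia is lost).
    return Phi.T @ K @ Phi, Phi.T @ M @ Phi
```

For a chain of four unit springs fixed at one end and condensed onto the tip DOF, the reduced stiffness is the exact series stiffness 1/4, so static tip deflections match the full model; dynamic frequencies, however, inherit the mass-truncation error.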
Multi-frequency EIT system with radially symmetric architecture: KHU Mark1.
Oh, Tong In; Woo, Eung Je; Holder, David
2007-07-01
We describe the development of a multi-frequency electrical impedance tomography (EIT) system (KHU Mark1) with a single balanced current source and multiple voltmeters. It was primarily designed for imaging brain function with a flexible strategy for addressing electrodes and a frequency range of 10 Hz to 500 kHz. The maximal number of voltmeters is 64, and all of them can simultaneously acquire and demodulate voltage signals. Each voltmeter measures a differential voltage between a pair of electrodes. All voltmeters are configured in a radially symmetric architecture in order to optimize the routing of wires and minimize cross-talk. We adopted several techniques from existing EIT systems including digital waveform generation, a Howland current generator with a generalized impedance converter (GIC), digital phase-sensitive demodulation and tri-axial cables. New features of the KHU Mark1 system include multiple GIC circuits to maximize the output impedance of the current source at multiple frequencies. The voltmeter employs contact impedance measurements, data overflow detection, spike noise rejection, automatic gain control and programmable data averaging. The KHU Mark1 system measures both in-phase and quadrature components of trans-impedances. By using a script file describing an operating mode, the system setup can be easily changed. The performance of the developed multi-frequency EIT system was evaluated in terms of a common-mode rejection ratio, signal-to-noise ratio, linearity error and reciprocity error. Time-difference and frequency-difference images of a saline phantom with a banana object are presented showing a frequency-dependent complex conductivity of the banana. Future design of a more innovative system is suggested including miniaturization and wireless techniques.
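Digital phase-sensitive demodulation, one of the techniques the KHU Mark1 adopts, recovers the in-phase and quadrature components by multiplying the sampled voltage with reference sinusoids and averaging over an integer number of excitation cycles. A minimal sketch with made-up signal parameters (not the actual KHU Mark1 firmware):

```python
import numpy as np

fs = 1_000_000                 # hypothetical sampling rate, Hz
f0 = 10_000                    # hypothetical excitation frequency, Hz
t = np.arange(0, 0.01, 1 / fs)  # 10 ms window = exactly 100 cycles of f0

# Simulated measured voltage: 2.0 in-phase, 0.5 quadrature, plus noise
rng = np.random.default_rng(0)
v = 2.0 * np.cos(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * f0 * t)
v += 0.05 * rng.standard_normal(t.size)

# Multiply by the references and average; the factor 2 undoes the 1/2
# from averaging cos^2 (or sin^2) over whole cycles
in_phase = 2 * np.mean(v * np.cos(2 * np.pi * f0 * t))
quadrature = 2 * np.mean(v * np.sin(2 * np.pi * f0 * t))
```

Averaging over whole cycles makes the demodulator strongly reject noise and out-of-band interference, which is why the averaging window is tied to the excitation frequency.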
Alternate methods for FAAT S-curve generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufman, A.M.
The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES, and an unsatisfactory workaround is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this workaround. These errors are at least several dB-W/cm² at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
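The core quantity above, probability of effect as P(stress > strength) with lognormal stress and strength, can be sketched directly; under the lognormal assumption it also has a closed form against which a Monte Carlo estimate can be checked. The parameters below are invented for illustration and are not FAAT or ARES values.

```python
import numpy as np
from math import erf, sqrt

# Hypothetical lognormal parameters (log-medians and log-sigmas)
mu_stress, sig_stress = 1.0, 0.5
mu_strength, sig_strength = 1.5, 0.3

# Monte Carlo estimate of the probability that stress exceeds strength
rng = np.random.default_rng(1)
n = 200_000
stress = rng.lognormal(mu_stress, sig_stress, n)
strength = rng.lognormal(mu_strength, sig_strength, n)
p_effect = np.mean(stress > strength)

# Closed form: ln(stress) - ln(strength) is normal, so
# P(stress > strength) = Phi((mu_stress - mu_strength) / sqrt(s1^2 + s2^2))
z = (mu_stress - mu_strength) / sqrt(sig_stress**2 + sig_strength**2)
p_exact = 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

The mismatch the abstract describes arises when the expert-supplied mode and asymmetric variances do not map cleanly onto the two lognormal parameters used here.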
Reduced-Rank Array Modes of the California Current Observing System
NASA Astrophysics Data System (ADS)
Moore, Andrew M.; Arango, Hernan G.; Edwards, Christopher A.
2018-01-01
The information content of the ocean observing array spanning the U.S. west coast is explored using the reduced-rank array modes (RAMs) derived from a four-dimensional variational (4D-Var) data assimilation system covering a period of three decades. RAMs are an extension of the original formulation of array modes introduced by Bennett (1985) but in the reduced model state-space explored by the 4D-Var system, and reveal the extent to which this space is activated by the observations. The projection of the RAMs onto the empirical orthogonal functions (EOFs) of the 4D-Var background error correlation matrix provides a quantitative measure of the effectiveness of the measurements in observing the circulation. It is found that much of the space spanned by the background error covariance is unconstrained by the present ocean observing system. The RAM spectrum is also used to introduce a new criterion to prevent 4D-Var from overfitting the model to the observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Haotian; Duan, Fajie; Wu, Guoxiu
2014-11-15
The blade tip clearance is a parameter of great importance to guarantee the efficiency and safety of turbine engines. In this article, a laser ranging system designed for blade tip clearance measurement is presented. Multi-mode fiber is utilized for optical transmission to guarantee that enough optical power is received by the sensor probe. The model of the tiny sensor probe is presented. The error introduced by the optical path difference of different modes of the fiber is estimated, and the length of the fiber is limited to reduce this error. The measurement range in which the optical power received by the probe remains essentially unchanged is analyzed. Calibration experiments and dynamic experiments are conducted. The results of the calibration experiments indicate that the resolution of the system is about 0.02 mm and the range of the system is about 9 mm.
Phonons in two-dimensional soft colloidal crystals.
Chen, Ke; Still, Tim; Schoenholz, Samuel; Aptowicz, Kevin B; Schindler, Michael; Maggs, A C; Liu, Andrea J; Yodh, A G
2013-08-01
The vibrational modes of pristine and polycrystalline monolayer colloidal crystals composed of thermosensitive microgel particles are measured using video microscopy and covariance matrix analysis. At low frequencies, the Debye relation for two-dimensional harmonic crystals is observed in both crystal types; at higher frequencies, evidence for van Hove singularities in the phonon density of states is significantly smeared out by experimental noise and measurement statistics. The effects of these errors are analyzed using numerical simulations. We introduce methods to correct for these limitations, which can be applied to disordered systems as well as crystalline ones, and we show that application of the error correction procedure to the experimental data leads to more pronounced van Hove singularities in the pristine crystal. Finally, quasilocalized low-frequency modes in polycrystalline two-dimensional colloidal crystals are identified and demonstrated to correlate with structural defects such as dislocations, suggesting that quasilocalized low-frequency phonon modes may be used to identify local regions vulnerable to rearrangements in crystalline as well as amorphous solids.
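Covariance matrix analysis recovers harmonic mode frequencies from thermal displacement fluctuations: in the harmonic approximation each mode's displacement variance is k_B T / (m ω²), so the eigenvalues of the displacement covariance matrix give the frequencies. A minimal sketch with three fictitious uncoupled modes (units with k_B T = 1 and unit mass), standing in for the particle-tracking data of the experiment:

```python
import numpy as np

rng = np.random.default_rng(3)
k_springs = np.array([1.0, 4.0, 9.0])   # fictitious spring constants
# Thermal sampling of uncoupled harmonic modes: var(u_i) = kT / k_i
u = rng.standard_normal((100_000, 3)) / np.sqrt(k_springs)
C = u.T @ u / u.shape[0]                 # displacement covariance matrix
# Mode frequencies from covariance eigenvalues: omega = sqrt(kT / (m*lam))
omegas = np.sort(1.0 / np.sqrt(np.linalg.eigvalsh(C)))
```

The finite number of frames limits how well C is estimated, which is exactly the kind of measurement-statistics error the paper analyzes and corrects for; here, with 100,000 frames, the recovered frequencies are within a fraction of a percent of sqrt(k).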
Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.
Costa, Marcelo Azevedo; Braga, Antonio Padua; de Menezes, Benjamin Rodrigues
2012-09-01
The Pareto-optimality concept is used in this paper in order to represent a constrained set of solutions that are able to trade off the two main objective functions involved in neural networks supervised learning: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals. Formal proofs of convergence conditions are therefore presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since the final state can be reached through different trajectories, and states along the trajectory can be assessed individually against an additional objective function. Copyright © 2012 Elsevier Ltd. All rights reserved.
He, Jingjing; Zhou, Yibin; Guan, Xuefei; Zhang, Wei; Zhang, Weifang; Liu, Yongming
2016-08-16
Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for the prompt reconstruction of strain/stress responses at the hot spots of structures based on strain measurements at remote locations. The structural responses measured by a usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Two numerical examples (a two-span beam and a 19,956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam of a type commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions, and a detailed error analysis is also provided.
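A minimal numerical sketch of the modal-extrapolation idea (the mode-shape values and strains below are invented for illustration; the paper's actual transformation equations come from finite element modeling): strains measured at remote sensors are projected onto mode shapes at the sensor locations, and the resulting modal coordinates are mapped to an unmeasured hot spot.

```python
import numpy as np

# Assumed mode-shape values at 3 remote sensor locations (2 modes)
Phi_meas = np.array([[1.0, 0.5],
                     [0.8, -0.6],
                     [0.3, 0.9]])
# Assumed mode-shape values at the unmeasured hot spot
phi_hot = np.array([1.2, -0.4])

# Strains "measured" at the sensors, generated here from q_true = [1.0, 0.5]
eps_meas = np.array([1.25, 0.5, 0.75])

# Least-squares estimate of the modal coordinates from remote measurements
q, *_ = np.linalg.lstsq(Phi_meas, eps_meas, rcond=None)

# Reconstructed hot-spot strain: phi_hot @ q_true = 1.2 - 0.2 = 1.0
eps_hot = phi_hot @ q
```

In the paper, the modal responses come from empirical mode decomposition of measured signals rather than a synthetic least-squares fit, but the extrapolation step has this same shape.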
Spelling in adolescents with dyslexia: errors and modes of assessment.
Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc
2014-01-01
In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors made in a word and sentence dictation task by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as their mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (based on morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia. © Hammill Institute on Disabilities 2012.
NASA Astrophysics Data System (ADS)
Hiramatsu, Takashi; Komatsu, Eiichiro; Hazumi, Masashi; Sasaki, Misao
2018-06-01
Given observations of the B-mode polarization power spectrum of the cosmic microwave background (CMB), we can reconstruct the power spectra of primordial tensor modes from the early Universe without assuming a functional form such as a power-law spectrum. The shape of the reconstructed spectra can then be used to probe the origin of tensor modes in a model-independent manner. We use the Fisher matrix to calculate the covariance matrix of tensor power spectra reconstructed in bins. We find that the power spectra are best reconstructed at wave numbers in the vicinity of k ≈ 6 × 10^-4 and 5 × 10^-3 Mpc^-1, which correspond to the "reionization bump" at ℓ ≲ 6 and the "recombination bump" at ℓ ≈ 80 of the CMB B-mode power spectrum, respectively. The error bar between these two wave numbers is larger because of the lack of signal between the reionization and recombination bumps. The error bars increase sharply toward smaller (larger) wave numbers because of the cosmic variance (CMB lensing and instrumental noise). To demonstrate the utility of the reconstructed power spectra, we investigate whether we can distinguish between various sources of tensor modes, including those from the vacuum metric fluctuation and SU(2) gauge fields during single-field slow-roll inflation, open inflation, and massive gravity inflation. The results depend on the model parameters, but we find that future CMB experiments are sensitive to differences in these models. We make our calculation tool available online.
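The error-bar machinery described above can be sketched in a few lines (the Fisher matrix entries below are invented; a real calculation would integrate over the CMB B-mode likelihood): the covariance of the binned tensor power spectrum is the inverse of the Fisher matrix, and its off-diagonal structure encodes the bin-to-bin correlations.

```python
import numpy as np

# Assumed Fisher matrix for tensor power-spectrum amplitudes in 3 k bins
F = np.array([[4.0, 0.5, 0.0],
              [0.5, 2.0, 0.3],
              [0.0, 0.3, 1.0]])

cov = np.linalg.inv(F)               # bin covariance = F^{-1}
sigma = np.sqrt(np.diag(cov))        # 1-sigma error bar per bin
corr = cov / np.outer(sigma, sigma)  # bin-bin correlation matrix
```

The weak off-diagonal Fisher couplings assumed here translate into correlated error bars between neighboring bins, the same qualitative effect the abstract attributes to the gap between the reionization and recombination bumps.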
Interface evaluation for soft robotic manipulators
NASA Astrophysics Data System (ADS)
Moore, Kristin S.; Rodes, William M.; Csencsits, Matthew A.; Kwoka, Martha J.; Gomer, Joshua A.; Pagano, Christopher C.
2006-05-01
The results of two usability experiments evaluating an interface for the operation of OctArm, a biologically inspired robotic arm modeled after an octopus tentacle, are reported. Because such 'continuum' robotic limbs present many degrees of freedom (DOF) for the operator to control, they pose unique challenges for human operators, as their motions do not map intuitively. Two modes have been developed to control the arm and reduce the DOF under the explicit direction of the operator. In coupled velocity (CV) mode, a joystick controls changes in arm curvature. In end-effector (EE) mode, a joystick controls the arm by moving the position of an endpoint along a straight line. In Experiment 1, participants used the two modes to grasp objects placed at different locations in a virtual reality modeling language (VRML) simulation. Objective measures of performance and subjective preferences were recorded. Results revealed lower grasp times and a subjective preference for the CV mode. Recommendations for improving the interface included providing additional feedback and implementing an error recovery function. In Experiment 2, only the CV mode was tested, with improved participant training and several changes to the interface. The error recovery function was implemented, allowing participants to reverse through previously attained positions. The mean time to complete the trials in the second usability test was reduced by more than 4 minutes compared with the first, confirming that the interface changes improved performance. The results of these tests will be incorporated into future versions of the arm and will inform future usability tests.
Generation of a crowned pinion tooth surface by a surface of revolution
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Zhang, J.; Handschuh, R. F.
1988-01-01
A method of generating crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc sec for the numerical examples). Tooth contact analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and determine the bearing contact.
Hitti, Eveline; Tamim, Hani; Bakhti, Rinad; Zebian, Dina; Mufarrij, Afif
2017-01-01
Introduction: Medication errors are common, with studies reporting at least one error per patient encounter. At hospital discharge, medication error rates vary from 15%-38%. However, studies assessing the effect of an internally developed electronic (E)-prescription system at discharge from an emergency department (ED) remain comparatively few. Additionally, commercially available electronic solutions are cost-prohibitive in many resource-limited settings. We assessed the impact of introducing an internally developed, low-cost E-prescription system, with a list of commonly prescribed medications, on prescription error rates at discharge from the ED, compared to handwritten prescriptions. Methods: We conducted a pre- and post-intervention study comparing error rates in a randomly selected sample of discharge prescriptions (handwritten versus electronic) during the five months before and four months after the introduction of the E-prescription. The internally developed E-prescription system included a list of 166 commonly prescribed medications with the generic name, strength, dose, frequency, and duration. We included a total of 2,883 prescriptions in this study: 1,475 in the pre-intervention phase were handwritten (HW) and 1,408 in the post-intervention phase were electronic. We calculated rates of 14 different errors and compared them between the pre- and post-intervention periods. Results: Overall, E-prescriptions contained fewer errors than HW prescriptions. Specifically, E-prescriptions reduced missing-dose errors (11.3% to 4.3%, p<0.0001), missing-frequency errors (3.5% to 2.2%, p=0.04), missing-strength errors (32.4% to 10.2%, p<0.0001), and legibility errors (0.7% to 0.2%, p=0.005). E-prescriptions, however, were associated with a significant increase in duplication errors, specifically with home medications (1.7% to 3%, p=0.02).
Conclusion A basic, internally developed E-prescription system, featuring commonly used medications, effectively reduced medication errors in a low-resource setting where the costs of sophisticated commercial electronic solutions are prohibitive. PMID:28874948
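The pre/post comparisons reported above are comparisons of two proportions; a minimal stdlib-only sketch of such a test, using the missing-dose figures with counts reconstructed from the quoted percentages (so treat the exact values as approximate):

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided pooled z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Missing-dose errors: ~11.3% of 1,475 handwritten vs ~4.3% of 1,408 electronic
z, p = two_proportion_z(round(0.113 * 1475), 1475, round(0.043 * 1408), 1408)
```

The resulting z statistic is large and the p-value is far below 0.0001, consistent with the significance level the study reports for this comparison.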
Relationship Between Locked Modes and Disruptions in the DIII-D Tokamak
NASA Astrophysics Data System (ADS)
Sweeney, Ryan
This thesis is organized into three body chapters: (1) the first use of naturally rotating tearing modes to diagnose intrinsic error fields is presented with experimental results from the EXTRAP T2R reversed field pinch, (2) a large-scale study of locked modes (LMs) with rotating precursors in the DIII-D tokamak is reported, and (3) an in-depth study of LM-induced thermal collapses on a few DIII-D discharges is presented. The amplitude of naturally rotating tearing modes (TMs) in EXTRAP T2R is modulated in the presence of a resonant field (given by the superposition of the resonant intrinsic error field, and, possibly, an applied, resonant magnetic perturbation (RMP)). By scanning the amplitude and phase of the RMP and observing the phase-dependent amplitude modulation of the resonant, naturally rotating TM, the corresponding resonant error field is diagnosed. A rotating TM can decelerate and lock in the laboratory frame, under the effect of an electromagnetic torque due to eddy currents induced in the wall. These locked modes often lead to a disruption, where energy and particles are lost from the equilibrium configuration on a timescale of a few to tens of milliseconds in the DIII-D tokamak. In fusion reactors, disruptions pose a problem for the longevity of the reactor. Thus, learning to predict and avoid them is important. A database was developed consisting of ~2000 DIII-D discharges exhibiting TMs that lock. The database was used to study the evolution, the nonlinear effects on equilibria, and the disruptivity of locked and quasi-stationary modes with poloidal and toroidal mode numbers m = 2 and n = 1 at DIII-D. The analysis of 22,500 discharges shows that more than 18% of disruptions present signs of locked or quasi-stationary modes with rotating precursors.
A parameter formulated by the plasma internal inductance li divided by the safety factor at 95% of the toroidal flux, q95, is found to exhibit predictive capability over whether a locked mode will cause a disruption or not, and does so up to hundreds of milliseconds before the disruption. Within 20 ms of the disruption, the shortest distance between the island separatrix and the unperturbed last closed flux surface, referred to as d_edge, performs comparably to li/q95 in its ability to discriminate disruptive locked modes, and it also correlates well with the duration of the locked mode. On average, and within errors, the n=1 perturbed field grows exponentially in the final 50 ms before a disruption; however, the island width cannot discern whether a LM will disrupt or not up to 20 ms before the disruption. A few discharges are selected to analyze the evolution of the electron temperature profile in the presence of multiple coexisting locked modes during partial and full thermal quenches. Partial thermal quenches are often an initial, distinct stage in the full thermal quench caused by radiation, conduction, or convection losses. Here we explore the fundamental mechanism that causes the partial quench. Near the onset of partial thermal quenches, locked islands are observed to align in a unique way, or island widths are observed to grow above a threshold. Energy analysis on one discharge suggests that about half of the energy is lost in the divertor region. In discharges with minimum values of the safety factor above ~1.2, and with current profiles expected to be classically stable, locked modes are observed to self-stabilize by inducing a full thermal quench, possibly by double tearing modes that remove the pressure gradient across the island, thus removing the neoclassical drive.
Medical error and related factors during internship and residency.
Ahmadipour, Habibeh; Nahid, Mortazavi
2015-01-01
It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed using SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. Among residents, the most common error was misdiagnosis; among interns, the most common errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.
Introduction to the Application of Web-Based Surveys.
ERIC Educational Resources Information Center
Timmerman, Annemarie
This paper discusses some basic assumptions and issues concerning web-based surveys. Discussion includes: assumptions regarding cost and ease of use; disadvantages of web-based surveys, concerning the inability to compensate for four common errors of survey research: coverage error, sampling error, measurement error and nonresponse error; and…
Neural Network Burst Pressure Prediction in Composite Overwrapped Pressure Vessels
NASA Technical Reports Server (NTRS)
Hill, Eric v. K.; Dion, Seth-Andrew T.; Karl, Justin O.; Spivey, Nicholas S.; Walker, James L., II
2007-01-01
Acoustic emission (AE) data were collected during the hydroburst testing of eleven 15-inch-diameter filament-wound composite overwrapped pressure vessels. A neural network burst pressure prediction was generated from the resulting AE amplitude data. The bottles shared commonality of graphite fiber, epoxy resin, and cure time. Individual bottles varied by cure mode (rotisserie versus static oven curing), type of inflicted damage, temperature of the pressurant, and pressurization scheme. Three categorical variables were selected to represent undamaged bottles, impact-damaged bottles, and bottles with lacerated hoop fibers. This categorization, along with the removal of the AE data from the disbonding noise between the aluminum liner and the composite overwrap, allowed the prediction of burst pressures in all three sets of bottles using a single backpropagation neural network. The worst-case error was 3.38 percent.
Multi-mode sliding mode control for precision linear stage based on fixed or floating stator.
Fang, Jiwen; Long, Zhili; Wang, Michael Yu; Zhang, Lufan; Dai, Xufei
2016-02-01
This paper presents the control performance of a linear motion stage driven by a Voice Coil Motor (VCM). Unlike a conventional VCM, the stator of this VCM is adjustable: it can be configured as a floating stator or a fixed stator. A Multi-Mode Sliding Mode Control (MMSMC) scheme, comprising a conventional Sliding Mode Control (SMC) and an Integral Sliding Mode Control (ISMC), is designed to control the linear motion stage. The control is switched between SMC and ISMC based on an error threshold. To eliminate chattering, a smooth function is adopted in place of the signum function. The experimental results with the floating stator show that the positioning accuracy and tracking performance of the linear motion stage are improved with the MMSMC approach.
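A toy illustration of the chattering fix mentioned above (a first-order plant with arbitrarily chosen gains, not the paper's VCM stage model): replacing the discontinuous signum in the sliding-mode law with a smooth tanh boundary layer still drives the error into a small neighborhood of zero, without high-frequency switching.

```python
import numpy as np

def smc_control(e, k=2.0, eps=0.05):
    """Sliding-mode control law with tanh(e/eps) replacing sign(e)
    to suppress chattering near the sliding surface."""
    return -k * np.tanh(e / eps)

# Drive a first-order plant x' = u toward the reference x_ref = 0
x, dt = 1.0, 0.001
for _ in range(5000):
    x += smc_control(x) * dt
# x ends up inside a small boundary layer around zero
```

The boundary-layer width eps trades residual error against smoothness: a smaller eps approaches the ideal signum behavior (and its chattering), while a larger eps leaves a larger steady-state neighborhood.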
Sources of error in the retracted scientific literature.
Casadevall, Arturo; Steen, R Grant; Fang, Ferric C
2014-09-01
Retraction of flawed articles is an important mechanism for correction of the scientific literature. We recently reported that the majority of retractions are associated with scientific misconduct. In the current study, we focused on the subset of retractions for which no misconduct was identified, in order to identify the major causes of error. Analysis of the retraction notices for 423 articles indexed in PubMed revealed that the most common causes of error-related retraction are laboratory errors, analytical errors, and irreproducible results. The most common laboratory errors are contamination and problems relating to molecular biology procedures (e.g., sequencing, cloning). Retractions due to contamination were more common in the past, whereas analytical errors are now increasing in frequency. A number of publications that have not been retracted despite being shown to contain significant errors suggest that barriers to retraction may impede correction of the literature. In particular, few cases of retraction due to cell line contamination were found despite recognition that this problem has affected numerous publications. An understanding of the errors leading to retraction can guide practices to improve laboratory research and the integrity of the scientific literature. Perhaps most important, our analysis has identified major problems in the mechanisms used to rectify the scientific literature and suggests a need for action by the scientific community to adopt protocols that ensure the integrity of the publication process. © FASEB.
Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent
2015-01-01
Medication errors in oncology may cause severe clinical problems due to the low therapeutic indices and high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and their underlying factors during chemotherapy preparation and administration, based on a systematic survey designed to reflect oncology nurses' experience. This study was conducted in 18 adult chemotherapy units with the volunteer participation of 206 nurses. A survey was developed by the primary investigators, and medication errors (MAEs) were defined as preventable errors during medication prescription, ordering, preparation, or administration. The survey consisted of 4 parts: demographic features of the nurses; workload of the chemotherapy units; errors and their estimated monthly number during chemotherapy preparation and administration; and evaluation of the possible factors responsible for MAEs. The survey was conducted by face-to-face interview, and data analyses were performed with descriptive statistics. Chi-square or Fisher exact tests were used for comparative analysis of categorical data. Some 83.4% of the 210 nurses reported one or more errors during chemotherapy preparation and administration. Prescribing or ordering of wrong doses by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and insufficient staffing (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, with prescribing and ordering errors the most common.
Further studies must address strategies to minimize medication errors in patients receiving chemotherapy, determine sufficient protective measures, and establish multistep control mechanisms.
NASA Astrophysics Data System (ADS)
Davis, A. B.; Qu, Z.
2014-12-01
The main goal of NASA's OCO-2 mission is to perform XCO2 column measurements from space with an unprecedented (~1 ppm) precision and accuracy that will enable modelers to globally map CO2 sources and sinks. To achieve this goal, the mission is critically dependent on XCO2 product validation that, in turn, is highly dependent on successful use of OCO-2's "target mode" data acquisition. In target mode, OCO-2 rotates in such a way that, as long as it is above the horizon, it looks at a Total Carbon Column Observing Network (TCCON) station equipped with a powerful Fourier transform spectrometer. TCCON stations measure, among other things, XCO2 by looking straight at the Sun. This translates to a far simpler forward model for TCCON than for OCO-2. In the ideal world, OCO-2's spectroscopic signals result from the cumulative gaseous absorption for one direct transmission of sunlight to the ground (as for TCCON), followed by one diffuse reflection and one direct transmission to the instrument, at a variety of viewing angles in target mode. In the real world, all manner of multiple surface reflections and/or scatterings contribute to the signal. See figure. In the idealized world of the OCO-2 operational forward model (used in nadir, glint, and target modes), the horizontal variability of the scattering atmosphere and reflecting surface is ignored, leading to the adoption of a 1D vector radiative transfer (vRT) model. This is the source of forward model error that we are investigating, with a focus on target mode. In principle, atmospheric variability in the horizontal plane, largely due to clouds, can be avoided by careful screening. Also, it is straightforward to account for the angular variability of the surface reflection model in the 1D vRT framework. But it is not clear how unavoidable horizontal variations of the surface reflectivity affect the OCO-2 signal, even if the reflection were isotropic (Lambertian).
To characterize this OCO-2 "adjacency" effect, we use a simple surface variability model with a single spatial frequency in each direction, and a single albedo contrast at a time, for realistic aerosol and gaseous profiles. This specific 3D RT error is compared with other documented forward model errors and translated into XCO2 error in ppm, for programmatic consideration and eventual mitigation.
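The surface-variability model described above can be sketched as follows (the mean albedo, contrast, and spatial frequencies are assumed placeholders, not the study's values): a mean Lambertian albedo modulated by a single spatial frequency in each direction with a single contrast parameter.

```python
import numpy as np

def albedo(x, y, a0=0.2, contrast=0.5, kx=2 * np.pi, ky=2 * np.pi):
    """Surface reflectivity with one spatial frequency per direction
    and a single albedo contrast about the mean value a0."""
    return a0 * (1.0 + contrast * np.sin(kx * x) * np.sin(ky * y))

# Sampling the field over one full period recovers the mean albedo a0
xs = ys = np.linspace(0.0, 1.0, 201)[:-1]
X, Y = np.meshgrid(xs, ys)
mean_albedo = albedo(X, Y).mean()
```

Scanning the contrast and frequency parameters of such a field through a radiative transfer model is one way to bound the adjacency-driven forward-model error the abstract describes.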
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comandi, G.L.; Toncelli, R.; Chiofalo, M.L.
'Galileo Galilei on the ground' (GGG) is a fast rotating differential accelerometer designed to test the equivalence principle (EP). Its sensitivity to differential effects, such as the effect of an EP violation, depends crucially on the capability of the accelerometer to reject all effects acting in common mode. By applying the theoretical and simulation methods reported in Part I of this work, and tested therein against experimental data, we predict the occurrence of an enhanced common mode rejection of the GGG accelerometer. We demonstrate that the best rejection of common mode disturbances can be tuned in a controlled way by varying the spin frequency of the GGG rotor.
Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance
NASA Astrophysics Data System (ADS)
Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman
2016-02-01
The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance C_ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc^-1, and can be even ~200 times larger at k ~ 5 Mpc^-1. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc^-1), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
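The two contributions to the error variance described above can be written schematically (the bin values, mode counts, and trispectrum terms below are invented): the Gaussian part falls off inversely with the number of independent modes in the bin, while the trispectrum adds a term that does not shrink with mode count in the same way.

```python
import numpy as np

def gaussian_ps_variance(P, n_modes):
    """Gaussian-field error variance of a binned power-spectrum estimate;
    it scales inversely with the number of modes in the bin."""
    return P**2 / n_modes

P = np.array([1.0, 0.5, 0.2])                      # assumed P(k) in 3 bins
n_modes = np.array([100, 400, 1600])               # assumed modes per bin
trispectrum_term = np.array([0.005, 0.002, 0.02])  # assumed values

total_variance = gaussian_ps_variance(P, n_modes) + trispectrum_term
# In the last bin the trispectrum term dominates the Gaussian one,
# mimicking the high-k behavior reported in the abstract.
```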
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayler, E; Harrison, A; Eldredge-Hindy, H
Purpose: Valencia and Leipzig applicators (VLAs) are single-channel brachytherapy surface applicators used to treat skin lesions up to 2 cm in diameter. Source dwell times can be calculated and entered manually after clinical set-up or ultrasound. This procedure differs dramatically from CT-based planning; the novelty and unfamiliarity could lead to severe errors. To build layers of safety and ensure quality, a multidisciplinary team created a protocol and applied Failure Modes and Effects Analysis (FMEA) to the clinical procedure for HDR VLA skin treatments. Methods: A team including physicists, physicians, nurses, therapists, residents, and administration developed a clinical procedure for VLA treatment. The procedure was evaluated using FMEA. Failure modes were identified and scored by severity, occurrence, and detection, and the clinical procedure was revised to address high-scoring process nodes. Results: Several key components were added to the clinical procedure to minimize risk priority numbers (RPN). (1) Treatments are reviewed at weekly QA rounds, where physicians discuss diagnosis, prescription, applicator selection, and set-up; peer review reduces the likelihood of an inappropriate treatment regime. (2) A template for HDR skin treatments was established in the clinical EMR system to standardize treatment instructions; this reduces the chance of miscommunication between the physician and planning physicist and increases the detectability of an error during the physics second check. (3) A screen check was implemented during the second check to increase the detectability of an error. (4) To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display, which facilitates data entry and verification. (5) VLAs are color-coded and labeled to match the EMR prescriptions, which simplifies in-room selection and verification.
Conclusion: Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures.
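The FMEA scoring used above reduces to a simple product; here is a sketch with made-up failure modes and ratings (the actual process nodes and scores are not given in the abstract):

```python
# FMEA risk priority number: RPN = severity * occurrence * detection,
# each rated here on a 1-10 scale (all entries hypothetical)
failure_modes = {
    "wrong dwell time entered":     (9, 4, 6),
    "wrong applicator selected":    (8, 2, 3),
    "prescription miscommunicated": (7, 3, 5),
}

rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
ranked = sorted(rpn, key=rpn.get, reverse=True)  # highest-risk node first
```

Interventions like the peer review and screen check described above lower the occurrence or detection rating of a node, shrinking its RPN on the next pass of the analysis.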
The Relationship Between Technical Errors and Decision Making Skills in the Junior Resident
Nathwani, J. N.; Fiers, R.M.; Ray, R.D.; Witt, A.K.; Law, K. E.; DiMarco, S.M.; Pugh, C.M.
2017-01-01
Objective: The purpose of this study is to co-evaluate resident technical errors and decision-making capabilities during placement of a subclavian central venous catheter (CVC). We hypothesize that there will be significant correlations between scenario-based decision-making skills and technical proficiency in central line insertion. We also predict that residents will have problems in anticipating common difficulties and generating solutions associated with line placement. Design: Participants were asked to insert a subclavian central line on a simulator. After completion, residents were presented with a real-life patient photograph depicting CVC placement and asked to anticipate difficulties and generate solutions. Error rates were analyzed using chi-square tests and a 5% expected error rate. Correlations were sought by comparing technical errors and scenario-based decision making. Setting: This study was carried out at seven tertiary care centers. Participants: Study participants (N=46) consisted largely of first-year research residents who could be followed longitudinally. Second-year research and clinical residents were not excluded. Results: Six checklist errors were committed more often than anticipated. Residents performed an average of 1.9 errors, significantly more than the expected maximum of 1 error per person (t(44)=3.82, p<.001). The most common error was performing the procedure steps in the wrong order (28.5%, p<.001). Some residents (24%) had no errors, 30% committed one error, and 46% committed more than one error. The number of technical errors committed correlated negatively with the total numbers of commonly identified difficulties and generated solutions (r(33)=-.429, p=.021 and r(33)=-.383, p=.044, respectively). Conclusions: Almost half of the surgical residents committed multiple errors while performing subclavian CVC placement.
The correlation between technical errors and decision-making skills suggests a critical need to train residents in both technique and error management. ACGME Competencies: Medical Knowledge, Practice-Based Learning and Improvement, Systems-Based Practice. PMID:27671618
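The comparison of the observed mean error count against the expected one is a one-sample t-test; a stdlib-only sketch (the counts below are invented, not the study's per-resident data):

```python
from math import sqrt

def one_sample_t(xs, mu0):
    """One-sample t statistic for the mean of xs against a
    hypothesized mean mu0."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    return (mean - mu0) / sqrt(var / n)

# Hypothetical per-resident error counts tested against an expected mean of 1
t = one_sample_t([2, 1, 3, 0, 2], 1.0)
```

With the study's 45-strong sample and a mean of 1.9 errors, the same statistic reaches t(44)=3.82; the tiny illustrative sample here is only meant to show the computation.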
Creating a Test Validated Structural Dynamic Finite Element Model of the X-56A Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truong, Samson
2014-01-01
Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the Multi Utility Technology Test-bed, the X-56A aircraft, is the flight demonstration of active flutter suppression; therefore, in this study, the primary and secondary modes for structural model tuning are identified based on the flutter analysis of the X-56A aircraft. The ground vibration test-validated structural dynamic finite element model of the X-56A aircraft is created in this study. The structural dynamic finite element model of the X-56A aircraft is improved using a model tuning tool. In this study, two different weight configurations of the X-56A aircraft have been improved in a single optimization run. Frequency and the cross-orthogonality (mode shape) matrix were the primary focus for improvement, while other properties such as center of gravity location, total weight, and off-diagonal terms of the mass orthogonality matrix were used as constraints. The end result was an improved structural dynamic finite element model configuration for the X-56A aircraft. Improved frequencies and mode shapes in this study increased average flutter speeds of the X-56A aircraft by 7.6% compared to the baseline model.
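The mode-shape comparison that drives this kind of tuning can be illustrated with the unweighted Modal Assurance Criterion (MAC), a close relative of the mass-weighted cross-orthogonality matrix mentioned above. This is a sketch assuming NumPy, not the authors' tuning tool; values near 1 on the diagonal indicate well-correlated test and model modes.

```python
import numpy as np

def mac(phi_test, phi_fem):
    """Modal Assurance Criterion matrix between two mode-shape sets.
    Columns of phi_test / phi_fem are mode-shape vectors. Entry (i, j)
    near 1 means test mode i and FEM mode j are well correlated.
    Unweighted stand-in for a mass-weighted cross-orthogonality check."""
    num = np.abs(phi_test.T @ phi_fem) ** 2
    den = np.outer(np.sum(phi_test * phi_test, axis=0),
                   np.sum(phi_fem * phi_fem, axis=0))
    return num / den
```

Because each entry is normalized by the column norms, MAC is insensitive to mode-shape scaling, which is why it is convenient for comparing test and analysis modes measured in different units.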
NASA Astrophysics Data System (ADS)
De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo
2009-02-01
Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Detection of anatomical landmarks and bone surfaces is needed both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Surface point acquisition increases the invasiveness of the intervention and can be influenced by interposition of the soft-tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non-invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the need for US image segmentation, offer real-time signal processing, and require less invasive equipment. The system consists of a single-transducer US probe that is optically tracked, a pulser/receiver, an FPGA-based board responsible for logic control command generation and real-time signal processing, and three custom-made boards (signal acquisition, blanking, and synchronization). We propose a new calibration method for the US system. The experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed through MR and CT image acquisitions of the phantom. Point acquisition on the bone surface with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
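The geometry underlying A-mode ranging is simple pulse-echo timing: the depth of a reflector is half the round-trip travel distance. The sketch below is an illustration with a nominal soft-tissue sound speed, not the authors' FPGA signal-processing pipeline; the threshold detector is a naive stand-in for real echo detection.

```python
def first_echo_index(signal, threshold):
    """Index of the first sample whose magnitude exceeds the threshold
    (a naive stand-in for real-time A-mode echo detection)."""
    for i, s in enumerate(signal):
        if abs(s) >= threshold:
            return i
    return None

def depth_from_echo(time_of_flight_s, speed_of_sound_m_s=1540.0):
    """Round-trip pulse-echo time to reflector depth; 1540 m/s is a
    nominal average sound speed in soft tissue."""
    return speed_of_sound_m_s * time_of_flight_s / 2.0
```

For example, a 20 μs round-trip in water (about 1500 m/s) corresponds to a reflector 15 mm away.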
Nápoles, Anna M.; Santoyo-Olsson, Jasmine; Karliner, Leah S.; Gregorich, Steven E.; Pérez-Stable, Eliseo J.
2015-01-01
Background Limited English-proficient (LEP) patients suffer poorer quality of care and outcomes. Interpreters can ameliorate these disparities; however, evidence is lacking on the quality of different interpretation modes. Objective To compare the accuracy of interpretation for in-person professional (IP), professional videoconferencing (VC), and ad hoc (AH) interpretation. Design Cross-sectional study of transcribed, audiotaped primary care visits. Subjects 32 Spanish-speaking Latino patients; 14 clinicians. Measures Independent coding of transcripts by four coders (two were internists) for instances of accurate and inaccurate interpretation. The unit of analysis was a segment of continuous speech, or text unit (TU). Two internists independently verified instances of inaccurate interpretation and rated their clinical significance as clinically insignificant, mildly, moderately, or highly clinically significant. Results Accurate interpretation made up 70% of total coded TUs and inaccurate interpretation (errors) made up 30%. Inaccurate interpretation occurred at twice the rate for AH (54% of coded TUs) versus IP (25%) and VC (23%) interpretation, due to more errors of omission (p<0.001) and answers for the patient or clinician (p<0.001). The mean number of errors per visit was 27, with 7.1% of errors rated as moderately/highly clinically significant. In adjusted models, the odds of inaccurate interpretation were lower for IP (OR = −1.25, 95% CI −1.56, −0.95) and VC (OR = −1.05; 95% CI −1.26, −0.84) than for AH interpreted visits; the odds of a moderately/highly clinically significant error were lower for IP (OR = −0.06; 95% CI −1.05, 0.92) than for AH interpreted visits. Conclusions Inaccurate language interpretation in medical encounters is common and more frequent when untrained interpreters are used than with professional interpretation in person or via videoconferencing. Professional videoconferencing interpretation may increase access to higher-quality medical interpretation services. PMID:26465121
Sapkota, K; Pirouzian, A; Matta, N S
2013-01-01
Refractive error is a common cause of amblyopia. The aim of this study was to determine the prevalence of amblyopia and the pattern and types of refractive error in children with amblyopia in a tertiary eye hospital of Nepal. A retrospective chart review of children diagnosed with amblyopia in the Nepal Eye Hospital (NEH) from July 2006 to June 2011 was conducted. Children aged 13 or older, or who had any ocular pathology, were excluded. Cycloplegic refraction and an ophthalmological examination were performed for all children. The pattern of refractive error and the association between types of refractive error and types of amblyopia were determined. Amblyopia was found in 0.7% (440) of the 62,633 children examined in NEH during this period. All amblyopic eyes had refractive error. Fifty-six percent (248) of the patients were male, and the mean age was 7.74 ± 2.97 years. Anisometropia was the most common cause of amblyopia (p < 0.001). One third (29%) of the subjects had bilateral amblyopia due to high ametropia. Forty percent of eyes had severe amblyopia with visual acuity of 20/120 or worse. About two-thirds (59.2%) of the eyes had astigmatism. The prevalence of amblyopia in the Nepal Eye Hospital is 0.7%. Anisometropia is the most common cause of amblyopia. Astigmatism is the most common type of refractive error in amblyopic eyes. © NEPjOPH.
W-Band Circularly Polarized TE11 Mode Transducer
NASA Astrophysics Data System (ADS)
Zhan, Mingzhou; He, Wangdong; Wang, Lei
2018-06-01
This paper presents a balanced sidewall exciting approach to realize a circularly polarized TE11 mode transducer. We used a voltage vector transfer matrix to establish the relationship between input and output vectors, then analyzed amplitude and phase errors to estimate the isolation of the degenerate modes. A mode transducer with a sidewall exciter was designed based on the results. In the 88-100 GHz frequency range, the simulated axial ratio is less than 1.05 and the isolation of the linearly polarized TE11 mode is higher than 30 dBc. In back-to-back measurements, the return loss is generally greater than 20 dB with a typical insertion loss of 1.2 dB. Back-to-back transmission measurements are in excellent agreement with simulations.
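The connection between amplitude/phase errors of the two degenerate modes and the axial ratio can be sketched from the standard polarization-ellipse formula. This is an illustration of that relationship, not the paper's transfer-matrix analysis: for orthogonal field components with amplitudes a, b and phase difference δ, an axial ratio of 1 is perfect circular polarization.

```python
import math

def axial_ratio(a, b, delta_rad):
    """Axial ratio of the polarization ellipse formed by two orthogonal
    components with amplitudes a, b and phase difference delta_rad.
    AR = 1 is perfectly circular; AR -> inf is linear polarization."""
    c = math.sqrt(a**4 + b**4 + 2 * a**2 * b**2 * math.cos(2 * delta_rad))
    s = a**2 + b**2
    if s - c <= 0:
        return float('inf')  # degenerate case: linear polarization
    return math.sqrt((s + c) / (s - c))
```

With a 90° phase difference the axial ratio reduces to the amplitude imbalance a/b, so a simulated AR below 1.05 implies the two excited modes are balanced to within about 5% in amplitude.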
Fine-particle pH for Beijing winter haze as inferred from different thermodynamic equilibrium models
NASA Astrophysics Data System (ADS)
Song, Shaojie; Gao, Meng; Xu, Weiqi; Shao, Jingyuan; Shi, Guoliang; Wang, Shuxiao; Wang, Yuxuan; Sun, Yele; McElroy, Michael B.
2018-05-01
pH is an important property of aerosol particles but is difficult to measure directly. Several studies have estimated the pH values for fine particles in northern China winter haze using thermodynamic models (i.e., E-AIM and ISORROPIA) and ambient measurements. The reported pH values differ widely, ranging from close to 0 (highly acidic) to as high as 7 (neutral). In order to understand the reason for this discrepancy, we calculated pH values using these models with different assumptions with regard to model inputs and particle phase states. We find that the large discrepancy is due primarily to differences in the model assumptions adopted in previous studies. Calculations using only aerosol-phase composition as inputs (i.e., reverse mode) are sensitive to the measurement errors of ionic species, and inferred pH values exhibit a bimodal distribution, with peaks between -2 and 2 and between 7 and 10, depending on whether anions or cations are in excess. Calculations using total (gas plus aerosol phase) measurements as inputs (i.e., forward mode) are affected much less by these measurement errors. In future studies, the reverse mode should be avoided whereas the forward mode should be used. Forward-mode calculations in this and previous studies collectively indicate a moderately acidic condition (pH from about 4 to about 5) for fine particles in northern China winter haze, indicating further that ammonia plays an important role in determining this property. The assumed particle phase state, either stable (solid plus liquid) or metastable (only liquid), does not significantly impact pH predictions. The unrealistic pH values of about 7 in a few previous studies (using the standard ISORROPIA model and stable state assumption) resulted from coding errors in the model, which have been identified and fixed in this study.
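The fragility of reverse-mode calculations can be illustrated with a toy ion-balance estimate of particle H+. This is a deliberately naive sketch, not E-AIM or ISORROPIA, which solve full thermodynamic equilibria; it only shows why small measurement errors in ionic species can flip the inferred acidity between the two pH peaks described above.

```python
def ion_balance_h(cations_eq, anions_eq):
    """Naive reverse-mode H+ estimate (in equivalents) from measured
    aerosol ions: an anion excess is attributed to H+ (acidic), a cation
    excess implies the opposite. Illustrative only; real thermodynamic
    models (E-AIM, ISORROPIA) do far more than this charge balance."""
    return anions_eq - cations_eq

# A 5% anion overestimate already implies a strongly acidic particle,
# while the same error on the cation side implies the opposite regime:
acid_case = ion_balance_h(cations_eq=1.00, anions_eq=1.05)
base_case = ion_balance_h(cations_eq=1.05, anions_eq=1.00)
```

Forward-mode inputs (gas plus aerosol totals) avoid this sign flip because H+ is constrained by gas-particle partitioning rather than by the small difference of two measured quantities.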
Search for Long Period Solar Normal Modes in Ambient Seismic Noise
NASA Astrophysics Data System (ADS)
Caton, R.; Pavlis, G. L.
2016-12-01
We search for evidence of solar free oscillations (normal modes) in long period seismic data through multitaper spectral analysis of array stacks. This analysis is similar to that of Thomson & Vernon (2015), who used data from the quietest single stations of the global seismic network. Our approach is to use stacks of large arrays of noisier stations to reduce noise. Arrays have the added advantage of permitting the use of nonparametric statistics (jackknife errors) to provide objective error estimates. We used data from the Transportable Array, the broadband borehole array at Pinyon Flat, and the 3D broadband array in Homestake Mine in Lead, SD. The Homestake Mine array has 15 STS-2 sensors deployed in the mine that are extremely quiet at long periods due to stable temperatures and stable piers anchored to hard rock. The length of time series used ranged from 50 days to 85 days. We processed the data by low-pass filtering with a corner frequency of 10 mHz, followed by an autoregressive prewhitening filter and median stack. We elected to use the median instead of the mean in order to get a more robust stack. We then used G. Prieto's mtspec library to compute multitaper spectrum estimates on the data. We produce delete-one jackknife error estimates of the uncertainty at each frequency by computing median stacks of all data with one station removed. The results from the TA data show tentative evidence for several lines between 290 μHz and 400 μHz, including a recurring line near 379 μHz. This 379 μHz line is near the Earth mode 0T2 and the solar mode 5g5, suggesting that 5g5 could be coupling into the Earth mode. Current results suggest more statistically significant lines may be present in Pinyon Flat data, but additional processing of the data is underway to confirm this observation.
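The median stack with delete-one jackknife errors described above can be sketched as follows. This is an illustrative NumPy implementation of the generic statistical procedure, not the authors' processing chain (which also includes filtering, prewhitening, and mtspec multitaper estimation).

```python
import numpy as np

def median_stack_jackknife(station_spectra):
    """Median stack over stations plus a delete-one jackknife standard
    error per frequency. Input shape: (n_stations, n_freq)."""
    x = np.asarray(station_spectra, dtype=float)
    n = x.shape[0]
    full = np.median(x, axis=0)
    # Recompute the stack n times, each with one station removed.
    loo = np.array([np.median(np.delete(x, i, axis=0), axis=0)
                    for i in range(n)])
    var = (n - 1) / n * np.sum((loo - loo.mean(axis=0)) ** 2, axis=0)
    return full, np.sqrt(var)
```

Because the jackknife variance is built from the spread of the leave-one-out stacks, a single anomalously noisy station inflates the error bars rather than silently biasing the stacked spectrum.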
Orthogonal control of the frequency comb dynamics of a mode-locked laser diode.
Holman, Kevin W; Jones, David J; Ye, Jun; Ippen, Erich P
2003-12-01
We have performed detailed studies on the dynamics of a frequency comb produced by a mode-locked laser diode (MLLD). Orthogonal control of the pulse repetition rate and the pulse-to-pulse carrier-envelope phase slippage is achieved by appropriate combinations of the respective error signals to actuate the diode injection current and the saturable absorber bias voltage. Phase coherence is established between the MLLD at 1550 nm and a 775-nm mode-locked Ti:sapphire laser working as part of an optical atomic clock.
Feedback attitude sliding mode regulation control of spacecraft using arm motion
NASA Astrophysics Data System (ADS)
Shi, Ye; Liang, Bin; Xu, Dong; Wang, Xueqian; Xu, Wenfu
2013-09-01
The problem of spacecraft attitude regulation based on the reaction of arm motion has attracted extensive attention from both engineering and academic fields. Most solutions to the manipulator's motion tracking problem achieve only asymptotic stabilization, so such controllers cannot realize precise attitude regulation because of the existence of non-holonomic constraints. Thus, sliding mode control algorithms are adopted to stabilize the tracking error with a zero transient process. Due to the switching effects of the variable structure controller, once the tracking error reaches the designed hyper-plane, it is restricted to this plane permanently even in the presence of external disturbances, so precise attitude regulation can be achieved. Furthermore, taking non-zero initial tracking errors and the chattering phenomenon into consideration, saturation functions are used to replace sign functions to smooth the control torques. The relations between the upper bounds of the tracking errors and the controller parameters are derived to reveal the physical characteristics of the controller. Mathematical models of a free-floating space manipulator are established and simulations are conducted in the end. The results show that the spacecraft's attitude can be regulated to the desired position by using the proposed algorithm, with a steady-state error of 0.0002 rad. In addition, the joint tracking trajectory is smooth, and the joint tracking errors converge to zero quickly with a satisfactory continuous joint control input. The proposed research provides a feasible solution for spacecraft attitude regulation using arm motion and improves the precision of spacecraft attitude regulation.
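The sign-to-saturation substitution used to suppress chattering can be sketched for a single degree of freedom. This is a generic illustration of the technique, with made-up gains; it is not the paper's multi-body controller.

```python
def sat(x, width):
    """Saturation function replacing sign(): linear inside a boundary
    layer of the given width, clipped to +/-1 outside. This is what
    smooths the control torque and suppresses chattering."""
    return max(-1.0, min(1.0, x / width))

def sliding_mode_torque(error, error_rate, lam, k, width):
    """Control torque for the sliding surface s = error_rate + lam*error
    (illustrative 1-DOF form; lam, k, width are hypothetical gains)."""
    s = error_rate + lam * error
    return -k * sat(s, width)
```

Outside the boundary layer the control acts like the discontinuous sign-function law; inside it, the torque varies linearly with the sliding variable, trading a small steady-state error band for a continuous control input.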
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system-level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system-level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
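For on-off keying with additive Gaussian noise, the bit-error rate is commonly written in terms of the Q-factor as BER = ½·erfc(Q/√2). The sketch below illustrates that standard relation only; it does not reproduce the paper's scintillation statistics or the adaptive-optics compensation terms.

```python
import math

def ber_ook(q_factor):
    """Bit-error rate for on-off keying in additive Gaussian noise,
    via the standard Q-factor relation BER = 0.5*erfc(Q/sqrt(2))."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2))
```

The steep dependence on Q (Q = 6 already gives BER near 1e-9) is why even partial scintillation compensation by adaptive optics can improve the bit-error rate by several orders of magnitude.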
Performance of the Keck Observatory adaptive-optics system.
van Dam, Marcos A; Le Mignant, David; Macintosh, Bruce A
2004-10-10
The adaptive-optics (AO) system at the W. M. Keck Observatory is characterized. We calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. The measurement noise and bandwidth errors are obtained by modeling the control loops and recording residual centroids. Results of sky performance tests are presented: The AO system is shown to deliver images with average Strehl ratios of as much as 0.37 at 1.58 μm when a bright guide star is used and of 0.19 for a magnitude 12 star. The images are consistent with the predicted wave-front error based on our error budget estimates.
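The link between an error budget and a delivered Strehl ratio is usually made through the extended Maréchal approximation, S ≈ exp(−σ²), with σ the residual RMS wavefront error in radians of phase. A minimal sketch of that approximation (not the Keck team's full budget model):

```python
import math

def strehl_marechal(wfe_rms_nm, wavelength_nm):
    """Extended Marechal approximation: Strehl ratio from residual RMS
    wavefront error, S = exp(-(2*pi*sigma/lambda)**2)."""
    phase_rms = 2 * math.pi * wfe_rms_nm / wavelength_nm
    return math.exp(-phase_rms ** 2)
```

Under this approximation, the reported Strehl of 0.37 at 1.58 μm corresponds to roughly 250 nm of residual RMS wavefront error.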
Orphanidou, Christina
2017-02-01
A new method for extracting the respiratory rate from ECG and PPG obtained via wearable sensors is presented. The proposed technique employs Ensemble Empirical Mode Decomposition in order to identify the respiration "mode" from the noise-corrupted Heart Rate Variability/Pulse Rate Variability and Amplitude Modulation signals extracted from ECG and PPG signals. The technique was validated with respect to a Respiratory Impedance Pneumography (RIP) signal using the mean absolute and average relative errors for a group of ambulatory hospital patients. We compared approaches using single respiration-induced modulations on the ECG and PPG signals with approaches fusing the different modulations. Additionally, we investigated whether the presence of both the simultaneously recorded ECG and PPG signals provided a benefit in the overall system performance. Our method outperformed state-of-the-art ECG- and PPG-based algorithms and gave the best results over the whole database with a mean error of 1.8 bpm for 1-min estimates when using the fused ECG modulations, corresponding to a relative error of 10.3%. No statistically significant differences were found when comparing the ECG-, PPG- and ECG/PPG-based approaches, indicating that the PPG can be used as a valid alternative to the ECG for applications using wearable sensors. While the presence of both the ECG and PPG signals did not provide an improvement in the estimation error, it increased the proportion of windows for which an estimate was obtained by at least 9%, indicating that the use of two simultaneously recorded signals might be desirable in high-acuity cases where an RR estimate is required more frequently. Copyright © 2016 Elsevier Ltd. All rights reserved.
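Once a respiration-induced modulation signal has been isolated (by EEMD in the paper), the rate itself is typically read off as the dominant spectral peak within a plausible breathing band. The sketch below uses a plain FFT as a simplified stand-in for the mode selected by EEMD; the band limits are assumptions, not the paper's parameters.

```python
import numpy as np

def resp_rate_bpm(modulation, fs_hz, band=(0.1, 0.7)):
    """Respiratory rate (breaths/min) as the dominant spectral peak of a
    respiration-induced modulation signal, restricted to an assumed
    breathing band of 0.1-0.7 Hz (6-42 bpm)."""
    x = np.asarray(modulation, dtype=float)
    x = x - x.mean()  # remove DC so it cannot win the peak search
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs_hz)
    mag = np.abs(np.fft.rfft(x))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[mask][np.argmax(mag[mask])]
    return 60.0 * peak
```

On a clean synthetic modulation at 0.25 Hz this returns 15 bpm; on real wearable data the EEMD step is what makes the peak identifiable in the first place.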
... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vazquez Jauregui, Eric
2008-08-01
We studied several Ξc+ decay modes, most of them with a hyperon in the final state, and determined their branching ratios. The data used in this analysis come from the fixed-target experiment SELEX, a multi-stage spectrometer with high acceptance for forward interactions, which took data during 1996 and 1997 at Fermilab with 600 GeV/c (mainly Σ-, π-) and 540 GeV/c (mainly p) beams incident on copper and carbon targets. The thesis mainly details the first observation of two Cabibbo-suppressed decay modes, Ξc+ → Σ+π-π+ and Ξc+ → Σ-π+π+. The branching ratio relative to the Cabibbo-favored mode Ξc+ → Ξ-π+π+ is measured to be Γ(Ξc+ → Σ-π+π+)/Γ(Ξc+ → Ξ-π+π+) = 0.184 ± 0.086. Systematic studies have been performed to check the stability of the measurements by varying all cuts used in the event selection over a wide interval; we observe no evidence of any trend, so the systematic error is negligible in the final results because it does not affect the quadrature sum of the total error. The branching ratios for the same decay modes of the Λc+ are measured to check the methodology of the analysis. The branching ratio of the decay mode Λc+ → Σ+π-π+ is measured relative to Λc+ → pK-π+, while that of the decay mode Λc+ → Σ-π+π+ is measured relative to Λc+ → Σ+π-π+, as they have been reported earlier. The results for the control modes are Γ(Λc+ → Σ+π-π+)/Γ(Λc+ → pK-π+) = 0.716 ± 0.144 and Γ(Λc+ → Σ-π+π+)/Γ(Λc+ → Σ+π-π+) = 0.382 ± 0.104. The branching ratio of the decay mode Ξc+ → pK-π+ relative to Ξc+ → Ξ-π+π+ is considered as another control mode; the measured value is Γ(Ξc+ → pK-π+)/Γ(Ξc+ → Ξ-π+π+) = 0.194 ± 0.054. Systematic studies have also been performed for the control modes, and all systematic variations are small compared to the statistical error.
We also report the first observation of two more decay modes, the Cabibbo-suppressed decay Ξc+ → Σ-K+π+ and the doubly Cabibbo-suppressed decay Ξc+ → Σ+K+π-, but their branching ratios have not yet been measured.
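The quoted uncertainties on ratios such as Γ₁/Γ₂ follow from standard quadrature propagation of independent statistical errors. A minimal sketch with illustrative yields (not SELEX event counts):

```python
import math

def ratio_with_error(n1, s1, n2, s2):
    """Branching-ratio estimate n1/n2 with the statistical errors s1, s2
    propagated in quadrature, assuming the two yields are independent."""
    r = n1 / n2
    sigma = r * math.sqrt((s1 / n1) ** 2 + (s2 / n2) ** 2)
    return r, sigma
```

The quadrature sum is also why a negligible systematic term leaves the total error essentially unchanged, as noted in the abstract.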
NASA Astrophysics Data System (ADS)
Chaudhary, Sushank; Amphawan, Angela
2018-07-01
Radio over free space optics (Ro-FSO) provides an ambitious platform for seamless integration of radio networks with optical networks. Three independent channels, each carrying 2.5 Gbps data on a 5 GHz radio carrier, are successfully transmitted over a free-space link of 2.5 km by using mode division multiplexing (MDM) of three Laguerre-Gaussian modes, LG00, LG01, and LG02, in conjunction with solid-core photonic crystal fibers (SC-PCFs). Moreover, the SC-PCFs are used as mode selectors in the proposed MDM-Ro-FSO system. The results are reported in terms of bit error rate, mode spectrum, and spatial profiles. The performance of the proposed Ro-FSO system is also evaluated under the influence of atmospheric attenuation in the form of different levels of fog, namely light fog, thin fog, and heavy fog.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilson, Erik P.; Davidson, Ronald C.; Efthimion, Philip C.
Transverse dipole and quadrupole modes have been excited in a one-component cesium ion plasma trapped in the Paul Trap Simulator Experiment (PTSX) in order to characterize their properties and understand the effect of their excitation on equivalent long-distance beam propagation. The PTSX device is a compact laboratory Paul trap that simulates the transverse dynamics of a long, intense charge bunch propagating through an alternating-gradient transport system by putting the physicist in the beam's frame of reference. A pair of arbitrary function generators was used to apply trapping-voltage waveform perturbations with a range of frequencies and, by changing which electrodes were driven with the perturbation, with either a dipole or quadrupole spatial structure. The results presented in this paper explore how the effect of the perturbation voltage depends on the perturbation duration and amplitude. Perturbations were also applied that simulate the effect of random lattice errors that exist in an accelerator with quadrupole magnets that are misaligned or have variance in their field strength. The experimental results quantify the growth in the equivalent transverse beam emittance that occurs due to the applied noise and demonstrate that the random lattice errors interact with the trapped plasma through the plasma's internal collective modes. Coherent periodic perturbations were applied to simulate the effects of magnet errors in circular machines such as storage rings. The trapped one-component plasma is strongly affected when the perturbation frequency is commensurate with a plasma mode frequency. The experimental results, which help in understanding the physics of quiescent intense beam propagation over large distances, are compared with analytic models.
Dynamic performance of MEMS deformable mirrors for use in an active/adaptive two-photon microscope
NASA Astrophysics Data System (ADS)
Zhang, Christian C.; Foster, Warren B.; Downey, Ryan D.; Arrasmith, Christopher L.; Dickensheets, David L.
2016-03-01
Active optics can facilitate two-photon microscopic imaging deep in tissue. We are investigating fast focus control mirrors used in concert with an aberration correction mirror to control the axial position of focus and system aberrations dynamically during scanning. With an adaptive training step, sample-induced aberrations may be compensated as well. If sufficiently fast and precise, active optics may be able to compensate under-corrected imaging optics as well as sample aberrations to maintain diffraction-limited performance throughout the field of view. Toward this end we have measured a Boston Micromachines Corporation Multi-DM 140 element deformable mirror, and a Revibro Optics electrostatic 4-zone focus control mirror to characterize dynamic performance. Tests for the Multi-DM included both step response and sinusoidal frequency sweeps of specific Zernike modes. For the step response we measured 10%-90% rise times for the target Zernike amplitude, and wavefront rms error settling times. Frequency sweeps identified the 3 dB bandwidth of the mirror when attempting to follow a sinusoidal amplitude trajectory for a specific Zernike mode. For five tested Zernike modes (defocus, spherical aberration, coma, astigmatism and trefoil) we find error settling times for mode amplitudes up to 400 nm to be less than 52 μs, and 3 dB frequencies range from 6.5 kHz to 10 kHz. The Revibro Optics mirror was tested for step response only, with an error settling time of 80 μs for a large 3 μm defocus step, and a settling time of only 18 μs for a 400 nm spherical aberration step. These response speeds are sufficient for intra-scan correction at scan rates typical of two-photon microscopy.
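Extracting a 10%-90% rise time from a sampled step response, as done for the Multi-DM above, can be sketched in a few lines. This is a generic first-crossing implementation assuming NumPy, not the authors' measurement code; it assumes the response actually crosses both thresholds.

```python
import numpy as np

def rise_time_10_90(t, y, final_value):
    """10%-90% rise time from a sampled step response, taken between the
    first samples crossing 10% and 90% of the final value."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    i10 = np.argmax(y >= 0.1 * final_value)  # first True index
    i90 = np.argmax(y >= 0.9 * final_value)
    return t[i90] - t[i10]
```

As a sanity check, a first-order response 1 − exp(−t/τ) has a 10%-90% rise time of τ·ln 9 ≈ 2.2τ, which the sampled estimate reproduces.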
Random safety auditing, root cause analysis, failure mode and effects analysis.
Ursprung, Robert; Gray, James
2010-03-01
Improving quality and safety in health care is a major concern for health care providers, the general public, and policy makers. Errors and quality issues are leading causes of morbidity and mortality across the health care industry. There is evidence that patients in the neonatal intensive care unit (NICU) are at high risk for serious medical errors. To facilitate compliance with safe practices, many institutions have established quality-assurance monitoring procedures. Three techniques that have been found useful in the health care setting are failure mode and effects analysis, root cause analysis, and random safety auditing. When used together, these techniques are effective tools for system analysis and redesign focused on providing safe delivery of care in the complex NICU system. Copyright 2010 Elsevier Inc. All rights reserved.
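In practice, failure mode and effects analysis prioritizes failure modes with a Risk Priority Number, the product of severity, occurrence, and detection scores. A minimal sketch of that standard scoring (the 1-10 scale is the common convention, not something specific to the NICU setting described above):

```python
def risk_priority_number(severity, occurrence, detection):
    """FMEA Risk Priority Number: each factor is scored 1-10 and the
    product ranks failure modes; a higher RPN means the failure mode
    deserves earlier attention in system redesign."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are rated on a 1-10 scale")
    return severity * occurrence * detection
```

A hypothetical medication-dosing failure mode scored severity 7, occurrence 3, detection 5 yields an RPN of 105, which a team would then compare against its other analyzed failure modes.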
Schrodinger's catapult II: entanglement between stationary and flying fields
NASA Astrophysics Data System (ADS)
Pfaff, W.; Axline, C.; Burkhart, L.; Vool, U.; Reinhold, P.; Frunzio, L.; Jiang, L.; Devoret, M.; Schoelkopf, R.
Entanglement between nodes is an elementary resource in a quantum network. An important step towards its realization is entanglement between stationary and flying states. Here we experimentally demonstrate entanglement generation between a long-lived cavity memory and traveling mode in circuit QED. A large on/off ratio and fast control over a parametric mixing process allow us to realize conversion with tunable magnitude and duration between standing and flying mode. In the case of half-conversion, we observe correlations between the standing and flying state that confirm the generation of entangled states. We show this for both single-photon and multi-photon states, paving the way for error-correctable remote entanglement. Our system could serve as an essential component in a modular architecture for error-protected quantum information processing.
Geodetic integration of Sentinel-1A IW data using PSInSAR in Hungary
NASA Astrophysics Data System (ADS)
Farkas, Péter; Hevér, Renáta; Grenerczy, Gyula
2015-04-01
ESA's latest Synthetic Aperture Radar (SAR) mission, Sentinel-1, is a huge step forward for SAR interferometry. With its default acquisition mode, the Interferometric Wide Swath Mode (IW), areas at all scales can be mapped with an excellent revisit time of 12 days (while only Sentinel-1A is in orbit). Its operational data policy is also a novelty: it allows scientific users free and unlimited access to the data. IW implements a new type of ScanSAR mode called Terrain Observation with Progressive Scans (TOPS) SAR, which has the same resolution as ScanSAR but a better signal-to-noise ratio distribution. The larger coverage is achieved by rotating the antenna in the azimuth direction; this requires very precise co-registration, because even sub-pixel errors can introduce azimuth phase variations caused by differences in Doppler centroids. In our work we summarize the benefits and drawbacks of the IW mode. We implement the GAMMA Remote Sensing processing chain for such data to map surface motion, with special attention to the co-registration step. Not only traditional InSAR but also the advanced method of Persistent Scatterer InSAR (PSInSAR) is performed and presented. PS coverage, along with coherence, is expected to be good due to the small perpendicular and temporal baselines. We also integrate these measurements into national geodetic networks using common reference points, and have installed trihedral corner reflectors at selected sites to aid precise collocation. We thus aim to demonstrate that Sentinel-1 can be used effectively for surface-movement detection and monitoring, and that it can also provide valuable information for the improvement of our networks.
Optimization on fixed low latency implementation of the GBT core in FPGA
NASA Astrophysics Data System (ADS)
Chen, K.; Chen, H.; Wu, W.; Xu, H.; Yao, L.
2017-07-01
In the upgrade of the ATLAS experiment [1], the front-end electronics components are subjected to a large radiation background, while high-speed optical links are required for data transmission between the on-detector and off-detector electronics. The GBT architecture and the Versatile Link (VL) project were designed at CERN to support bidirectional high-speed data transmission at a 4.8 Gbps line rate, known as the GBT link [2]. In the ATLAS upgrade, besides the links to on-detector electronics, the GBT link is also used between different off-detector systems. The GBTX ASIC implements the on-detector side of the link; for the off-detector electronics, the GBT architecture is implemented in Field Programmable Gate Arrays (FPGAs), and CERN launched the GBT-FPGA project to provide reference implementations for different FPGA families [3]. In the ATLAS upgrade framework, the Front-End LInk eXchange (FELIX) system [4, 5] interfaces the front-end electronics of several ATLAS subsystems. The GBT link is used between them to transfer detector data as well as timing, trigger, control and monitoring information. The trigger signal distributed on the down-link from FELIX to the front-end requires a fixed and low latency. In this paper, several optimizations of the GBT-FPGA IP core are introduced to achieve a lower fixed latency. For FELIX, a common firmware will be used to interface different front-ends with support for both GBT modes: the forward error correction mode and the wide mode. The modified GBT-FPGA core can switch between the GBT modes without FPGA reprogramming. The system clock distribution of the multi-channel FELIX firmware is also discussed.
Yang, F; Cao, N; Young, L; Howard, J; Logan, W; Arbuckle, T; Sponseller, P; Korssjoen, T; Meyer, J; Ford, E
2015-06-01
Though failure mode and effects analysis (FMEA) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge its output has never been validated against data on errors that actually occur. The objective of this study was to perform FMEA of a stereotactic body radiation therapy (SBRT) treatment planning process and validate the results against data recorded within an incident learning system. FMEA of the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, dosimetrists, and IT technologists. Potential failure modes were identified through a systematic review of the process map. Each failure mode was rated for severity, occurrence, and detectability on a scale of one to ten, and a risk priority number (RPN) was computed as the product of the three ratings. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that had been active for two and a half years, and differences between FMEA-anticipated failure modes and existing incidents were identified. FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. Combining both methods yielded a total of 76 possible process failures, of which 13 (17%) were missed by FMEA while 43 (57%) were identified by FMEA only. When scored for RPN, the 13 events missed by FMEA ranked within the lower half of all failure modes and exhibited significantly lower severity than those identified by FMEA (p = 0.02). FMEA, though valuable, is subject to certain limitations: in this study it failed to identify 17% of actual failure modes, though these were of lower risk. Similarly, an incident learning system alone fails to identify a large number of potentially high-severity process errors.
Using FMEA in combination with incident learning may provide a more complete view of the risks within a process.
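The RPN scoring used in the study above can be sketched in a few lines: each failure mode receives 1-10 ratings for severity, occurrence, and detectability, and the RPN is their product. The failure-mode names and ratings below are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int       # 1-10
    occurrence: int     # 1-10
    detectability: int  # 1-10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: product of the three ratings
        return self.severity * self.occurrence * self.detectability

# Hypothetical SBRT-planning failure modes, for illustration only
modes = [
    FailureMode("wrong CT dataset selected", 8, 2, 6),
    FailureMode("dose constraint omitted", 6, 4, 4),
    FailureMode("plan not independently checked", 9, 2, 2),
]

# Rank by RPN, highest risk first
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```

Note that RPN ranking treats the three ratings as interchangeable, which is why the study's comparison against severity alone is informative.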
Sada, Oumer; Melkie, Addisu; Shibeshi, Workineh
2015-09-16
Medication errors (MEs) are an important problem in all hospitalized populations, especially in the intensive care unit (ICU). Little is known about the prevalence of medication prescribing errors in the ICUs of hospitals in Ethiopia. The aim of this study was to assess medication prescribing errors in the ICU of Tikur Anbessa Specialized Hospital using a retrospective cross-sectional analysis of patient cards and medication charts. A total of 220 patient charts were reviewed, covering 1311 patient-days and 882 prescription episodes. 359 MEs were detected, a prevalence of 40 per 100 orders. The most common prescribing errors were omission errors (154; 42.89%), followed by wrong combination (101; 28.13%), wrong abbreviation (48; 13.37%), wrong dose (30; 8.36%), wrong frequency (18; 5.01%) and wrong indication (8; 2.23%). The present study shows that medication errors are common in the medical ICU of Tikur Anbessa Specialized Hospital. These results suggest future targets for prevention strategies to reduce the rate of medication errors.
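The reported breakdown is internally consistent, which can be checked directly from the counts given in the abstract (the "40 per 100 orders" figure is 359/882 rounded down):

```python
# Reported medication-error counts from the abstract
counts = {
    "omission": 154,
    "wrong combination": 101,
    "wrong abbreviation": 48,
    "wrong dose": 30,
    "wrong frequency": 18,
    "wrong indication": 8,
}

total_errors = sum(counts.values())       # should equal the reported 359 MEs
orders = 882                              # prescription episodes
prevalence = 100 * total_errors / orders  # errors per 100 orders (~40.7)

print(f"total errors: {total_errors}, prevalence: {prevalence:.1f} per 100 orders")
for name, n in counts.items():
    print(f"{name}: {100 * n / total_errors:.2f}%")
```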
Differences among Job Positions Related to Communication Errors at Construction Sites
NASA Astrophysics Data System (ADS)
Takahashi, Akiko; Ishida, Toshiro
In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in their perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.
Zhang, De-Long; Zhang, Pei; Zhou, Hao-Jiang; Pun, Edwin Yue-Bun
2008-10-01
We demonstrate that near-stoichiometric Ti:LiNbO3 strip waveguides can be fabricated by carrying out vapor transport equilibration at 1060 °C for 12 h on a congruent LiNbO3 substrate with photolithographically patterned 4-8 μm wide, 115 nm thick Ti strips. Optical characterization shows that these waveguides are single mode at 1.5 μm, with a waveguide loss of 1.3 dB/cm for the TM mode and 1.1 dB/cm for the TE mode. In the width/depth direction of the waveguide, the mode field follows a Gauss/Hermite-Gauss function. Secondary-ion-mass spectrometry (SIMS) was used to study the Ti-concentration profiles in the depth direction and on the surface of the 6 μm wide waveguide. The results show that the Ti profile follows a sum of two error functions along the width direction and a complementary error function in the depth direction. The surface Ti concentration, 1/e width and depth, and mean diffusivities along the width and depth directions of the guide are approximately 3.0 × 10^21 cm^-3, 3.8 μm, 2.6 μm, 0.30 μm²/h and 0.14 μm²/h, respectively. Micro-Raman analysis was carried out on the waveguide end face to characterize the depth profile of the Li composition in the guiding layer. The results show that the depth profile of the Li composition also follows a complementary error function, with a 1/e depth of 3.64 μm. The mean ([Li_Li]+[Ti_Li])/([Nb_Nb]+[Ti_Nb]) ratio in the waveguide layer is about 0.98. The inhomogeneous Li-composition profile results in a varying substrate index in the guiding layer. A two-dimensional refractive-index profile model of the waveguide is proposed by taking the varying substrate index into account and assuming linearity between the Ti-induced index change and the Ti concentration. The net waveguide surface index increments at 1545 nm are 0.0114 and 0.0212 for ordinary and extraordinary rays, respectively.
Based upon the constructed index model, the fundamental mode field profile was calculated using the beam propagation method, and the mode sizes and effective index versus the Ti-strip width were calculated for three lower TM and TE modes using the variational method. An agreement between theory and experiment is obtained.
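The reported profile shape (a sum of two error functions across the strip width, a complementary error function in depth) can be sketched with the standard library's `erf`/`erfc`. The parameter values are taken from the abstract, but the overall normalization and the use of the 1/e widths directly as erf scale lengths are assumptions for illustration:

```python
from math import erf, erfc

# Values quoted in the abstract; normalization is an assumption.
C0 = 3.0e21   # surface Ti concentration, cm^-3
w  = 6.0      # Ti strip width, micrometers
dx = 3.8      # lateral 1/e width, micrometers
dz = 2.6      # 1/e depth, micrometers

def ti_concentration(x, z):
    """Ti concentration at lateral position x and depth z (micrometers).

    Sum of two error functions across the width (in-diffusion from a
    finite-width strip) times a complementary error function in depth.
    """
    lateral = 0.5 * (erf((w / 2 - x) / dx) + erf((w / 2 + x) / dx))
    return C0 * lateral * erfc(z / dz)

# Concentration peaks at the strip center on the surface and falls off
# both laterally and with depth
print(ti_concentration(0.0, 0.0))
```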
Projections of Southern Hemisphere atmospheric circulation interannual variability
NASA Astrophysics Data System (ADS)
Grainger, Simon; Frederiksen, Carsten S.; Zheng, Xiaogu
2017-02-01
An analysis is made of the coherent patterns, or modes, of interannual variability of the Southern Hemisphere 500 hPa geopotential height field under current and projected climate change scenarios. Using three separate multi-model ensembles (MMEs) of Coupled Model Intercomparison Project Phase 5 (CMIP5) models, the interannual variability of the seasonal mean is separated into components related to (1) intraseasonal processes; (2) slowly-varying internal dynamics; and (3) the slowly-varying response to external changes in radiative forcing. In the CMIP5 RCP8.5 and RCP4.5 experiments, there is very little change in the twenty-first century in the intraseasonal component modes, related to the Southern Annular Mode (SAM) and mid-latitude wave processes. The leading three slowly-varying internal component modes are related to SAM, the El Niño-Southern Oscillation (ENSO), and the South Pacific wave (SPW). Structural changes in the slow-internal SAM and ENSO modes do not exceed a qualitative estimate of the spatial sampling error, but there is a consistent increase in the ENSO-related variance. Changes in the SPW mode exceed the sampling error threshold, but cannot be further attributed. Changes in the dominant slowly-varying external mode are related to projected changes in radiative forcing. They reflect thermal expansion of the tropical troposphere and associated changes in the Hadley Cell circulation. Changes in the externally-forced associated variance in the RCP8.5 experiment are an order of magnitude greater than for the internal components, indicating that the SH seasonal mean circulation will be even more dominated by a SAM-like annular structure. Across the three MMEs, there is convergence in the projected response in the slow-external component.
Engineering evaluations and studies. Report for IUS studies
NASA Technical Reports Server (NTRS)
1981-01-01
The reviews, investigations, and analyses of the Inertial Upper Stage (IUS) Spacecraft Tracking and Data Network (STDN) transponder are reviewed. Carrier lock detector performance for Tracking and Data Relay Satellite System (TDRSS) dual-mode operation is discussed, as is the problem of predicting the instantaneous frequency error in the carrier loop. Costas loop performance analysis is critiqued, and the static tracking phase error induced by thermal noise biases is discussed.
Assessing Gaussian Assumption of PMU Measurement Error Using Field Data
Wang, Shaobu; Zhao, Junbo; Huang, Zhenyu; ...
2017-10-13
A Gaussian PMU measurement error has been assumed for many power system applications, such as state estimation, oscillatory-mode monitoring, and voltage stability analysis, to name a few. This letter proposes a simple yet effective approach to assess this assumption by using the stability property of a probability distribution and the concept of redundant measurements. Extensive results using field PMU data from the WECC system reveal that the Gaussian assumption is questionable.
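The idea of using redundant measurements can be sketched as follows: differencing two measurements of the same quantity cancels the true signal and leaves only the combined measurement error, whose moments can then be compared against Gaussian values (skewness 0, excess kurtosis 0). This is a stdlib-only illustration on synthetic data, not the letter's actual test statistic:

```python
import random
import statistics as st

def skew_kurtosis(xs):
    """Sample skewness and excess kurtosis; both near 0 for Gaussian data."""
    mu = st.fmean(xs)
    sd = st.pstdev(xs)
    n = len(xs)
    z = [(x - mu) / sd for x in xs]
    skew = sum(v ** 3 for v in z) / n
    kurt = sum(v ** 4 for v in z) / n - 3.0
    return skew, kurt

# Two hypothetical PMUs measuring the same bus voltage magnitude:
# the difference removes the (slowly varying) true value and leaves
# the combined measurement error of the two devices.
random.seed(0)
true_v = [1.0 + 0.01 * random.random() for _ in range(5000)]
pmu_a = [v + random.gauss(0, 1e-3) for v in true_v]
pmu_b = [v + random.gauss(0, 1e-3) for v in true_v]
residual = [a - b for a, b in zip(pmu_a, pmu_b)]

skew, kurt = skew_kurtosis(residual)
print(f"skewness={skew:.3f}, excess kurtosis={kurt:.3f}")
```

On field data, the letter's point is that these residuals deviate markedly from the Gaussian values this synthetic example produces.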
Influence of model errors in optimal sensor placement
NASA Astrophysics Data System (ADS)
Vincenzi, Loris; Simonini, Laura
2017-02-01
The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placement for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained from Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a correlation function depending on both distance and modal vectors is proposed. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the presence of model errors are tested on 2D and 3D benchmark case studies. The quality of the obtained sensor configuration is measured through independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows higher modes to be estimated better when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.
Field-Line Localized Destabilization of Ballooning Modes in Three-Dimensional Tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willensdorfer, M.; Cote, T. B.; Hegna, C. C.
2017-08-25
Field-line localized ballooning modes have been observed at the edge of high-confinement-mode plasmas in ASDEX Upgrade with rotating 3D perturbations induced by an externally applied n = 2 error field and during a moderate level of edge localized mode mitigation. The observed ballooning modes are localized to the field lines which experience one of the two zero crossings of the radial flux surface displacement during one rotation period. The localization of the ballooning modes agrees very well with the localization of the largest growth rates from infinite-n ideal ballooning stability calculations using a realistic 3D ideal magnetohydrodynamic equilibrium. This analysis predicts a lower stability with respect to the axisymmetric case. The primary mechanism for the locally lower stability is the 3D distortion of the local magnetic shear.
Robust manipulation of light using topologically protected plasmonic modes.
Liu, Chenxu; Gurudev Dutt, M V; Pekker, David
2018-02-05
We propose using a topological plasmonic crystal structure composed of an array of nearly parallel nanowires with unequal spacing for manipulating light. In the paraxial approximation, the Helmholtz equation that describes the propagation of light along the nanowires maps onto the Schrödinger equation of the Su-Schrieffer-Heeger (SSH) model. Using a full three-dimensional finite difference time domain solution of the Maxwell equations, we verify the existence of topological defect modes, with sub-wavelength localization, bound to domain walls of the plasmonic crystal. We show that by manipulating domain walls we can construct spatial mode filters that couple bulk modes to topological defect modes, and topological beam-splitters that couple two topological defect modes. Finally, we show that the structures are tolerant to fabrication errors with an inverse length-scale smaller than the topological band gap.
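The mapping onto the SSH model means the defect modes can be illustrated with a simple tight-binding calculation: a chain with alternating hoppings and a domain wall where the dimerization pattern flips hosts a mid-gap mode localized at the wall. The parameters below are illustrative, not the paper's plasmonic values:

```python
import numpy as np

# SSH tight-binding chain with alternating hoppings t1, t2 and a
# domain wall at the center where the dimerization pattern flips.
t1, t2 = 1.0, 0.4
n_sites = 101
wall = n_sites // 2

H = np.zeros((n_sites, n_sites))
for i in range(n_sites - 1):
    # the strong/weak alternation flips at the wall, so site `wall`
    # sits between two weak bonds
    if i < wall:
        t = t1 if i % 2 == 0 else t2
    else:
        t = t2 if i % 2 == 0 else t1
    H[i, i + 1] = H[i + 1, i] = -t

energies, states = np.linalg.eigh(H)
i0 = int(np.argmin(np.abs(energies)))  # mid-gap (topological) defect mode
mode = np.abs(states[:, i0]) ** 2
print("defect-mode energy:", energies[i0])
print("weight within 5 sites of the wall:", mode[wall - 5:wall + 6].sum())
```

The near-zero energy and the strong localization at the wall are the chiral-symmetry-protected features that make the plasmonic modes robust to fabrication disorder.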
Zhou, Ting; Jia, Hao; Ding, Jianfeng; Zhang, Lei; Fu, Xin; Yang, Lin
2018-04-02
We present a silicon thermo-optic 2×2 four-mode optical switch optimized for optical space switching plus local optical mode switching. Four asymmetric directional couplers are utilized for mode multiplexing and de-multiplexing. Sixteen 2×2 single-mode optical switches based on balanced thermally tunable Mach-Zehnder interferometers are exploited for the switching function. The measured insertion losses are 8.0-12.2 dB and the optical signal-to-noise ratios are larger than 11.2 dB in the wavelength range 1525-1565 nm. The optical links in the "all-bar" and "all-cross" states exhibit less than 2.0 dB and 1.4 dB power penalties, respectively, below 10^-9 bit error rate for 40 Gbps data transmission.
Teaching Common Errors in Applying a Procedure.
ERIC Educational Resources Information Center
Marcone, Stephen; Reigeluth, Charles M.
1988-01-01
Discusses study that investigated whether or not the teaching of matched examples and nonexamples in the form of common errors could improve student performance in undergraduate music theory courses. Highlights include hypotheses tested, pretests and posttests, and suggestions for further research with different age groups. (19 references)…
Ciaccio, Edward J; Micheli-Tzanakou, Evangelia
2007-07-01
Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. The update of the weight w was based upon the gradient term ∇ of the steepest descent equation [see text], where the error ε is the difference between the primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel during each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε² · Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR of 1:8 and weights initialized far from the optimal. Using a sharply pulsatile cardiac electrogram signal with added noise such that the SNR was 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
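A Parallel-Comparison-style update can be sketched as follows: evaluate ε² at w + Δw and w - Δw in parallel, estimate the gradient with the fixed finite difference in the denominator, and step the weight downhill. This is an illustrative re-implementation based on the abstract, with invented signal parameters, not the authors' code:

```python
import math
import random

def anc_parallel_comparison(primary, reference, w0=0.0, dw=0.01, mu=0.01):
    """Adaptive noise canceling with a fixed finite-difference gradient.

    At each time k, eps^2 is computed at w + dw and w - dw in parallel;
    the fixed denominator 2*dw avoids the instability of dividing by a
    variable weight change.
    """
    w = w0
    history = []
    for p, r in zip(primary, reference):
        e_plus = (p - (w + dw) * r) ** 2
        e_minus = (p - (w - dw) * r) ** 2
        grad = (e_plus - e_minus) / (2 * dw)  # d(eps^2)/dw estimate
        w -= mu * grad
        history.append(w)
    # average over the second half to smooth the stochastic jitter
    tail = history[len(history) // 2:]
    return sum(tail) / len(tail)

# Simulated data: primary = signal + 0.8 * noise, reference = the noise alone
random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(2000)]
signal = [math.sin(0.05 * k) for k in range(2000)]
primary = [s + 0.8 * n for s, n in zip(signal, noise)]

w_est = anc_parallel_comparison(primary, noise)
print(f"estimated weight: {w_est:.3f}")  # approaches the true mixing weight 0.8
```

Subtracting `w_est * reference` from `primary` then recovers the underlying signal with the common-mode noise largely removed.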
At the cross-roads: an on-road examination of driving errors at intersections.
Young, Kristie L; Salmon, Paul M; Lenné, Michael G
2013-09-01
A significant proportion of road trauma occurs at intersections, so understanding the nature of driving errors at intersections has the potential to lead to significant injury reductions. To further understand how the complexity of modern intersections shapes driver behaviour, errors made at intersections are compared to errors made mid-block, and the role of wider systems failures in intersection error causation is investigated in an on-road study. Twenty-five participants drove a pre-determined urban route incorporating 25 intersections. Two in-vehicle observers recorded the errors made while a range of other data was collected, including driver verbal protocols, video, driver eye-glance behaviour and vehicle data (e.g., speed, braking and lane position). Participants also completed a post-trial cognitive task analysis interview. Participants were found to make 39 specific error types, with speeding violations the most common. Participants made significantly more errors at intersections than mid-block, with misjudgement, action and perceptual/observation errors more commonly observed at intersections. Traffic signal configuration was found to play a key role in intersection error causation, with drivers making more errors at partially signalised than at fully signalised intersections. Copyright © 2012 Elsevier Ltd. All rights reserved.
[Diagnostic Errors in Medicine].
Buser, Claudia; Bankova, Andriyana
2015-12-09
Recognizing diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care than in hospital settings; on the other hand, inpatient errors are more severe than outpatient errors.
Effect of satellite formations and imaging modes on global albedo estimation
NASA Astrophysics Data System (ADS)
Nag, Sreeja; Gatebe, Charles K.; Miller, David W.; de Weck, Olivier L.
2016-05-01
We confirm the applicability of using small satellite formation flight for multi-angular earth observation to retrieve global, narrow band, narrow field-of-view albedo. The value of formation flight is assessed using a coupled systems engineering and science evaluation model, driven by Model Based Systems Engineering and Observing System Simulation Experiments. Albedo errors are calculated against bi-directional reflectance data obtained from NASA airborne campaigns made by the Cloud Absorption Radiometer for the seven major surface types, binned using MODIS' land cover map - water, forest, cropland, grassland, snow, desert and cities. A full tradespace of architectures with three to eight satellites, maintainable orbits and imaging modes (collective payload pointing strategies) are assessed. For an arbitrary 4-sat formation, changing the reference, nadir-pointing satellite dynamically reduces the average albedo error to 0.003, from 0.006 found in the static reference case. Tracking pre-selected waypoints with all the satellites reduces the average error further to 0.001, allows better polar imaging and continued operations even with a broken formation. An albedo error of 0.001 translates to 1.36 W/m2 or 0.4% in Earth's outgoing radiation error. Estimation errors are found to be independent of the satellites' altitude and inclination, if the nadir-looking is changed dynamically. The formation satellites are restricted to differ in only right ascension of planes and mean anomalies within slotted bounds. Three satellites in some specific formations show average albedo errors of less than 2% with respect to airborne, ground data and seven satellites in any slotted formation outperform the monolithic error of 3.6%. In fact, the maximum possible albedo error, purely based on angular sampling, of 12% for monoliths is outperformed by a five-satellite formation in any slotted arrangement and an eight-satellite formation can bring that error down fourfold to 3%.
More than 70% ground spot overlap between the satellites is possible with 0.5° of pointing accuracy, 2 km of GPS accuracy and commands uplinked once a day. The formations can be maintained at less than 1 m/s of monthly ΔV per satellite.
Overview of Initial NSTX-U Experimental Operations
NASA Astrophysics Data System (ADS)
Battaglia, Devon; the NSTX-U Team
2016-10-01
Initial operation of the National Spherical Torus Experiment Upgrade (NSTX-U) has satisfied a number of commissioning milestones, including demonstration of discharges that exceed the field and pulse length of NSTX. ELMy H-mode operation at the no-wall βN limit is obtained with boronized wall conditioning. Peak H-mode parameters include: Ip = 1 MA, BT0 = 0.63 T, WMHD = 330 kJ, βN = 4, βN/li = 6, κ = 2.3, τE,tot > 50 ms. Access to high-performance H-mode scenarios with long MHD-quiescent periods is enabled by the resilient timing of the L-H transition via feedback control of the diverting time and shape, and correction of the dominant n = 1 error fields during the Ip ramp. Stationary L-mode discharges have been realized up to 1 MA with 2 s discharges achieved at Ip = 650 kA. The long-pulse L-mode discharges enabled by the new central solenoid supported initial experiments on error field measurements and correction, plasma shape control, controlled discharge ramp-down, L-mode transport and fast ion physics. Increased off-axis current drive and reduction of fast ion instabilities has been observed with the new, more tangential neutral beamline. The initial results support that access to increased field, current and heating at low aspect ratio expands the regimes available to develop scenarios, diagnostics and predictive models that inform the design and optimization of future burning plasma tokamak devices, including ITER. Work Supported by U.S. DOE Contract No. DE-AC02-09CH11466.
Investigating System Dependability Modeling Using AADL
NASA Technical Reports Server (NTRS)
Hall, Brendan; Driscoll, Kevin R.; Madl, Gabor
2013-01-01
This report describes Architecture Analysis & Design Language (AADL) models for a diverse set of fault-tolerant, embedded data networks and describes the methods and tools used to create these models. It also includes error models per the AADL Error Annex. Some networks were modeled using Error Detection Isolation Containment Types (EDICT). This report gives a brief description of each of the networks, a description of its modeling, the model itself, and evaluations of the tools used for creating the models. The methodology includes a naming convention that supports a systematic way to enumerate all of the potential failure modes.
An Alternate Method for Estimating Dynamic Height from XBT Profiles Using Empirical Vertical Modes
NASA Technical Reports Server (NTRS)
Lagerloef, Gary S. E.
1994-01-01
A technique is presented that applies modal decomposition to estimate dynamic height (0-450 db) from Expendable Bathythermograph (XBT) temperature profiles. Salinity-Temperature-Depth (STD) data are used to establish empirical relationships between vertically integrated temperature profiles and empirical dynamic height modes, which are then applied to XBT data to estimate dynamic height. A standard error of 0.028 dynamic meters is obtained for the waters of the Gulf of Alaska, an ocean region subject to substantial freshwater buoyancy forcing and with a T-S relationship that has considerable scatter. The residual error is a substantial improvement over the conventional T-S correlation technique when applied to this region. Systematic errors between estimated and true dynamic height were evaluated. The 20-year-long time series at Ocean Station P (50°N, 145°W) indicated weak interannual, but no seasonal, variations in the error, and there were no evident systematic alongshore variations in the error in the ocean boundary current regime near the perimeter of the Alaska gyre. The results prove satisfactory for the purpose of this work, which is to generate dynamic height from XBT data for co-analysis with satellite altimeter data, given that the altimeter height precision is likewise on the order of 2-3 cm. While the technique has not been applied to other ocean regions where the T-S relation has less scatter, it could provide some improvement over previously applied methods there as well.
NASA Technical Reports Server (NTRS)
Knox, C. E.
1978-01-01
Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
Leitner, Jordan B.; Duran-Jordan, Kelly; Magerman, Adam B.; Schmader, Toni; Allen, John J. B.
2015-01-01
This study assessed whether individual differences in self-oriented neural processing were associated with performance perceptions of minority students under stereotype threat. Resting electroencephalographic activity recorded in white and minority participants was used to predict later estimates of task errors and self-doubt on a presumed measure of intelligence. We assessed spontaneous phase-locking between dipole sources in left lateral parietal cortex (LPC), precuneus/posterior cingulate cortex (P/PCC), and medial prefrontal cortex (MPFC); three regions of the default mode network (DMN) that are integral for self-oriented processing. Results revealed that minorities with greater LPC-P/PCC phase-locking in the theta band reported more accurate error estimations. All individuals experienced less self-doubt to the extent they exhibited greater LPC-MPFC phase-locking in the alpha band but this effect was driven by minorities. Minorities also reported more self-doubt to the extent they overestimated errors. Findings reveal novel neural moderators of stereotype threat effects on subjective experience. Spontaneous synchronization between DMN regions may play a role in anticipatory coping mechanisms that buffer individuals from stereotype threat. PMID:25398433
Wang, Yunyun; Liu, Ye; Deng, Xinli; Cong, Yulong; Jiang, Xingyu
2016-12-15
Although conventional enzyme-linked immunosorbent assays (ELISA) and related assays have been widely applied for the diagnosis of diseases, many of them suffer from large error variance for monitoring the concentration of targets over time, and insufficient limit of detection (LOD) for assaying dilute targets. We herein report a readout mode of ELISA based on the binding between peptidic β-sheet structure and Congo Red. The formation of peptidic β-sheet structure is triggered by alkaline phosphatase (ALP). For the detection of P-Selectin, which is a crucial indicator for evaluating thrombus diseases in clinic, the 'β-sheet and Congo Red' mode significantly decreases both the error variance and the LOD (from 9.7 ng/ml to 1.1 ng/ml) of detection, compared with commercial ELISA (an existing gold-standard method for detecting P-Selectin in clinic). Considering the wide range of ALP-based antibodies for immunoassays, this novel method could be applicable to the analysis of many types of targets. Copyright © 2016 Elsevier B.V. All rights reserved.
A forward error correction technique using a high-speed, high-rate single chip codec
NASA Astrophysics Data System (ADS)
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10(sup -5) bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32 n data bits followed by 32 overhead bits.
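The quoted 2.5 dB coding gain can be put in context against the uncoded curve. A stdlib-only sketch: uncoded BPSK over AWGN has BER = 0.5*erfc(sqrt(Eb/N0)); a coded system with 2.5 dB of gain reaches the same BER at an Eb/N0 that much lower. The bisection bound of 15 dB is an assumption for the search.

```python
import math

def bpsk_ber(ebno_db):
    """Uncoded BPSK bit-error rate over an AWGN channel."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno))

def ebno_for_ber(target, lo=0.0, hi=15.0):
    """Bisect for the Eb/N0 (dB) that gives the target BER (BER decreases
    monotonically in Eb/N0)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if bpsk_ber(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

uncoded_db = ebno_for_ber(1e-5)  # roughly 9.6 dB for uncoded BPSK
coded_db = uncoded_db - 2.5      # hard-decision gain quoted in the abstract
```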
Prevalence of teen driver errors leading to serious motor vehicle crashes.
Curry, Allison E; Hafetz, Jessica; Kallan, Michael J; Winston, Flaura K; Durbin, Dennis R
2011-07-01
Motor vehicle crashes are the leading cause of adolescent deaths. Programs and policies should target the most common and modifiable reasons for crashes. We estimated the frequency of critical reasons for crashes involving teen drivers, and examined in more depth specific teen driver errors. The National Highway Traffic Safety Administration's (NHTSA) National Motor Vehicle Crash Causation Survey collected data at the scene of a nationally representative sample of 5470 serious crashes between 7/05 and 12/07. NHTSA researchers assigned a single driver, vehicle, or environmental factor as the critical reason for the event immediately leading to each crash. We analyzed crashes involving 15-18 year old drivers. 822 teen drivers were involved in 795 serious crashes, representing 335,667 teens in 325,291 crashes. Driver error was by far the most common reason for crashes (95.6%), as opposed to vehicle or environmental factors. Among crashes with a driver error, a teen made the error 79.3% of the time (75.8% of all teen-involved crashes). Recognition errors (e.g., inadequate surveillance, distraction) accounted for 46.3% of all teen errors, followed by decision errors (e.g., following too closely, too fast for conditions) (40.1%) and performance errors (e.g., loss of control) (8.0%). Inadequate surveillance, driving too fast for conditions, and distracted driving together accounted for almost half of all crashes. Aggressive driving behavior, drowsy driving, and physical impairments were less commonly cited as critical reasons. Males and females had similar proportions of broadly classified errors, although females were specifically more likely to make inadequate surveillance errors. Our findings support prioritization of interventions targeting driver distraction and surveillance and hazard awareness training. Copyright © 2010 Elsevier Ltd. All rights reserved.
Faught, Jacqueline Tonigan; Balter, Peter A; Johnson, Jennifer L; Kry, Stephen F; Court, Laurence E; Stingo, Francesco C; Followill, David S
2017-11-01
The objective of this work was to assess both the perception of failure modes in Intensity Modulated Radiation Therapy (IMRT) when the linac is operated at the edge of the tolerances given in AAPM TG-40 (Kutcher et al.) and TG-142 (Klein et al.) and the application of FMEA to this specific section of the IMRT process. An online survey was distributed to approximately 2000 physicists worldwide that participate in quality services provided by the Imaging and Radiation Oncology Core - Houston (IROC-H). The survey briefly described eleven different failure modes covered by basic quality assurance in step-and-shoot IMRT at or near TG-40 (Kutcher et al.) and TG-142 (Klein et al.) tolerance criteria levels. Respondents were asked to estimate the worst case scenario percent dose error that could be caused by each of these failure modes in a head and neck patient as well as the FMEA scores: Occurrence, Detectability, and Severity. Risk priority number (RPN) scores were calculated as the product of these scores. Demographic data were also collected. A total of 181 individual and three group responses were submitted. 84% were from North America. Most (76%) individual respondents performed at least 80% clinical work and 92% were nationally certified. Respondent medical physics experience ranged from 2.5 to 45 yr (average 18 yr). A total of 52% of individual respondents were at least somewhat familiar with FMEA, while 17% were not familiar. Several IMRT techniques, treatment planning systems, and linear accelerator manufacturers were represented. All failure modes received widely varying scores ranging from 1 to 10 for occurrence, at least 1-9 for detectability, and at least 1-7 for severity. Ranking failure modes by RPN scores also resulted in large variability, with each failure mode being ranked both most risky (1st) and least risky (11th) by different respondents. On average MLC modeling had the highest RPN scores.
Individual estimated percent dose errors and severity scores positively correlated (P < 0.01) for each FM as expected. No universal correlations were found between the demographic information collected and scoring, percent dose errors or ranking. Failure modes investigated overall were evaluated as low to medium risk, with average RPNs less than 110. The ranking of 11 failure modes was not agreed upon by the community. Large variability in FMEA scoring may be caused by individual interpretation and/or experience, reflecting the subjective nature of the FMEA tool. © 2017 American Association of Physicists in Medicine.
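The RPN calculation described above is simply the product of the three FMEA scores. A minimal sketch with hypothetical scores (not values from the survey) showing how failure modes are ranked:

```python
# Minimal FMEA sketch: RPN = Occurrence x Detectability x Severity,
# then rank failure modes by RPN. All scores below are hypothetical.
failure_modes = {
    "MLC modeling":     {"O": 5, "D": 7, "S": 6},
    "Output constancy": {"O": 4, "D": 3, "S": 5},
    "Laser alignment":  {"O": 3, "D": 2, "S": 4},
}

rpn = {name: s["O"] * s["D"] * s["S"] for name, s in failure_modes.items()}
ranking = sorted(rpn, key=rpn.get, reverse=True)  # riskiest first
```

Because RPN is a product of three subjective 1-10 scores, small disagreements in any one score multiply, which is consistent with the large ranking variability the survey found.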
Out-of-This-World Calculations
ERIC Educational Resources Information Center
Kalb, Kristina S.; Gravett, Julie M.
2012-01-01
By following learned rules rather than reasoning, students often fall into common error patterns, something every experienced teacher has observed in the classroom. In their effort to circumvent the developing common error patterns of their students, the authors decided to supplement their math text with two weeklong investigations. The first was…
Ten common errors beginning substance abuse workers make in group treatment.
Greif, G L
1996-01-01
Beginning therapists sometimes make mistakes when working with substance abusers in groups. This article discusses ten common errors that the author has observed. Five center on the therapist's approach and five center on the nuts and bolts of group leadership. Suggestions are offered for how to avoid them.
NASA Technical Reports Server (NTRS)
May, Brian D.
1992-01-01
The experimental NASA satellite, Advanced Communications Technology Satellite (ACTS), introduces new technology for high throughput 30/20 GHz satellite services. Contained in a single communication payload is both a regenerative TDMA system and multiple 800 MHz 'bent pipe' channels routed to spot beams by a switch matrix. While only one mode of operation is typical during any experiment, both modes can operate simultaneously with reduced capability due to sharing of the transponder. NASA-Lewis instituted a ground terminal development program in anticipation of the satellite launch to verify the performance of the switch matrix mode of operations. Specific functions are built into the ground terminal to evaluate rain fade compensation with uplink power control and to monitor satellite transponder performance with bit error rate measurements. These functions were the genesis of the ground terminal's name, Link Evaluation Terminal, often referred to as LET. Connectors are included in LET that allow independent experimenters to run unique modulation or network experiments through ACTS using only the RF transmit and receive portions of LET. Test data indicate that LET will be able to verify important parts of ACTS technology and provide independent experimenters with a useful ground terminal. Lab measurements of major subsystems integrated into LET are presented. Bit error rate is measured with LET in an internal loopback mode.
NASA Astrophysics Data System (ADS)
Miao, Qin; Rahn, J. Richard; Tourovskaia, Anna; Meyer, Michael G.; Neumann, Thomas; Nelson, Alan C.; Seibel, Eric J.
2009-11-01
The practice of clinical cytology relies on bright-field microscopy using absorption dyes like hematoxylin and eosin in the transmission mode, while the practice of research microscopy relies on fluorescence microscopy in the epi-illumination mode. The optical projection tomography microscope is an optical microscope that can generate 3-D images of single cells with isometric high resolution both in absorption and fluorescence mode. Although the depth of field of the microscope objective is in the submicron range, it can be extended by scanning the objective's focal plane. The extended depth of field image is similar to a projection in a conventional x-ray computed tomography. Cells suspended in optical gel flow through a custom-designed microcapillary. Multiple pseudoprojection images are taken by rotating the microcapillary. After these pseudoprojection images are further aligned, computed tomography methods are applied to create 3-D reconstruction. 3-D reconstructed images of single cells are shown in both absorption and fluorescence mode. Fluorescence spatial resolution is measured at 0.35 μm in both axial and lateral dimensions. Since fluorescence and absorption images are taken in two different rotations, mechanical error may cause misalignment of 3-D images. This mechanical error is estimated to be within the resolution of the system.
Aerosol Extinction Profile Mapping with Lognormal Distribution Based on MPL Data
NASA Astrophysics Data System (ADS)
Lin, T. H.; Lee, T. T.; Chang, K. E.; Lien, W. H.; Liu, G. R.; Liu, C. Y.
2017-12-01
This study addresses the mapping of the aerosol vertical distribution profile with a mathematical function. Given the similarity in distribution pattern, a lognormal distribution is examined for mapping the aerosol extinction profile based on MPL (Micro Pulse LiDAR) in situ measurements. The variables of the lognormal distribution are the log mean (μ) and log standard deviation (σ), which are correlated with the aerosol optical depth (AOD) and the planetary boundary layer height (PBLH) associated with the altitude of the extinction peak (Mode) defined in this study. On the basis of 10 years of MPL data with a single peak, the mapping results showed that the mean errors of the Mode and σ retrievals are 16.1% and 25.3%, respectively. The mean error of the σ retrieval can be reduced to 16.5% for cases with a larger distance between the PBLH and the Mode. The proposed method is further applied to the MODIS AOD product to map the extinction profile for the retrieval of PM2.5 from satellite observations. The results indicated good agreement between retrievals and ground measurements when aerosols under 525 meters are well mixed. The feasibility of the proposed method for satellite remote sensing is also suggested by the case study. Keywords: Aerosol extinction profile, Lognormal distribution, MPL, Planetary boundary layer height (PBLH), Aerosol optical depth (AOD), Mode
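One plausible reading of this mapping (a sketch, not the authors' exact parameterization) is an extinction profile of the form sigma_ext(z) = AOD * f(z; μ, σ), where f is a lognormal density in altitude: the profile then integrates vertically to the AOD and peaks at the Mode altitude exp(μ − σ²). All parameter values below are hypothetical.

```python
import math

def lognormal_pdf(z, mu, sigma):
    """Lognormal density in altitude z (km); defined for z > 0."""
    return math.exp(-(math.log(z) - mu) ** 2 / (2 * sigma ** 2)) / \
        (z * sigma * math.sqrt(2 * math.pi))

def extinction_profile(z_grid, aod, mu, sigma):
    """Extinction profile scaled so its vertical integral equals the AOD."""
    return [aod * lognormal_pdf(z, mu, sigma) for z in z_grid]

aod, mu, sigma = 0.3, math.log(1.0), 0.4    # hypothetical values
mode_altitude = math.exp(mu - sigma ** 2)   # lognormal peak, about 0.85 km

dz = 0.01
z_grid = [dz * (i + 0.5) for i in range(2000)]  # midpoints, 0-20 km
profile = extinction_profile(z_grid, aod, mu, sigma)
column = sum(v * dz for v in profile)           # recovers ~AOD
```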
Model validity and frequency band selection in operational modal analysis
NASA Astrophysics Data System (ADS)
Au, Siu-Kui
2016-12-01
Experimental modal analysis aims at identifying the modal properties (e.g., natural frequencies, damping ratios, mode shapes) of a structure using vibration measurements. Two basic questions are encountered when operating in the frequency domain: Is there a mode near a particular frequency? If so, how much spectral data near the frequency can be included for modal identification without incurring significant modeling error? For data with high signal-to-noise (s/n) ratios these questions can be addressed using empirical tools such as singular value spectrum. Otherwise they are generally open and can be challenging, e.g., for modes with low s/n ratios or close modes. In this work these questions are addressed using a Bayesian approach. The focus is on operational modal analysis, i.e., with 'output-only' ambient data, where identification uncertainty and modeling error can be significant and their control is most demanding. The approach leads to 'evidence ratios' quantifying the relative plausibility of competing sets of modeling assumptions. The latter involves modeling the 'what-if-not' situation, which is non-trivial but is resolved by systematic consideration of alternative models and using maximum entropy principle. Synthetic and field data are considered to investigate the behavior of evidence ratios and how they should be interpreted in practical applications.
Systematic Errors in an Air Track Experiment.
ERIC Educational Resources Information Center
Ramirez, Santos A.; Ham, Joe S.
1990-01-01
Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)
NASA Astrophysics Data System (ADS)
Zhang, Yonggao; Gao, Yanli; Long, Lizhong
2012-04-01
Common-mode voltage (CMV) in high-voltage, high-power converters has drawn increasing attention from researchers. A novel common-mode voltage suppression scheme based on a zero-vector PWM strategy (ZVPWM) is presented in this paper. Taking a diode-clamped five-level converter as an example, the principle of the zero-vector PWM common-mode voltage (ZCMVPWM) suppression method is studied in detail. The ZCMVPWM suppression strategy comprises four main parts: locating the sector of the reference voltage vector, locating the small triangular sub-sector of the reference voltage vector, synthesizing the reference vector, and calculating the operating time of each vector. The principles of these four parts are illustrated in detail and the corresponding MATLAB models are established. System simulation and experimental results are provided. The work offers a useful reference for the development and research of multilevel converters.
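The first of the four steps, locating the sector of the reference voltage vector, can be sketched as follows, assuming the standard 60-degree sector convention in the alpha-beta plane (this is an illustrative fragment, not the paper's five-level sub-sector logic):

```python
import math

def locate_sector(v_alpha, v_beta):
    """Sector (1-6) of a reference voltage vector in the alpha-beta plane,
    using the common 60-degree sector convention. Only the first step of the
    ZCMVPWM strategy; the triangular sub-sector logic is omitted."""
    theta = math.atan2(v_beta, v_alpha) % (2 * math.pi)
    return int(theta // (math.pi / 3)) + 1

# A vector at 100 degrees lies in sector 2 (60-120 degrees).
sector = locate_sector(math.cos(math.radians(100)), math.sin(math.radians(100)))
```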
NASA Technical Reports Server (NTRS)
Hahn, Edward C.; Hansman, R. J., Jr.
1992-01-01
An experiment to study how automation, when used in conjunction with datalink for the delivery of ATC clearance amendments, affects the situational awareness of aircrews was conducted. The study was focused on the relationship of situational awareness to automated Flight Management System (FMS) programming of datalinked clearances and the readback of ATC clearances. Situational awareness was tested by issuing nominally unacceptable ATC clearances and measuring whether the error was detected by the subject pilots. The experiment also varied the mode of clearance delivery: Verbal, Textual, and Graphical. The error detection performance and pilot preference results indicate that the automated programming of the FMS may be superior to manual programming. It is believed that automated FMS programming may relieve some of the cognitive load, allowing pilots to concentrate on the strategic implications of a clearance amendment. Also, readback appears to have value, but the small sample size precludes a definite conclusion. Furthermore, because textual and graphical modes of delivery offer different but complementary advantages for cognitive processing, a combination of these modes of delivery may be advantageous in a datalink presentation.
Finite-time containment control of perturbed multi-agent systems based on sliding-mode control
NASA Astrophysics Data System (ADS)
Yu, Di; Ji, Xiang Yang
2018-01-01
Aiming at a faster convergence rate, this paper investigates the finite-time containment control problem for second-order multi-agent systems with norm-bounded non-linear perturbation. When the topology among the followers is strongly connected, a nonsingular fast terminal sliding-mode error is defined, a corresponding discontinuous control protocol is designed, and the appropriate value range of the control parameter is obtained by applying finite-time stability analysis, so that the followers converge to and move along the desired trajectories within the convex hull formed by the leaders in finite time. Furthermore, on the basis of the sliding-mode error defined, corresponding distributed continuous control protocols are investigated with a fast exponential reaching law and a double exponential reaching law, so as to make the followers move to small neighbourhoods of their desired locations and stay within the dynamic convex hull formed by the leaders in finite time, achieving practical finite-time containment control. Meanwhile, we develop the faster control scheme by comparing the convergence rates of these two reaching laws. Simulation examples are given to verify the correctness of the theoretical results.
NASA Astrophysics Data System (ADS)
Lin, Tsung-Chih
2010-12-01
In this paper, a novel direct adaptive interval type-2 fuzzy-neural tracking control scheme, equipped with sliding mode and the Lyapunov synthesis approach, is proposed to handle training data corrupted by noise or rule uncertainties for SISO nonlinear systems involving external disturbances. By employing adaptive fuzzy-neural control theory, update laws are derived for approximating the uncertain nonlinear dynamical system. Meanwhile, the sliding mode control method and the Lyapunov stability criterion are incorporated into the adaptive fuzzy-neural control scheme such that the derived controller is robust with respect to unmodeled dynamics, external disturbances and approximation errors. In comparison with conventional methods, the advocated approach not only guarantees closed-loop stability but also ensures that the output tracking error of the overall system converges to zero asymptotically without prior knowledge of the upper bound of the lumped uncertainty. Furthermore, the chattering effect of the control input is substantially reduced by the proposed technique. Finally, a simulation example is given to illustrate the performance of the proposed method.
Single-Event Effect Performance of a Conductive-Bridge Memory EEPROM
NASA Technical Reports Server (NTRS)
Chen, Dakai; Wilcox, Edward; Berg, Melanie; Kim, Hak; Phan, Anthony; Figueiredo, Marco; Seidleck, Christina; LaBel, Kenneth
2015-01-01
We investigated the heavy ion single-event effect (SEE) susceptibility of the industry’s first stand-alone memory based on conductive-bridge memory (CBRAM) technology. The device is available as an electrically erasable programmable read-only memory (EEPROM). We found that single-event functional interrupt (SEFI) is the dominant SEE type for each operational mode (standby, dynamic read, and dynamic write/read). SEFIs occurred even while the device is statically biased in standby mode. Worst case SEFIs resulted in errors that filled the entire memory space. Power cycle did not always clear the errors. Thus the corrupted cells had to be reprogrammed in some cases. The device is also vulnerable to bit upsets during dynamic write/read tests, although the frequency of the upsets is relatively low. The linear energy transfer threshold for cell upset is between 10 and 20 megaelectron volts per square centimeter per milligram, with an upper limit cross section of 1.6 times 10(sup -11) square centimeters per bit (95 percent confidence level) at 10 megaelectron volts per square centimeter per milligram. In standby mode, the CBRAM array appears invulnerable to bit upsets.
NASA Technical Reports Server (NTRS)
Rosch, E.
1975-01-01
The task of time estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of time estimates is associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. The relationships between the length and variability of time estimates and concurrent task variables under a more complex situation involving simulated flight were clarified. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.
Offset-free rail-to-rail derandomizing peak detect-and-hold circuit
DeGeronimo, Gianluigi; O'Connor, Paul; Kandasamy, Anand
2003-01-01
A peak detect-and-hold circuit eliminates errors introduced by conventional amplifiers, such as common-mode rejection and input voltage offset. The circuit includes an amplifier, three switches, a transistor, and a capacitor. During a detect-and-hold phase, a hold voltage at a non-inverting input terminal of the amplifier tracks an input voltage signal and when a peak is reached, the transistor is switched off, thereby storing a peak voltage in the capacitor. During a readout phase, the circuit functions as a unity gain buffer, in which the voltage stored in the capacitor is provided as an output voltage. The circuit is able to sense signals rail-to-rail and can readily be modified to sense positive, negative, or peak-to-peak voltages. Derandomization may be achieved by using a plurality of peak detect-and-hold circuits electrically connected in parallel.
Lipid Informed Quantitation and Identification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kevin Crowell, PNNL
2014-07-21
LIQUID (Lipid Informed Quantitation and Identification) is a software program that has been developed to enable users to conduct both informed and high-throughput global liquid chromatography-tandem mass spectrometry (LC-MS/MS)-based lipidomics analysis. This newly designed desktop application can quickly identify and quantify lipids from LC-MS/MS datasets while providing a friendly graphical user interface for users to fully explore the data. Informed data analysis simply involves the user specifying an electrospray ionization mode, lipid common name (i.e. PE(16:0/18:2)), and associated charge carrier. A stemplot of the isotopic profile and a line plot of the extracted ion chromatogram are also provided to show the MS-level evidence of the identified lipid. In addition to plots, other information such as intensity, mass measurement error, and elution time are also provided. Typically, a global analysis for 15,000 lipid targets
Simultaneous classical communication and quantum key distribution using continuous variables*
NASA Astrophysics Data System (ADS)
Qi, Bing
2016-10-01
Presently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
Lane, Sandi J; Troyer, Jennifer L; Dienemann, Jacqueline A; Laditka, Sarah B; Blanchette, Christopher M
2014-01-01
Older adults are at greatest risk of medication errors during the transition period of the first 7 days after admission and readmission to a skilled nursing facility (SNF). The aim of this study was to evaluate structure- and process-related factors that contribute to medication errors and harm during transition periods at a SNF. Data for medication errors and potential medication errors during the 7-day transition period for residents entering North Carolina SNFs were from the Medication Error Quality Initiative-Individual Error database from October 2006 to September 2007. The impact of SNF structure and process measures on the number of reported medication errors and harm from errors was examined using bivariate and multivariate model methods. A total of 138 SNFs reported 581 transition period medication errors; 73 (12.6%) caused harm. Chain affiliation was associated with a reduction in the volume of errors during the transition period. One third of all reported transition errors occurred during the medication administration phase of the medication use process, where dose omissions were the most common type of error; however, dose omissions caused harm less often than wrong-dose errors did. Prescribing errors were much less common than administration errors but were much more likely to cause harm. Both structure and process measures of quality were related to the volume of medication errors. However, process quality measures may play a more important role in predicting harm from errors during the transition of a resident into an SNF. Medication errors during transition could be reduced by improving both prescribing processes and transcription and documentation of orders.
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.
Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A; Wetzstein, Gordon
2017-02-28
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
Corlett, P.R.; Canavan, S.V.; Nahum, L.; Appah, F.; Morgan, P.T.
2014-01-01
Introduction. Dreams might represent a window on altered states of consciousness with relevance to psychotic experiences, where reality monitoring is impaired. We examined reality monitoring in healthy, non-psychotic individuals with varying degrees of dream awareness using a task designed to assess confabulatory memory errors – a confusion regarding reality whereby information from the past feels falsely familiar and does not constrain current perception appropriately. Confabulatory errors are common following damage to the ventromedial prefrontal cortex (vmPFC). Ventromedial function has previously been implicated in dreaming and dream awareness. Methods. In a hospital research setting, physically and mentally healthy individuals with high (n = 18) and low (n = 13) self-reported dream awareness completed a computerised cognitive task that involved reality monitoring based on familiarity across a series of task runs. Results. Signal detection theory analysis revealed a more liberal acceptance bias in those with high dream awareness, consistent with the notion of overlap in the perception of dreams, imagination and reality. Conclusions. We discuss the implications of these results for models of reality monitoring and psychosis with a particular focus on the role of vmPFC in default-mode brain function, model-based reinforcement learning and the phenomenology of dreaming and waking consciousness. PMID:25028078
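The "more liberal acceptance bias" reported above is a standard signal detection theory quantity: the response criterion c = -(z(hit rate) + z(false-alarm rate))/2, where lower (more negative) c means a more liberal bias. A minimal sketch with hypothetical hit and false-alarm rates:

```python
from statistics import NormalDist

def criterion_c(hit_rate, fa_rate):
    """Signal-detection response criterion c; lower (more negative) values
    indicate a more liberal acceptance bias."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

# Hypothetical groups: similar sensitivity, different bias. A liberal
# observer says "familiar" more often, raising both hits and false alarms.
c_high_awareness = criterion_c(0.90, 0.40)
c_low_awareness = criterion_c(0.80, 0.25)
```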
Double power series method for approximating cosmological perturbations
NASA Astrophysics Data System (ADS)
Wren, Andrew J.; Malik, Karim A.
2017-04-01
We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well-known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.
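For orientation, the leading-order form of the standard WKB approximation that the FSN method generalizes can be written as follows (generic notation, not the paper's; valid for oscillatory solutions with q > 0):

```latex
% The equation  \delta^{2} y''(t) + q(t)\,y(t) = 0,  with small parameter
% \delta (here related to the inverse wave-number), has the WKB solutions
y(t) \;\approx\; q(t)^{-1/4}\,
  \exp\!\left(\pm\frac{i}{\delta}\int^{t}\!\sqrt{q(s)}\,\mathrm{d}s\right).
```

The FSN double power series extends this single-equation ansatz to systems of ordinary differential equations linear in the derivatives.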
An IMM-Aided ZUPT Methodology for an INS/DVL Integrated Navigation System
Yao, Yiqing
2017-01-01
Inertial navigation system (INS)/Doppler velocity log (DVL) integration is the most common navigation solution for underwater vehicles. Due to the complex underwater environment, the velocity information provided by the DVL always contains some errors. To improve navigation accuracy, zero velocity update (ZUPT) technology is considered, which is an effective algorithm for land vehicles to mitigate the navigation error during the pure INS mode. However, in contrast to ground vehicles, the ZUPT solution cannot be used directly for underwater vehicles because of the existence of water currents. In order to leverage the strengths of both the ZUPT method and the INS/DVL solution, an interactive multiple model (IMM)-aided ZUPT methodology for the INS/DVL-integrated underwater navigation system is proposed. Both the INS/DVL and INS/ZUPT models are constructed and operated in parallel, with weights calculated according to their innovations and innovation covariance matrices. Simulations are conducted to evaluate the proposed algorithm. The results indicate that the IMM-aided ZUPT solution outperforms both the INS/DVL solution and the INS/ZUPT solution in the underwater environment, as it properly distinguishes between ZUPT and non-ZUPT conditions. In addition, the effectiveness of the proposed algorithm during DVL outages is also verified. PMID:28872602
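The innovation-based weighting at the heart of the IMM can be sketched in a deliberately simplified 1-D form. This is illustrative only, not the paper's implementation: the function name and numbers are ours, and a full IMM also includes Markov mixing between models, omitted here.

```python
import math

def imm_model_weights(priors, innovations, innovation_vars):
    """Update model probabilities from each filter's innovation.

    Each model's likelihood is a zero-mean Gaussian evaluated at its
    innovation with its innovation variance; the new weights are the
    normalized products of prior probability and likelihood.
    """
    likelihoods = [
        math.exp(-0.5 * v * v / s) / math.sqrt(2 * math.pi * s)
        for v, s in zip(innovations, innovation_vars)
    ]
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Hypothetical numbers: the model whose innovation is small relative to
# its innovation variance (say, INS/ZUPT while the vehicle holds still)
# quickly dominates the weights.
w = imm_model_weights(priors=[0.5, 0.5], innovations=[0.1, 1.5],
                      innovation_vars=[0.04, 0.04])
print([round(x, 3) for x in w])
```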
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays
NASA Astrophysics Data System (ADS)
Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A.; Wetzstein, Gordon
2017-02-01
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
Graphene Ambipolar Nanoelectronics for High Noise Rejection Amplification.
Liu, Che-Hung; Chen, Qi; Liu, Chang-Hua; Zhong, Zhaohui
2016-02-10
In a modern wireless communication system, signal amplification is critical for overcoming losses during multiple data transformations/processes and long-distance transmission. Common mode and differential mode are two fundamental amplification mechanisms, and they utilize totally different circuit configurations. In this paper, we report a new type of dual-gate graphene ambipolar device with the capability of operating in both common and differential modes to realize signal amplification. The signal goes through two stages of modulation, where its phase can be individually modulated to be either in-phase or out-of-phase at each stage by exploiting the ambipolarity of graphene. As a result, both common- and differential-mode amplification can be achieved within a single device, which is not possible in the conventional circuit configuration. In addition, a common-mode rejection ratio as high as 80 dB can be achieved, making the device suitable for low-noise circuit applications. These results open up new directions for graphene-based ambipolar electronics that greatly simplify RF circuit complexity and the design of multifunction device operation.
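The quoted rejection ratio follows the usual definition, CMRR = 20·log10(|A_d|/|A_cm|); a one-line arithmetic check (the gain values below are hypothetical, not measured device parameters):

```python
import math

def cmrr_db(diff_gain, common_gain):
    """Common-mode rejection ratio in dB: 20*log10(|A_d| / |A_cm|)."""
    return 20 * math.log10(abs(diff_gain) / abs(common_gain))

# An 80 dB CMRR means the differential gain exceeds the common-mode
# gain by a factor of 10^4 (illustrative values):
print(round(cmrr_db(100.0, 0.01), 6))  # → 80.0
```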
Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.
2002-01-01
Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.
Black hole spectroscopy: Systematic errors and ringdown energy estimates
NASA Astrophysics Data System (ADS)
Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav
2018-02-01
The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental ℓ = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
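The ringdown model underlying these questions is a superposition of damped sinusoids plus a late-time tail; schematically (generic notation, not a formula quoted from the paper):

```latex
h(t) \;\simeq\; \sum_{\ell m n} A_{\ell m n}\,
  e^{-(t - t_0)/\tau_{\ell m n}}\,
  \sin\!\bigl(\omega_{\ell m n}(t - t_0) + \phi_{\ell m n}\bigr)
  \;+\; \text{power-law tail}, \qquad t \ge t_0 .
```

In general relativity each pair (ω, τ) is fixed by the remnant's mass and spin, which is why including overtones (n ≥ 1) tightens the mass and spin estimates.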
Active control of fan-generated plane wave noise
NASA Technical Reports Server (NTRS)
Gerhold, Carl H.; Nuckolls, William E.; Santamaria, Odillyn L.; Martinson, Scott D.
1993-01-01
Subsonic propulsion systems for future aircraft may incorporate ultra-high bypass ratio ducted fan engines whose dominant noise source is the fan with blade passage frequency less than 1000 Hz. This low frequency combines with the requirement of a short nacelle to diminish the effectiveness of passive duct liners. Active noise control is seen as a viable method to augment the conventional passive treatments. An experiment to control ducted fan noise using a time domain active adaptive system is reported. The control sound source consists of loudspeakers arrayed around the fan duct. The error sensor location is in the fan duct. The purpose of this experiment is to demonstrate that the in-duct error sensor reduces the mode spillover in the far field, thereby increasing the efficiency of the control system. In this first series of tests, the fan is configured so that predominantly zero order circumferential waves are generated. The control system is found to reduce the blade passage frequency tone significantly in the acoustic far field when the mode orders of the noise source and of the control source are the same. The noise reduction is not as great when the mode orders are not the same even though the noise source modes are evanescent, but the control system converges stably and global noise reduction is demonstrated in the far field. Further experimentation is planned in which the performance of the system will be evaluated when higher order radial and spinning modes are generated.
Wang, Huai-Yung; Chi, Yu-Chieh; Lin, Gong-Ru
2016-08-08
A novel millimeter-wave radio over fiber (MMW-RoF) link at a carrier frequency of 35 GHz is proposed, using remotely beat MMW generation from reference master and injected slave colorless laser diode (LD) carriers under orthogonally polarized dual-wavelength injection-locking. The slave colorless LD supports lasing one of the dual-wavelength master modes with orthogonal polarization, which facilitates single-mode direct modulation of the quadrature amplitude modulation (QAM) orthogonal frequency division multiplexing (OFDM) data. Such injected single-carrier encoding and coupled dual-carrier transmission with orthogonal polarization effectively suppresses the cross-heterodyne mode-beating intensity noise and the nonlinear modulation (NLM) and four-wave mixing (FWM) sidemodes during injection locking and fiber transmission. In a 25-km single-mode fiber (SMF) based wireline system, the dual carrier under single-mode encoding provides baseband 24-Gbit/s 64-QAM OFDM transmission with an error vector magnitude (EVM) of 8.8%, a bit error rate (BER) of 3.7 × 10⁻³, and a power penalty of <1.5 dB. After remote self-beating for wireless transmission, the beat MMW carrier at 35 GHz can deliver passband 16-QAM OFDM at 4 Gbit/s with corresponding EVM and BER of 15.5% and 1.4 × 10⁻³, respectively, after 25-km SMF and 1.6-m free-space transmission.
Low-common-mode differential amplifier
NASA Technical Reports Server (NTRS)
Morrison, S.
1980-01-01
Outputs of the differential amplifier are excellently matched in phase and amplitude over a wide range of frequencies. A common-mode feedback loop offsets differences between the two signal paths. Possible applications of the circuit include oscilloscopes, integrated-circuit logic testers, and other self-contained instruments.
Forrester, Janet E
2015-12-01
Errors in the statistical presentation and analyses of data in the medical literature remain common despite efforts to improve the review process, including the creation of guidelines for authors and the use of statistical reviewers. This article discusses common elementary statistical errors seen in manuscripts recently submitted to Clinical Therapeutics and describes some ways in which authors and reviewers can identify errors and thus correct them before publication. A nonsystematic sample of manuscripts submitted to Clinical Therapeutics over the past year was examined for elementary statistical errors. Clinical Therapeutics has many of the same errors that reportedly exist in other journals. Authors require additional guidance to avoid elementary statistical errors and incentives to use the guidance. Implementation of reporting guidelines for authors and reviewers by journals such as Clinical Therapeutics may be a good approach to reduce the rate of statistical errors. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.
Switchable in-line monitor for multi-dimensional multiplexed photonic integrated circuit.
Chen, Guanyu; Yu, Yu; Ye, Mengyuan; Zhang, Xinliang
2016-06-27
A flexible monitor suitable for the discrimination of on-chip transmitted mode division multiplexed (MDM) and wavelength division multiplexed (WDM) signals is proposed and fabricated. By selectively extracting part of the incoming signals through the tunable wavelength- and mode-dependent drop filter, the in-line and switchable monitor can discriminate the wavelength, mode and power information of the transmitted signals. Unlike a conventional mode and wavelength demultiplexer, the monitor is specifically designed to ensure flexible in-line monitoring. For demonstration, three-mode and three-wavelength multiplexed signals are successfully processed. Assisted by the integrated photodetectors (PDs), both the measured photocurrents and eye diagrams validate the performance of the proposed device. The bit error ratio (BER) measurement results show less than 0.4 dB power penalty between different modes and ~2 dB power penalty for single-wavelength and WDM cases at the 10⁻⁹ BER level.
On-chip WDM mode-division multiplexing interconnection with optional demodulation function.
Ye, Mengyuan; Yu, Yu; Chen, Guanyu; Luo, Yuchan; Zhang, Xinliang
2015-12-14
We propose and fabricate a wavelength-division-multiplexing (WDM) compatible and multi-functional mode-division-multiplexing (MDM) integrated circuit, which can perform the mode conversion and multiplexing for the incoming multipath WDM signals, avoiding the wavelength conflict. A phase-to-intensity demodulation function can be optionally applied within the circuit while performing the mode multiplexing. For demonstration, 4 × 10 Gb/s non-return-to-zero differential phase shift keying (NRZ-DPSK) signals are successfully processed, with open and clear eye diagrams. Measured bit error ratio (BER) results show less than 1 dB receive sensitivity variation for three modes and four wavelengths with demodulation. In the case without demodulation, the average power penalties at 4 wavelengths are -1.5, -3 and -3.5 dB for TE₀-TE₀, TE₀-TE₁ and TE₀-TE₂ mode conversions, respectively. The proposed flexible scheme can be used at the interface of long-haul and on-chip communication systems.
Mode selecting switch using multimode interference for on-chip optical interconnects.
Priti, Rubana B; Pishvai Bazargani, Hamed; Xiong, Yule; Liboiron-Ladouceur, Odile
2017-10-15
A novel mode selecting switch (MSS) is experimentally demonstrated for on-chip mode-division multiplexing (MDM) optical interconnects. The MSS consists of a Mach-Zehnder interferometer with tapered multi-mode interference couplers and TiN thermo-optic phase shifters for conversion and switching between the optical data encoded on the fundamental and first-order quasi-transverse electric (TE) modes. The C-band MSS exhibits a >25 dB switching extinction ratio and < -12 dB crosstalk. We validate the dynamic switching with a 25.8 kHz gating signal, measuring switching times for both TE0 and TE1 modes of <10.9 μs. All channels exhibit less than 1.7 dB power penalty at a 10⁻¹² bit error rate, while switching the non-return-to-zero PRBS-31 data signals at 10 Gb/s.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Cao, N; Young, L
2014-06-15
Purpose: Though FMEA (Failure Mode and Effects Analysis) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge it has never been validated against actual incident learning data. The objective of this study was to perform an FMEA analysis of an SBRT (Stereotactic Body Radiation Therapy) treatment planning process and validate this against data recorded within an incident learning system. Methods: FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, and dosimetrists. Potential failure modes were identified through a systematic review of the workflow process. Failure modes were rated for severity, occurrence, and detectability on a scale of 1 to 10, and the RPN (Risk Priority Number) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that had been active for two years. Differences were identified. Results: FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. FMEA failed to anticipate 13 of these events, among which 3 were registered with severity ratings of severe or critical in the incident learning system. Combining both methods yielded a total of 76 failure modes, and when scored for RPN the 13 events missed by FMEA ranked within the middle half of all failure modes. Conclusion: FMEA, though valuable, is subject to certain limitations, among them the limited ability to anticipate all potential errors for a given process. This FMEA exercise failed to identify a significant number of possible errors (17%). Integration of FMEA with retrospective incident data may be able to render an improved overview of risks within a process.
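The RPN scoring used in the FMEA reduces to a product of three 1-10 ratings; a minimal sketch (the failure-mode names and scores here are invented for illustration, not taken from the study):

```python
def risk_priority(failure_modes):
    """Rank failure modes by RPN = severity * occurrence * detectability,
    each rated 1-10 (higher is worse)."""
    ranked = [(name, s * o * d) for name, (s, o, d) in failure_modes.items()]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

# Hypothetical SBRT-planning failure modes (severity, occurrence, detectability):
modes = {
    "wrong CT dataset selected": (9, 2, 4),
    "target contour transferred incorrectly": (8, 3, 5),
    "typo in plan parameters": (6, 4, 2),
}
for name, rpn in risk_priority(modes):
    print(name, rpn)
```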
Li, Le-Bao; Sun, Ling-Ling; Zhang, Sheng-Zhou; Yang, Qing-Quan
2015-09-01
A new control approach for speed tracking and synchronization of multiple motors is developed by incorporating an adaptive sliding mode control (ASMC) technique into a ring coupling synchronization control structure. This control approach can stabilize speed tracking of each motor and synchronize its motion with the other motors' motion so that speed tracking errors and synchronization errors converge to zero. Moreover, an adaptive law is exploited to estimate the unknown bound of the uncertainty; the law is derived in the sense of the Lyapunov stability theorem so as to minimize the control effort and attenuate chattering. Performance comparisons with parallel control, relative coupling control and conventional PI control are carried out on a four-motor synchronization control system. Extensive simulation results show the effectiveness of the proposed control scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Daniel, Larry G.
The purposes of this paper are to identify common errors made by researchers when dealing with reliability coefficients and to outline best practices for reporting and interpreting reliability coefficients. Common errors that researchers make are: (1) stating that the instruments are reliable; (2) incorrectly interpreting correlation coefficients;…
The Effectiveness of Chinese NNESTs in Teaching English Syntax
ERIC Educational Resources Information Center
Chou, Chun-Hui; Bartz, Kevin
2007-01-01
This paper evaluates the effect of Chinese non-native English-speaking teachers (NNESTs) on Chinese ESL students' struggles with English syntax. The paper first classifies Chinese learners' syntactic errors into 10 common types. It demonstrates how each type of error results from an internal attempt to translate a common Chinese construction into…
Yang, Shu-Hui; Jerng, Jih-Shuin; Chen, Li-Chin; Li, Yu-Tsu; Huang, Hsiao-Fang; Wu, Chao-Ling; Chan, Jing-Yuan; Huang, Szu-Fen; Liang, Huey-Wen; Sun, Jui-Sheng
2017-11-03
Background. Intra-hospital transportation (IHT) might compromise patient safety because of different care settings and a higher demand on human operation. Reports regarding the incidence of IHT-related patient safety events and human failures remain limited. Objective. To perform a retrospective analysis of IHT-related events, human failures and unsafe acts. Setting. A hospital-wide process for IHT and the database of the incident reporting system in a medical centre in Taiwan. All eligible IHT-related patient safety events between January 2010 and December 2015 were included. Outcome measures. Incidence rate of IHT-related patient safety events, human failure modes, and types of unsafe acts. Results. There were 206 patient safety events in 2 009 013 IHT sessions (102.5 per 1 000 000 sessions). Most events (n=148, 71.8%) did not involve patient harm, and process events (n=146, 70.9%) were most common. Events at the location of arrival (n=101, 49.0%) were most frequent; this location accounted for 61.0% and 44.2% of events with patient harm and those without harm, respectively (p<0.001). Of the events with human failures (n=186), the most common related process step was the preparation of the transportation team (n=91, 48.9%). Contributing unsafe acts included perceptual errors (n=14, 7.5%), decision errors (n=56, 30.1%), skill-based errors (n=48, 25.8%), and non-compliance (n=68, 36.6%). Multivariate analysis showed that human failure found in the arrival and hand-off sub-process (OR 4.84, p<0.001) was associated with increased patient harm, whereas the presence of omission (OR 0.12, p<0.001) was associated with less patient harm. Conclusions. This study shows a need to reduce human failures to prevent patient harm during intra-hospital transportation. We suggest that the transportation team pay specific attention to the sub-process at the location of arrival and prevent errors other than omissions. Long-term monitoring of IHT-related events is also warranted.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Parameter dependence of the MCNP electron transport in determining dose distributions.
Reynaert, N; Palmans, H; Thierens, H; Jeraj, R
2002-10-01
In this paper, a detailed study of the electron transport in MCNP is performed, separating the effects of the energy binning technique on the energy loss rate, the scattering angles, and the sub-step length as a function of energy. As this problem is already well known, we focus here on explaining why the default mode of MCNP can lead to large deviations. The resolution dependence was investigated as well. An error in the MCNP code in the energy binning technique in the default mode (DBCN 18 card = 0) was revealed, more specifically in the updating of cross sections when a sub-step corresponding to a high energy loss is performed. This updating error is not present in the ITS mode (DBCN 18 card = 1) and leads to a systematically lower dose deposition rate in the default mode. The effect is present for all energies studied (0.5-10 MeV) and depends on the geometrical resolution of the scoring regions and the energy grid resolution. The effect of the energy binning technique is of the same order as that of the updating error for energies below 2 MeV, and becomes less important for higher energies. For a 1 MeV point source surrounded by homogeneous water, the deviation of the default MCNP results at short distances attains 9% and remains approximately the same for all energies. This effect could be corrected by no longer completing an energy step each time an electron changes energy bins during a sub-step. Another solution consists of performing all calculations in the ITS mode. Another problem is the resolution dependence, even in the ITS mode. The higher the resolution (the smaller the scoring regions), the faster the energy is deposited along the electron track. It is proven that this is caused by starting a new energy step when crossing a surface. The resolution effect should be investigated for every specific case when calculating dose distributions around beta sources.
The resolution should not be higher than 0.85*(1-EFAC)*CSDA, where EFAC is the energy loss per energy step and CSDA is the continuous-slowing-down-approximation range. This effect could also be removed by determining the cross sections for energy loss and multiple scattering at the average energy of an energy step and by sampling the cross sections for each sub-step. Overall, we conclude that MCNP cannot be used without caution due to possible errors in the electron transport. When care is taken, it is possible to obtain correct results that are in agreement with other Monte Carlo codes.
Comparison of mode estimation methods and application in molecular clock analysis
NASA Technical Reports Server (NTRS)
Hedges, S. Blair; Shah, Prachi
2003-01-01
BACKGROUND: Distributions of time estimates in molecular clock studies are sometimes skewed or contain outliers. In those cases, the mode is a better estimator of the overall time of divergence than the mean or median. However, different methods are available for estimating the mode. We compared these methods in simulations to determine their strengths and weaknesses and further assessed their performance when applied to real data sets from a molecular clock study. RESULTS: We found that the half-range mode and robust parametric mode methods have a lower bias than other mode methods under a diversity of conditions. However, the half-range mode suffers from a relatively high variance and the robust parametric mode is more susceptible to bias by outliers. We determined that bootstrapping reduces the variance of both mode estimators. Application of the different methods to real data sets yielded results that were concordant with the simulations. CONCLUSION: Because the half-range mode is a simple and fast method, and produced less bias overall in our simulations, we recommend the bootstrapped version of it as a general-purpose mode estimator and suggest a bootstrap method for obtaining the standard error and 95% confidence interval of the mode.
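A minimal sketch of a half-range-mode estimator with the recommended bootstrap. This is our simplified rendering of the general idea, not the authors' code; the function names and toy data are invented.

```python
import random

def half_range_mode(data):
    """Half-range mode: repeatedly keep the densest window of half the
    current range, then return the midrange of the surviving points."""
    xs = sorted(data)
    while len(xs) > 2:
        width = (xs[-1] - xs[0]) / 2.0
        best_lo, best_count = 0, 0
        hi = 0
        for lo in range(len(xs)):  # two-pointer scan for densest window
            while hi < len(xs) and xs[hi] - xs[lo] <= width:
                hi += 1
            if hi - lo > best_count:
                best_lo, best_count = lo, hi - lo
        if best_count == len(xs):  # all points identical; cannot shrink
            break
        xs = xs[best_lo:best_lo + best_count]
    return (xs[0] + xs[-1]) / 2.0

def bootstrapped_mode(data, n_boot=200, seed=0):
    """Average the half-range mode over bootstrap resamples, which the
    abstract reports reduces the estimator's variance."""
    rng = random.Random(seed)
    reps = [half_range_mode([rng.choice(data) for _ in data])
            for _ in range(n_boot)]
    return sum(reps) / len(reps)

# Toy skewed sample (invented): bulk near 2.0 with two outliers, which
# barely shift the mode even though they would drag the mean upward.
data = [1.9, 2.0, 2.0, 2.05, 2.1, 5.0, 9.0]
print(half_range_mode(data))  # → 2.0
```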
The global burden of diagnostic errors in primary care
Singh, Hardeep; Schiff, Gordon D; Graber, Mark L; Onakpoya, Igho; Thompson, Matthew J
2017-01-01
Diagnosis is one of the most important tasks performed by primary care physicians. The World Health Organization (WHO) recently prioritized patient safety areas in primary care, and included diagnostic errors as a high-priority problem. In addition, a recent report from the Institute of Medicine in the USA, ‘Improving Diagnosis in Health Care’, concluded that most people will likely experience a diagnostic error in their lifetime. In this narrative review, we discuss the global significance, burden and contributory factors related to diagnostic errors in primary care. We synthesize available literature to discuss the types of presenting symptoms and conditions most commonly affected. We then summarize interventions based on available data and suggest next steps to reduce the global burden of diagnostic errors. Research suggests that we are unlikely to find a ‘magic bullet’ and confirms the need for a multifaceted approach to understand and address the many systems and cognitive issues involved in diagnostic error. Because errors involve many common conditions and are prevalent across all countries, the WHO’s leadership at a global level will be instrumental to address the problem. Based on our review, we recommend that the WHO consider bringing together primary care leaders, practicing frontline clinicians, safety experts, policymakers, the health IT community, medical education and accreditation organizations, researchers from multiple disciplines, patient advocates, and funding bodies among others, to address the many common challenges and opportunities to reduce diagnostic error. This could lead to prioritization of practice changes needed to improve primary care as well as setting research priorities for intervention development to reduce diagnostic error. PMID:27530239
Markovic, Marija; Mathis, A Scott; Ghin, Hoytin Lee; Gardiner, Michelle; Fahim, Germin
2017-01-01
To compare the medication history error rate of the emergency department (ED) pharmacy technician with that of nursing staff and to describe the workflow environment. Fifty medication histories performed by an ED nurse followed by the pharmacy technician were evaluated for discrepancies (RN-PT group). A separate 50 medication histories performed by the pharmacy technician and observed with necessary intervention by the ED pharmacist were evaluated for discrepancies (PT-RPh group). Discrepancies were totaled and categorized by type of error and therapeutic category of the medication. The workflow description was obtained by observation and staff interview. A total of 474 medications in the RN-PT group and 521 in the PT-RPh group were evaluated. Nurses made at least one error in all 50 medication histories (100%), compared to 18 medication histories for the pharmacy technician (36%). In the RN-PT group, 408 medications had at least one error, corresponding to an accuracy rate of 14% for nurses. In the PT-RPh group, 30 medications had an error, corresponding to an accuracy rate of 94.4% for the pharmacy technician (P < 0.0001). The most common error made by nurses was a missing medication (n = 109), while the most common error for the pharmacy technician was a wrong medication frequency (n = 19). The most common drug class with documented errors for ED nurses was cardiovascular medications (n = 100), while the pharmacy technician made the most errors in gastrointestinal medications (n = 11). Medication histories obtained by the pharmacy technician were significantly more accurate than those obtained by nurses in the emergency department.
NASA Astrophysics Data System (ADS)
Wei, Wei
2005-11-01
In low gravity, the stability of liquid bridges and other systems having free surfaces is affected by the ambient vibration of the spacecraft. Such vibrations are expected to excite capillary modes. The lowest unstable mode of cylindrical liquid bridges, the (2,0) mode, is particularly sensitive to the vibration when the ratio of the bridge length to the diameter approaches pi. In this work, a Plateau tank has been used to simulate the weightless condition. An optical system has been used to detect the (2,0) mode oscillation amplitude and generate an error signal which is determined by the oscillation amplitude. This error signal is used by the feedback system to produce proper voltages on the electrodes which are concentric with the electrically conducting, grounded bridge. A mode-coupled electrostatic stress is thus generated on the surface of the bridge. The feedback system is designed such that the modal force applied by the Maxwell stress can be proportional to the modal amplitude or modal velocity, which is the derivative of the modal amplitude. Experiments done in the Plateau tank demonstrate that the damping of the capillary oscillation can be enhanced by using the electrostatic stress in proportion to the modal velocity. On the other hand, using the electrostatic stress in proportion to the modal amplitude can raise the natural frequency of the bridge oscillation. If a spacecraft vibration frequency is close to a capillary mode frequency, the amplitude gain can be used to shift the mode frequency away from that of the spacecraft and simultaneously add some artificial damping to further reduce the effect of g-jitter. It is found that the decay of a bridge (2,0) mode oscillation is well modeled by a Duffing equation with a small cubic soft-spring term. The nonlinearity of the bridge (3,0) mode is also studied. The experiments reveal the hysteresis of (3,0) mode bridge oscillations, and this behavior is a property of the soft nonlinearity of the bridge. 
Relevant to acoustical bridge stabilization, the theoretical radiation force on a compressible cylinder in an acoustic standing wave is also investigated.
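The Duffing-type decay described above can be illustrated numerically. This is a minimal sketch with arbitrary, hypothetical parameters (damping ratio, frequency, and cubic coefficient are not taken from the experiment): integrate a damped Duffing oscillator with a small soft (negative-stiffness) cubic term and observe the amplitude decay.

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing_decay(t, y, zeta=0.02, omega=1.0, eps=0.05):
    """Damped Duffing oscillator with a small soft cubic spring:
    x'' + 2*zeta*omega*x' + omega**2 * x - eps*x**3 = 0 (toy parameters)."""
    x, v = y
    return [v, -2 * zeta * omega * v - omega**2 * x + eps * x**3]

# Start from unit amplitude at rest and integrate for 100 time units.
sol = solve_ivp(duffing_decay, (0, 100), [1.0, 0.0], dense_output=True, rtol=1e-8)
x_end = sol.sol(100)[0]
# With positive damping the oscillation envelope decays roughly as exp(-zeta*omega*t).
```

The soft cubic term lowers the effective frequency at large amplitude, which is the hysteresis-producing nonlinearity the abstract attributes to the (3,0) mode.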
Geolocation error tracking of ZY-3 three line cameras
NASA Astrophysics Data System (ADS)
Pan, Hongbo
2017-01-01
The high-accuracy geolocation of high-resolution satellite images (HRSIs) is a key issue for mapping and for integrating multi-temporal, multi-sensor images. In this manuscript, we propose a new geometric frame for analysing the geometric error of a stereo HRSI, in which the geolocation error can be divided into three parts: the epipolar direction, the cross-base direction, and the height direction. With this frame, we prove that the height error of three-line cameras (TLCs) is independent of nadir images, and that the terrain effect has a limited impact on the geolocation errors. For ZY-3 error sources, the drift error in both the pitch and roll angles and its influence on the geolocation accuracy are analysed. Epipolar and common tie-point constraints are proposed to study the bundle adjustment of HRSIs. Epipolar constraints explain that the relative orientation can reduce the number of compensation parameters in the cross-base direction and has a limited impact on the height accuracy. The common tie points adjust the pitch-angle errors to be consistent with each other for TLCs. Therefore, free-net bundle adjustment of a single strip cannot significantly improve the geolocation accuracy. Furthermore, the epipolar and common tie-point constraints cause the error to propagate into the adjacent strip when multiple strips are involved in the bundle adjustment, which results in the same attitude uncertainty throughout the whole block. Two adjacent strips, Orbit 305 and Orbit 381, covering 7 and 12 standard scenes respectively, and 308 ground control points (GCPs) were used for the experiments. The experiments validate the aforementioned theory. The planimetric and height root mean square errors were 2.09 and 1.28 m, respectively, when two GCPs were placed at the beginning and end of the block.
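The reported accuracies are root mean square errors of check-point residuals. A minimal sketch of how such planimetric and height RMSEs are computed (the residual values below are hypothetical, not the paper's data):

```python
import numpy as np

# Discrepancies at check points, in metres (hypothetical values):
# dx, dy are east/north residuals, dz is the height residual.
dx = np.array([ 1.2, -0.8,  2.1, -1.5])
dy = np.array([-0.5,  1.1, -1.9,  0.7])
dz = np.array([ 0.9, -1.2,  0.4, -1.6])

rmse_planimetric = np.sqrt(np.mean(dx**2 + dy**2))  # horizontal (2-D) error
rmse_height = np.sqrt(np.mean(dz**2))               # vertical error
```

The planimetric figure combines both horizontal components into a single radial error, which is why it is typically larger than either per-axis RMSE alone.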
Error Analysis in Mathematics. Technical Report #1012
ERIC Educational Resources Information Center
Lai, Cheng-Fei
2012-01-01
Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…
Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error
ERIC Educational Resources Information Center
Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju
2009-01-01
Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…
Lexical Errors and Accuracy in Foreign Language Writing. Second Language Acquisition
ERIC Educational Resources Information Center
del Pilar Agustin Llach, Maria
2011-01-01
Lexical errors are a determinant in gaining insight into vocabulary acquisition, vocabulary use and writing quality assessment. Lexical errors are very frequent in the written production of young EFL learners, but they decrease as learners gain proficiency. Misspellings are the most common category, but formal errors give way to semantic-based…
More on Systematic Error in a Boyle's Law Experiment
ERIC Educational Resources Information Center
McCall, Richard P.
2012-01-01
A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.
Random measurement error: Why worry? An example of cardiovascular risk factors.
Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H
2018-01-01
With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error in the analysis.
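The point can be reproduced with a small simulation (a sketch with arbitrary parameters, not the paper's cardiovascular examples): classical error in the exposure alone attenuates its adjusted coefficient, but classical error in a confounder leaves residual confounding that can push the exposure estimate away from the truth, here upward.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
c = rng.normal(size=n)                       # confounder
x = 0.8 * c + rng.normal(size=n)             # exposure, correlated with confounder
y = 1.0 * x + 1.0 * c + rng.normal(size=n)   # outcome; true exposure effect = 1.0

def ols(X, y):
    """Least-squares coefficients for y ~ X (no intercept; data are centred)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Gold standard: exposure and confounder measured without error.
b_true = ols(np.column_stack([x, c]), y)[0]

# Classical error on the exposure only: attenuation toward zero.
x_err = x + rng.normal(scale=1.0, size=n)
b_x_err = ols(np.column_stack([x_err, c]), y)[0]

# Classical error on the confounder only: residual confounding inflates
# the exposure estimate in this configuration.
c_err = c + rng.normal(scale=1.0, size=n)
b_c_err = ols(np.column_stack([x, c_err]), y)[0]
```

Depending on the signs and strengths of the exposure-confounder associations, the confounder-error case can bias the estimate in either direction, which is the paper's central caution.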
A new accuracy measure based on bounded relative error for time series forecasting
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
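The bounded relative error underlying UMBRAE can be sketched as follows: each forecast error is compared with a benchmark error (e.g. from a naïve forecast) via |e|/(|e| + |e*|), which lies in [0, 1]; the mean of these bounded ratios is then unscaled back to a relative-error-like quantity. The code below is a sketch of the published definition, with illustrative error values.

```python
import numpy as np

def umbrae(errors, benchmark_errors):
    """Unscaled Mean Bounded Relative Absolute Error (sketch of the
    definition in Chen, Twycross & Garibaldi, 2017)."""
    e = np.abs(np.asarray(errors, dtype=float))
    eb = np.abs(np.asarray(benchmark_errors, dtype=float))
    brae = e / (e + eb)            # bounded relative absolute error, in [0, 1]
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)   # unscaling: 1.0 means "as good as benchmark"

# A method with exactly the benchmark's errors scores 1.0:
same = umbrae([1.0, 2.0, 0.5], [1.0, 2.0, 0.5])
# Halving every error relative to the benchmark scores 0.5:
better = umbrae([0.5, 1.0, 0.25], [1.0, 2.0, 0.5])
```

Because each term is bounded, a single outlier cannot dominate the mean, which is the robustness property the abstract emphasizes.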
NASA Astrophysics Data System (ADS)
Gupta, A. P.; Shanker, Jai
1980-02-01
The relation between long-wavelength optical mode frequencies and the Anderson-Gruneisen parameter δ for alkali halides studied by Madan suffers from a mathematical error, which is rectified in the present communication. A theoretical analysis of δ is presented adopting six potential functions for the short-range repulsion energy. Values of δ and γTO calculated from the Varshni-Shukla potential are found to be in closest agreement with experimental data.
Dealing with Beam Structure in PIXIE
NASA Technical Reports Server (NTRS)
Fixsen, D. J.; Kogut, Alan; Hill, Robert S.; Nagler, Peter C.; Seals, Lenward T., III; Howard, Joseph M.
2016-01-01
Measuring the B-mode polarization of the CMB radiation requires a detailed understanding of the projection of the detector onto the sky. We show how the combination of scan strategy and processing generates a cylindrical beam for the spectrum measurement. Both the instrumental design and the scan strategy reduce the cross coupling between the temperature variations and the B-modes. As with other polarization measurements, some post-processing may be required to eliminate residual errors.
2015-11-24
Briefing-slide residue (figures not recoverable). Spatial concerns: how well are gradients captured? (a resolution requirement). Spatial/temporal concerns: dispersion and dissipation errors. The slides compare gradient capture versus resolution for a single Fourier mode (f(x) = sin(x), x ∈ [0, 2π]) and for multiple modes, with FFT, solution/derivative, and convergence plots for the CD02, CD04, and CD06 central-difference schemes. Distribution statement: distribution is unlimited.
Amplifier for measuring low-level signals in the presence of high common mode voltage
NASA Technical Reports Server (NTRS)
Lukens, F. E. (Inventor)
1985-01-01
A high common mode rejection differential amplifier wherein two serially arranged Darlington amplifier stages are employed and any common mode voltage is divided between them by a resistance network. The input to the first Darlington amplifier stage is coupled to a signal input resistor via an amplifier which isolates the input and presents a high impedance across this resistor. The output of the second Darlington stage is transposed in scale via an amplifier stage which has as its input a biasing circuit that effects a finite biasing of the two Darlington amplifier stages.
Automatic control of finite element models for temperature-controlled radiofrequency ablation.
Haemmerich, Dieter; Webster, John G
2005-07-14
The finite element method (FEM) has been used to simulate cardiac and hepatic radiofrequency (RF) ablation. The FEM allows modeling of complex geometries that cannot be solved by analytical methods or finite difference models. In both hepatic and cardiac RF ablation, a common control mode is temperature-controlled mode. Commercial FEM packages do not support automated temperature control, so most researchers manually adjust the applied power by trial and error to keep the tip temperature of the electrodes constant. We implemented a PI controller in a control program written in C++. The program checks the tip temperature after each step and adjusts the applied voltage to keep the temperature constant. We created a closed-loop system consisting of an FEM model and the software controlling the applied voltage. The control parameters for the controller were optimized using a closed-loop system simulation. We present results for a temperature-controlled 3-D FEM model of a RITA model 30 electrode. The control software effectively controlled the applied voltage in the FEM model to bring the electrodes to, and keep them at, the target temperature of 100 degrees C. The closed-loop system simulation output closely correlated with the FEM model and allowed us to optimize the control parameters. The closed-loop control of the FEM model allowed us to implement temperature-controlled RF ablation with minimal user input.
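The closed-loop idea can be sketched outside of FEM with a toy lumped thermal model (all plant parameters and gains below are hypothetical; the real system couples the controller to an FEM solver instead): a PI controller updates the applied power after each step to hold the tip at 100 degrees C.

```python
# PI temperature control of a toy first-order thermal plant (sketch only).
T_body, tau, gain = 37.0, 5.0, 1.0    # baseline temp (C), time constant (s), C/(W*s)
T_target, Kp, Ki = 100.0, 2.0, 1.0    # setpoint and PI gains (hypothetical)

T, integral, dt = T_body, 0.0, 0.01
for _ in range(int(60.0 / dt)):        # simulate 60 s
    error = T_target - T
    integral += error * dt
    power = max(0.0, Kp * error + Ki * integral)   # applied power, W (no cooling)
    # Lumped plant: heating by the electrode, passive cooling toward body temp.
    T += dt * ((T_body - T) / tau + gain * power)
```

The same check-error-then-adjust loop is what the C++ control program performs around each FEM time step, with the FEM solution replacing the one-line plant model.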
Chua, S S; Tea, M H; Rahman, M H A
2009-04-01
Drug administration errors were the second most frequent type of medication error, after prescribing errors, but the latter were often intercepted; hence, administration errors were more likely to reach the patients. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observations of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for error were observed, and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate was reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented, so that a risk management protocol could be developed and implemented.
Single-Event Upset Characterization of Common First- and Second-Order All-Digital Phase-Locked Loops
NASA Astrophysics Data System (ADS)
Chen, Y. P.; Massengill, L. W.; Kauppila, J. S.; Bhuva, B. L.; Holman, W. T.; Loveless, T. D.
2017-08-01
The single-event upset (SEU) vulnerability of common first- and second-order all-digital phase-locked loops (ADPLLs) is investigated through field-programmable gate array-based fault injection experiments. SEUs in the highest-order pole of the loop filter and in fraction-based phase detectors (PDs) may result in the worst-case error response, i.e., limit cycle errors, often requiring a system restart. SEUs in integer-based linear PDs may result in loss-of-lock errors, while SEUs in bang-bang PDs result only in temporary frequency errors. ADPLLs with the same frequency tuning range but fewer bits in the control word exhibit better overall SEU performance.
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Handschuh, R. F.; Zhang, J.
1988-01-01
A method for generation of crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc second for the numerical examples). Tooth Contact Analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and the bearing contact.
NASA Technical Reports Server (NTRS)
Frantz, Brian D.; Ivancic, William D.
2001-01-01
Asynchronous Transfer Mode (ATM) Quality of Service (QoS) experiments using the Transmission Control Protocol/Internet Protocol (TCP/IP) were performed for various link delays. The link delay was set to emulate a Wide Area Network (WAN) and a Satellite Link. The purpose of these experiments was to evaluate the ATM QoS requirements for applications that utilize advance TCP/IP protocols implemented with large windows and Selective ACKnowledgements (SACK). The effects of cell error, cell loss, and random bit errors on throughput were reported. The detailed test plan and test results are presented herein.
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Dam, M A; Mignant, D L; Macintosh, B A
In this paper, the adaptive optics (AO) system at the W.M. Keck Observatory is characterized. The authors calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. By modeling the control loops and recording residual centroids, the measurement noise and bandwidth errors are obtained. The error budget is consistent with the images obtained. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of up to 0.37 at 1.58 μm using a bright guide star and 0.19 for a magnitude 12 star.
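The quoted Strehl ratios map to residual wavefront errors through the extended Maréchal approximation, S ≈ exp[-(2πσ/λ)²]. The approximation itself is standard in adaptive optics; the sketch below simply inverts the reported Strehls, and the implied σ values are illustrative rather than quoted from the paper.

```python
import math

def strehl(sigma_nm, wavelength_nm):
    """Extended Marechal approximation: Strehl ratio from rms wavefront error."""
    return math.exp(-(2 * math.pi * sigma_nm / wavelength_nm) ** 2)

def rms_wavefront(strehl_ratio, wavelength_nm):
    """Invert the approximation: rms wavefront error (nm) implied by a Strehl."""
    return wavelength_nm / (2 * math.pi) * math.sqrt(-math.log(strehl_ratio))

# Residual wavefront errors implied by the reported Strehls at 1.58 um:
sigma_bright = rms_wavefront(0.37, 1580.0)  # bright guide star
sigma_faint = rms_wavefront(0.19, 1580.0)   # magnitude-12 guide star
```

The error budget in the paper allocates this total residual among measurement noise, bandwidth, and other terms added in quadrature.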
Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, M D; Cole, S; Frenk, C S
2011-02-14
We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
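The core operation, redrawing only the large-scale Fourier modes of a periodic box while leaving small scales untouched, can be sketched in a few lines. This is a toy 1-D Gaussian field with arbitrary grid size and cutoff, not the full N-body machinery, which must additionally propagate the nonlinear coupling of the new large-scale modes into the small scales.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k_cut = 256, 8                        # grid size and large-scale cutoff (toy values)

field = rng.normal(size=n)               # stand-in for a periodic density field
fk = np.fft.rfft(field)

# Redraw only the large-scale modes (0 < k < k_cut) as new complex Gaussian
# draws with the same amplitude spectrum; small-scale modes are kept fixed.
amp = np.abs(fk[1:k_cut])
g = rng.normal(size=k_cut - 1) + 1j * rng.normal(size=k_cut - 1)
fk_new = fk.copy()
fk_new[1:k_cut] = amp * g / np.sqrt(2)

field_new = np.fft.irfft(fk_new, n)      # new realization of the large scales
```

In the actual algorithm each resampled large-scale realization also modulates the growth and displacement of small-scale structure, which is what makes the resulting covariance estimates realistic.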
Refractive errors in Mercyland Specialist Hospital, Osogbo, Western Nigeria.
Adeoti, C O; Egbewale, B E
2008-06-01
The study was conducted to determine the magnitude and pattern of refractive errors in order to provide facilities for their management. A prospective study of 3601 eyes of 1824 consecutive patients was conducted. Information obtained included age, sex, occupation, visual acuity, and the type and degree of refractive error. The data were analysed using Statistical Package for the Social Sciences (SPSS) version 11.0 computer software. Refractive error was found in 1824 (53.71%) patients. There were 832 (45.61%) males and 992 (54.39%) females with a mean age of 35.55 years. Myopia was the commonest (1412 eyes; 39.21%). Others included hypermetropia (840 eyes; 23.33%) and astigmatism (785 eyes; 21.80%), and 820 patients (1640 eyes) had presbyopia. Anisometropia was present in 791 (44.51%) of the 1777 patients who had bilateral refractive errors. Of the 2252 eyes with spherical errors, 1308 eyes (58.08%) had errors of -0.50 to +0.50 dioptres; 567 eyes (25.18%) had errors less than -0.50 dioptres, of which 63 eyes (2.80%) had errors less than -5.00 dioptres; and 377 eyes (16.74%) had errors greater than +0.50 dioptres, of which 81 eyes (3.60%) had errors greater than +2.00 dioptres. The highest error was 20.00 dioptres for myopia and 18.00 dioptres for hypermetropia. Refractive error is common in this environment. Adequate provision should be made for its correction, bearing in mind the common types and degrees.
Steinberger, Dina M; Douglas, Stephen V; Kirschbaum, Mark S
2009-09-01
A multidisciplinary team from the University of Wisconsin Hospital and Clinics transplant program used failure mode and effects analysis to proactively examine opportunities for communication and handoff failures across the continuum of care from organ procurement to transplantation. The team performed a modified failure mode and effects analysis that isolated the multiple linked, serial, and complex information exchanges occurring during the transplantation of one solid organ. Failure mode and effects analysis proved effective for engaging a diverse group of persons who had an investment in the outcome in analysis and discussion of opportunities to improve the system's resilience for avoiding errors during a time-pressured and complex process.
Standard solar model. II - g-modes
NASA Technical Reports Server (NTRS)
Guenther, D. B.; Demarque, P.; Pinsonneault, M. H.; Kim, Y.-C.
1992-01-01
The paper presents the g-mode oscillation for a set of modern solar models. Each solar model is based on a single modification or improvement to the physics of a reference solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The error in the predicted g-mode periods associated with the uncertainties in the model physics is predicted and the specific sensitivities of the g-mode periods and their period spacings to the different model structures are described. In addition, these models are compared to a sample of published observations. A remarkably good agreement is found between the 'best' solar model and the observations of Hill and Gu (1990).
Advanced Interval Type-2 Fuzzy Sliding Mode Control for Robot Manipulator.
Hwang, Ji-Hwan; Kang, Young-Chang; Park, Jong-Wook; Kim, Dong W
2017-01-01
In this paper, advanced interval type-2 fuzzy sliding mode control (AIT2FSMC) for a robot manipulator is proposed. The proposed AIT2FSMC is a combination of an interval type-2 fuzzy system and sliding mode control. To resemble a feedback linearization (FL) control law, an interval type-2 fuzzy system is designed. To compensate for the approximation error between the FL control law and the interval type-2 fuzzy system, a sliding mode controller is designed. The tuning algorithms are derived in the sense of the Lyapunov stability theorem. A two-link rigid robot manipulator with nonlinearity is used as a test case, and simulation results are presented to show the effectiveness of the proposed method, which can control an unknown system well.
Reusable Launch Vehicle Control in Multiple Time Scale Sliding Modes
NASA Technical Reports Server (NTRS)
Shtessel, Yuri
1999-01-01
A reusable launch vehicle control problem during ascent is addressed via multiple-time-scaled continuous sliding mode control. The proposed sliding mode controller utilizes a two-loop structure and provides robust, de-coupled tracking of both orientation angle command profiles and angular rate command profiles in the presence of bounded external disturbances and plant uncertainties. Sliding mode control causes the angular rate and orientation angle tracking error dynamics to be constrained to linear, de-coupled, homogeneous, vector-valued differential equations with desired eigenvalue placement. The dual-time-scale sliding mode controller was designed for the X-33 technology demonstration sub-orbital launch vehicle in the launch mode. 6DOF simulation results show that the designed controller provides robust, accurate, de-coupled tracking of the orientation angle command profiles in the presence of external disturbances and vehicle inertia uncertainties. This creates the possibility of operating the X-33 vehicle in an aircraft-like mode with reduced pre-launch adjustment of the control system.
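The constrained error dynamics described above can be illustrated with a minimal single-axis sketch (a toy double-integrator plant with hypothetical gains, not the X-33 design): a sliding surface s = ė + λe forces the tracking error to obey first-order linear dynamics despite a bounded disturbance, with a smoothed switching term to avoid chattering.

```python
import math

lam, K, phi, dt = 5.0, 1.0, 0.05, 1e-3   # surface slope, switching gain, boundary layer
x, v = 0.5, 0.0                          # plant state: position, velocity

for i in range(int(10.0 / dt)):
    t = i * dt
    x_ref, v_ref, a_ref = math.sin(t), math.cos(t), -math.sin(t)
    e, e_dot = x - x_ref, v - v_ref
    s = e_dot + lam * e                  # sliding surface
    # Equivalent control plus smoothed switching term (tanh avoids chattering):
    u = a_ref - lam * e_dot - K * math.tanh(s / phi)
    d = 0.5 * math.sin(5.0 * t)          # bounded disturbance, |d| <= 0.5 < K
    v += dt * (u + d)                    # double-integrator plant: x'' = u + d
    x += dt * v
```

Once s is driven into the boundary layer, the error satisfies ė ≈ -λe + s, so the eigenvalue placement (here just λ) sets the de-coupled error decay the abstract describes.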
NASA Technical Reports Server (NTRS)
Seasholtz, R. G.
1977-01-01
A laser Doppler velocimeter (LDV) built for use in the Lewis Research Center's turbine stator cascade facilities is described. The signal processing and self-contained data processing are based on a computing counter. A procedure is given for mode matching the laser to the probe volume. An analysis is presented of biasing errors that were observed in turbulent flow when the mean flow was not normal to the fringes.
Attitude Control Subsystem for the Advanced Communications Technology Satellite
NASA Technical Reports Server (NTRS)
Hewston, Alan W.; Mitchell, Kent A.; Sawicki, Jerzy T.
1996-01-01
This paper provides an overview of the on-orbit operation of the Attitude Control Subsystem (ACS) for the Advanced Communications Technology Satellite (ACTS). The three ACTS control axes are defined, including the means for sensing attitude and determining the pointing errors. The desired pointing requirements for various modes of control as well as the disturbance torques that oppose the control are identified. Finally, the hardware actuators and control loops utilized to reduce the attitude error are described.
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
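The two fitting routes can be sketched on synthetic data. The Padé form and all numbers below are illustrative, not the UNCL calibration: fit y = a·x/(1 + b·x) directly with nonlinear least squares, or transform to x/y = 1/a + (b/a)·x and use a standard linear fit.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
a_true, b_true = 50.0, 0.02            # hypothetical calibration constants

x = np.linspace(5.0, 100.0, 40)        # predictor values (arbitrary units)
y = a_true * x / (1 + b_true * x)
y += rng.normal(scale=0.01 * y)        # 1% noise in the response only

# Route 1: direct nonlinear fit of the Pade-type form.
pade = lambda x, a, b: a * x / (1 + b * x)
(a_nl, b_nl), _ = curve_fit(pade, x, y, p0=(40.0, 0.01))

# Route 2: linearize (x/y = 1/a + (b/a)*x) and use ordinary linear fitting.
slope, intercept = np.polyfit(x, x / y, 1)
a_lin, b_lin = 1.0 / intercept, slope / intercept
```

With noise in the response only, both routes recover the parameters well; the paper's finding concerns the harder case of errors in the predictor, where the transformation distorts the error structure.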
[Failure mode and effects analysis on computerized drug prescriptions].
Paredes-Atenciano, J A; Roldán-Aviña, J P; González-García, Mercedes; Blanco-Sánchez, M C; Pinto-Melero, M A; Pérez-Ramírez, C; Calvo Rubio-Burgos, Miguel; Osuna-Navarro, F J; Jurado-Carmona, A M
2015-01-01
To identify and analyze errors in drug prescriptions of patients treated in a "high resolution" hospital by applying a failure mode and effects analysis (FMEA). Material and methods: A multidisciplinary group of medical specialties and nursing analyzed medical records where drug prescriptions were held in free-text format. An FMEA was developed in which the risk priority index (RPI) was obtained from a cross-sectional observational study using an audit of the medical records, carried out in two phases: (1) pre-intervention testing, and (2) evaluation of improvement actions after the first analysis. An audit sample size of 679 medical records from a total of 2,096 patients was calculated using stratified sampling and random selection of clinical events. Prescription errors decreased by 22.2% in the second phase. FMEA showed a greater RPI for "unspecified route of administration" and "dosage unspecified", with no significant decreases observed in the second phase, although it did detect "incorrect dosing time", "contraindication due to drug allergy", "wrong patient" and "duplicate prescription", which resulted in the improvement of prescriptions. Drug prescription errors have been identified and analyzed by the FMEA methodology, improving the clinical safety of these prescriptions. This tool allows updates of electronic prescribing to be monitored. Avoiding such errors would require the mandatory completion of all sections of a prescription. Copyright © 2014 SECA. Published by Elsevier España. All rights reserved.
NASA Astrophysics Data System (ADS)
Böning, Guido; Todica, Andrei; Vai, Alessandro; Lehner, Sebastian; Xiong, Guoming; Mille, Erik; Ilhan, Harun; la Fougère, Christian; Bartenstein, Peter; Hacker, Marcus
2013-11-01
The assessment of left ventricular function, wall motion and myocardial viability using electrocardiogram (ECG)-gated [18F]-FDG positron emission tomography (PET) is widely accepted in human and in preclinical small animal studies. The nonterminal and noninvasive approach permits repeated in vivo evaluations of the same animal, facilitating the assessment of temporal changes in disease or therapy response. Although well established, gated small animal PET studies can contain erroneous gating information, which may yield to blurred images and false estimation of functional parameters. In this work, we present quantitative and visual quality control (QC) methods to evaluate the accuracy of trigger events in PET list-mode and physiological data. Left ventricular functional analysis is performed to quantify the effect of gating errors on the end-systolic and end-diastolic volumes, and on the ejection fraction (EF). We aim to recover the cardiac functional parameters by the application of the commonly established heart rate filter approach using fixed ranges based on a standardized population. In addition, we propose a fully reprocessing approach which retrospectively replaces the gating information of the PET list-mode file with appropriate list-mode decoding and encoding software. The signal of a simultaneously acquired ECG is processed using standard MATLAB vector functions, which can be individually adapted to reliably detect the R-peaks. Finally, the new trigger events are inserted into the PET list-mode file. A population of 30 mice with various health statuses was analyzed and standard cardiac parameters such as mean heart rate (119 ms ± 11.8 ms) and mean heart rate variability (1.7 ms ± 3.4 ms) derived. These standard parameter ranges were taken into account in the QC methods to select a group of nine optimal gated and a group of eight sub-optimal gated [18F]-FDG PET scans of mice from our archive. 
From the list-mode files of the optimal gated group, we randomly deleted various fractions (5% to 60%) of contained trigger events to generate a corrupted group. The filter approach was capable to correct the corrupted group and yield functional parameters with no significant difference to the optimal gated group. We successfully demonstrated the potential of the fully reprocessing approach by applying it to the sub-optimal group, where the functional parameters were significantly improved after reprocessing (mean EF from 41% ± 16% to 60% ± 13%). When applied to the optimal gated group the fully reprocessing approach did not alter the functional parameters significantly (mean EF from 64% ± 8% to 64 ± 7%). This work presents methods to determine and quantify erroneous gating in small animal gated [18F]-FDG PET scans. We demonstrate the importance of a quality check for cardiac triggering contained in PET list-mode data and the benefit of optionally reprocessing the fully recorded physiological information to retrospectively modify or fully replace the cardiac triggering in PET list-mode data. We aim to provide a preliminary guideline of how to proceed in the presence of errors and demonstrate that offline reprocessing by filtering erroneous trigger events and retrospective gating by ECG processing is feasible. Future work will focus on the extension by additional QC methods, which may exploit the amplitude of trigger events and ECG signal by means of pattern recognition. Furthermore, we aim to transfer the proposed QC methods and the fully reprocessing approach to human myocardial PET/CT.
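The fixed-range heart-rate filter described above can be sketched as follows (a toy trigger list and window, not the authors' implementation): triggers whose preceding R-R interval falls outside a window around the population mean are rejected before gating.

```python
import numpy as np

def filter_triggers(trigger_times_ms, lo_ms, hi_ms):
    """Keep triggers whose preceding R-R interval lies in [lo_ms, hi_ms].
    The first trigger has no preceding interval and is kept."""
    t = np.asarray(trigger_times_ms, dtype=float)
    rr = np.diff(t)
    keep = np.concatenate([[True], (rr >= lo_ms) & (rr <= hi_ms)])
    return t[keep], rr

# A regular 119 ms rhythm with one missed trigger, producing a 238 ms gap:
triggers = [0, 119, 238, 476, 595, 714]
kept, rr = filter_triggers(triggers, lo_ms=100, hi_ms=140)
```

The fully reprocessing approach goes further: rather than discarding bad beats, it re-detects R-peaks from the recorded ECG and rewrites the trigger events in the list-mode stream.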
Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago
2015-08-01
The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods.
Results revealed that the proposed method is the most accurate method to measure the motion and strain with an average median motion error of 0.42 mm and a median strain error of 2.0 ± 0.9%, 2.1 ± 1.3% and 7.1 ± 4.9% for circumferential, longitudinal and radial strain respectively. It also showed its capability to identify abnormal segments with reduced cardiac function and timing differences for the dyssynchrony cases. These results indicate that the proposed diffeomorphic speckle tracking method provides robust and accurate motion and strain estimation. Copyright © 2015. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Liu, Wei; Sneeuw, Nico; Jiang, Weiping
2017-04-01
The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling by the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which alias directly into the recovered gravity field. The GRACE satellites fly a non-repeat orbit, which precludes alias-error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, which result in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
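One reason unevenly sampled series are still tractable is that a harmonic at a known alias period can be fitted by least squares without any repeat-period spectral machinery. The sketch below illustrates that general idea only; it is not the paper's data-driven method, and the function names are assumptions.

```python
import numpy as np

def fit_alias_harmonic(t, y, period):
    """Least-squares fit of y(t) ~ a*cos(2*pi*t/P) + b*sin(2*pi*t/P) + c
    on arbitrarily (unevenly) sampled epochs t; returns (a, b, c)."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    w = 2.0 * np.pi / period
    # Design matrix: cosine, sine, and a constant offset column.
    A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def remove_alias_harmonic(t, y, period):
    """Subtract the fitted harmonic at the given alias period."""
    t = np.asarray(t, float)
    a, b, c = fit_alias_harmonic(t, y, period)
    w = 2.0 * np.pi / period
    return np.asarray(y, float) - (a * np.cos(w * t) + b * np.sin(w * t))
```

A real post-processing scheme would estimate the dominant alias periods per tidal constituent and handle noise and trend terms, which this sketch omits.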
Pereira, Tiago V; Mingroni-Netto, Regina C
2011-06-06
The generalized odds ratio (GOR) was recently suggested as a genetic model-free measure for association studies. However, its properties were not extensively investigated. We used Monte Carlo simulations to investigate type-I error rates, power and bias in both effect size and between-study variance estimates of meta-analyses using the GOR as a summary effect, and compared these results to those obtained by the usual approaches of model specification. We further applied the GOR in a real meta-analysis of three genome-wide association studies in Alzheimer's disease. For bi-allelic polymorphisms, the GOR performs virtually identically to a standard multiplicative model of analysis (e.g. per-allele odds ratio) for variants acting multiplicatively, but slightly augments the power to detect variants with a dominant mode of action, while reducing the probability of detecting recessive variants. Although there were differences among the GOR and the usual approaches in terms of bias and type-I error rates, both simulation- and real data-based results provided little indication that these differences will be substantial in practice for meta-analyses involving bi-allelic polymorphisms. However, the use of the GOR may be slightly more powerful for the synthesis of data from tri-allelic variants, particularly when susceptibility alleles are less common in the populations (≤10%). This gain in power may depend on knowledge of the direction of the effects. For the synthesis of data from bi-allelic variants, the GOR may be regarded as a multiplicative-like model of analysis. The use of the GOR may be slightly more powerful in the tri-allelic case, particularly when susceptibility alleles are less common in the populations.
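The type-I error simulation described here can be sketched in miniature for the standard per-allele odds ratio used as the comparator. This toy version is an assumption-laden illustration, not the authors' simulation design: sample sizes, allele frequency, simulation count, and the Haldane-Anscombe correction are all arbitrary choices.

```python
import numpy as np

def simulate_type1_error(n_cases=500, n_controls=500, maf=0.3,
                         n_sims=2000, seed=1):
    """Monte Carlo type-I error rate of a per-allele (allelic) odds-ratio
    test under the null hypothesis of no association."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        # Under H0, cases and controls draw alleles from the same frequency.
        case_alt = rng.binomial(2 * n_cases, maf)
        ctrl_alt = rng.binomial(2 * n_controls, maf)
        # 2x2 allele-count table with a 0.5 continuity correction.
        a = case_alt + 0.5
        b = 2 * n_cases - case_alt + 0.5
        c = ctrl_alt + 0.5
        d = 2 * n_controls - ctrl_alt + 0.5
        log_or = np.log((a * d) / (b * c))
        se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        if abs(log_or / se) > 1.959964:  # two-sided test at alpha = 0.05
            rejections += 1
    return rejections / n_sims
```

Under the null, the empirical rejection rate should sit near the nominal 5% level; a GOR-based version would replace the 2x2 allelic test with the GOR statistic.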
ERIC Educational Resources Information Center
Kolitsoe Moru, Eunice; Qhobela, Makomosela
2013-01-01
The study investigated teachers' pedagogical content knowledge of common students' errors and misconceptions in sets. Five mathematics teachers from one Lesotho secondary school were the sample of the study. Questionnaires and interviews were used for data collection. The results show that teachers were able to identify the following students'…
Addressing Common Student Errors with Classroom Voting in Multivariable Calculus
ERIC Educational Resources Information Center
Cline, Kelly; Parker, Mark; Zullo, Holly; Stewart, Ann
2012-01-01
One technique for identifying and addressing common student errors is the method of classroom voting, in which the instructor presents a multiple-choice question to the class, and after a few minutes for consideration and small group discussion, each student votes on the correct answer, often using a hand-held electronic clicker. If a large number…
NASA Astrophysics Data System (ADS)
Murillo Feo, C. A.; Martnez Martinez, L. J.; Correa Muñoz, N. A.
2016-06-01
The accuracy of locating attributes on topographic surfaces when using GPS in mountainous areas is affected by obstacles to wave propagation. As part of this research on the semi-automatic detection of landslides, we evaluated the accuracy and spatial distribution of the horizontal error in GPS positioning in the tertiary road network of six municipalities located in mountainous areas in the department of Cauca, Colombia, using geo-referencing with GPS mapping equipment and static-fast and pseudo-kinematic methods. We obtained quality parameters for the GPS surveys with differential correction, using a post-processing method. The consolidated database underwent exploratory analyses to determine the statistical distribution, a multivariate analysis to establish relationships and associations between the variables, and an analysis of the spatial variability and calculation of accuracy, considering the effect of non-Gaussian error distributions. The evaluation of the internal validity of the data provided metrics with a confidence level of 95% between 1.24 and 2.45 m in the static-fast mode and between 0.86 and 4.2 m in the pseudo-kinematic mode. The external validity had an absolute error of 4.69 m, indicating that this descriptor is more critical than precision. Based on the ASPRS standard, the scale obtained with the evaluated equipment was on the order of 1:20000, the level of detail expected in the landslide-mapping project. Modelling the spatial variability of the horizontal errors from the empirical semi-variogram analysis showed prediction errors close to the external validity of the devices.
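A 95%-confidence horizontal accuracy that does not assume Gaussian errors can be taken directly as the empirical 95th percentile of the horizontal error magnitudes, as in this minimal sketch (the function name and interface are assumptions, not the study's code):

```python
import numpy as np

def horizontal_accuracy_95(de, dn):
    """Empirical 95th-percentile horizontal error from east (de) and
    north (dn) residuals in metres, with no Gaussian assumption."""
    r = np.hypot(np.asarray(de, float), np.asarray(dn, float))
    return float(np.percentile(r, 95.0))
```

Comparing this figure with a Gaussian-based 95% radius is one way to quantify the effect of non-Gaussian error distributions mentioned in the abstract.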
Mortaro, Alberto; Pascu, Diana; Zerman, Tamara; Vallaperta, Enrico; Schönsberg, Alberto; Tardivo, Stefano; Pancheri, Serena; Romano, Gabriele; Moretti, Francesca
2015-07-01
The role of the emergency medical dispatch centre (EMDC) is essential to ensure coordinated and safe prehospital care. The aim of this study was to implement an incident report (IR) system in prehospital emergency care management with a view to detecting errors occurring in this setting and guiding the implementation of safety improvement initiatives. An ad hoc IR form for the prehospital setting was developed and implemented within the EMDC of Verona. The form included six phases (from the emergency call to hospital admission) with the relevant list of potential error modes (30 items). This descriptive observational study considered the results from 268 consecutive days between February and November 2010. During the study period, 161 error modes were detected. The majority of these errors occurred in the resource allocation and timing phase (34.2%) and in the dispatch phase (31.0%). Most of the errors were due to human factors (77.6%), and almost half of them were classified as either moderate (27.9%) or severe (19.9%). These results guided the implementation of specific corrective actions, such as the adoption of a more efficient Medical Priority Dispatch System and the development of educational initiatives targeted at both EMDC staff and the population. Despite the intrinsic limits of IR methodology, the results suggest that the implementation of an IR system dedicated to the emergency prehospital setting can act as a major driver for the development of a "learning organization" and improve both the efficacy and safety of first aid care.
The Nature of Error in Adolescent Student Writing
ERIC Educational Resources Information Center
Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang
2014-01-01
This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
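The repeatability figure defined in this abstract (2.77 times the within-subject standard deviation) is straightforward to compute; the sketch below assumes a simple balanced layout of one row per subject with repeated measurements in columns, which is an illustrative choice rather than anything specified in the source.

```python
import numpy as np

def repeatability(measurements):
    """Repeatability coefficient = 2.77 * within-subject standard deviation.

    measurements: 2-D array-like, one row per subject, repeated
    measurements of that subject in the columns (balanced design assumed).
    """
    m = np.asarray(measurements, float)
    # Within-subject variance: mean of the per-subject sample variances.
    sw2 = np.mean(np.var(m, axis=1, ddof=1))
    return 2.77 * np.sqrt(sw2)
```

The 2.77 factor is sqrt(2) times 1.96: the difference of two measurements has standard deviation sqrt(2) times the within-subject SD, and 95% of such differences fall within 1.96 of that.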
[Errors in prescriptions and their preparation at the outpatient pharmacy of a regional hospital].
Alvarado A, Carolina; Ossa G, Ximena; Bustos M, Luis
2017-01-01
Adverse effects of medications are an important cause of morbidity and hospital admissions. Errors in the prescription or preparation of medications by pharmacy personnel are a factor that may influence the occurrence of these adverse effects. Aim: To assess the frequency and type of errors in prescriptions and in their preparation at the pharmacy unit of a regional public hospital. Prescriptions received by ambulatory patients and those being discharged from the hospital were reviewed using a 12-item checklist. The preparation of such prescriptions at the pharmacy unit was also reviewed, using a seven-item checklist. Seventy two percent of prescriptions had at least one error. The most common mistake was the impossibility of determining the concentration of the prescribed drug. Prescriptions for patients being discharged from the hospital had the highest number of errors. When a prescription had more than two drugs, the risk of error increased 2.4 times. Twenty four percent of prescription preparations had at least one error. The most common mistake was the labeling of drugs with incomplete medical indications. When a preparation included more than three drugs, the risk of preparation error increased 1.8 times. Prescription and preparation of the medications delivered to patients had frequent errors. The most important risk factor for errors was the number of drugs prescribed.
A hybrid method for synthetic aperture ladar phase-error compensation
NASA Astrophysics Data System (ADS)
Hua, Zhili; Li, Hongping; Gu, Yongjian
2009-07-01
As a high-resolution imaging sensor, synthetic aperture ladar (SAL) produces data containing phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank-one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and from them a hybrid method is built that can recover both the images and the PSFs without any a priori information on the PSF, speeding up the convergence rate through the choice of initialization. When integrated into a spotlight-mode SAL imaging model, all three methods effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root mean square error, and CPU time are computed, which show that the convergence rate of the hybrid method is improved because of a more efficient initialization of the blind deconvolution. Moreover, a further examination of the hybrid method shows that the weight distribution between ROPE and IBD is an important factor affecting the final result of the whole compensation process.
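The evaluation metrics named in this abstract, signal-to-noise ratio and root mean square error against a reference image, can be sketched as below. The SNR convention (reference power over residual power) is one common choice assumed for illustration, not necessarily the one used in the paper.

```python
import numpy as np

def rmse(est, ref):
    """Root mean square error between a recovered image and a reference."""
    est = np.asarray(est, float)
    ref = np.asarray(ref, float)
    return float(np.sqrt(np.mean((est - ref) ** 2)))

def snr_db(est, ref):
    """SNR in dB: reference power over residual power (assumed convention)."""
    ref = np.asarray(ref, float)
    err = np.asarray(est, float) - ref
    return float(10.0 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2)))
```

CPU time, the third metric, would simply be measured around each compensation run (e.g. with `time.perf_counter`).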
Huang, Hao; Milione, Giovanni; Lavery, Martin P. J.; Xie, Guodong; Ren, Yongxiong; Cao, Yinwen; Ahmed, Nisar; An Nguyen, Thien; Nolan, Daniel A.; Li, Ming-Jun; Tur, Moshe; Alfano, Robert R.; Willner, Alan E.
2015-01-01
Mode division multiplexing (MDM)– using a multimode optical fiber’s N spatial modes as data channels to transmit N independent data streams – has received interest as it can potentially increase optical fiber data transmission capacity N-times with respect to single mode optical fibers. Two challenges of MDM are (1) designing mode (de)multiplexers with high mode selectivity (2) designing mode (de)multiplexers without cascaded beam splitting’s 1/N insertion loss. One spatial mode basis that has received interest is that of orbital angular momentum (OAM) modes. In this paper, using a device referred to as an OAM mode sorter, we show that OAM modes can be (de)multiplexed over a multimode optical fiber with higher than −15 dB mode selectivity and without cascaded beam splitting’s 1/N insertion loss. As a proof of concept, the OAM modes of the LP11 mode group (OAM−1,0 and OAM+1,0), each carrying 20-Gbit/s polarization division multiplexed and quadrature phase shift keyed data streams, are transmitted 5km over a graded-index, few-mode optical fibre. Channel crosstalk is mitigated using 4 × 4 multiple-input-multiple-output digital-signal-processing with <1.5 dB power penalties at a bit-error-rate of 2 × 10−3. PMID:26450398
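The mode selectivity figure quoted in this abstract (better than -15 dB) can be computed from a measured power-coupling matrix, as in this sketch. The matrix convention (row = input mode, column = output mode) and the worst-case definition are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def mode_selectivity_db(power_matrix):
    """Per-channel mode selectivity in dB from a power-coupling matrix
    P[i, j]: power launched into input mode i, measured in output mode j.
    Selectivity of channel j is its strongest unwanted contribution
    relative to the wanted (diagonal) one."""
    P = np.asarray(power_matrix, float)
    sel = []
    for j in range(P.shape[1]):
        wanted = P[j, j]
        unwanted = max(P[i, j] for i in range(P.shape[0]) if i != j)
        sel.append(10.0 * np.log10(unwanted / wanted))
    return sel
```

For a two-mode system such as the OAM-1,0/OAM+1,0 pair, each channel's selectivity is just the single off-diagonal term relative to the diagonal.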
Effects of vibration on inertial wind-tunnel model attitude measurement devices
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Buehrle, Ralph D.; Balakrishna, S.; Kilgore, W. Allen
1994-01-01
Results of an experimental study of a wind tunnel model inertial angle-of-attack sensor's response to a simulated dynamic environment are presented. The inertial device cannot distinguish between the gravity vector and the centrifugal accelerations associated with wind tunnel model vibration; this results in a model attitude measurement bias error. Significant bias error in model attitude measurement was found for the model system tested. The model attitude bias error was found to be vibration mode and amplitude dependent. A first-order correction model was developed and used for estimating the attitude measurement bias error due to dynamic motion. A method for correcting the output of the model attitude inertial sensor in the presence of model dynamics during on-line wind tunnel operation is proposed.
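The bias mechanism described here, a lateral acceleration that the sensor cannot separate from gravity, has a simple geometric consequence that can be sketched as follows. This is a minimal first-order illustration under the assumption that the sensor infers attitude from the apparent gravity direction; the paper's mode- and amplitude-dependent correction model is not reproduced here, and the function name is an invention.

```python
import math

def attitude_bias_deg(lateral_accel_ms2, g=9.80665):
    """Angle by which an accelerometer-based attitude sensor misreads
    pitch when a vibration-induced lateral acceleration adds to the
    gravity vector (first-order, quasi-static sketch)."""
    return math.degrees(math.asin(lateral_accel_ms2 / g))
```

For small accelerations this reduces to the familiar small-angle result: a bias of roughly (a/g) radians.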
ERIC Educational Resources Information Center
Schumacker, Randall E.; Smith, Everett V., Jr.
2007-01-01
Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…
Patient dosimetry audit for establishing local diagnostic reference levels for nuclear medicine CT.
Gardner, Matthew; Katsidzira, Ngonidzashe M; Ross, Erin; Larkin, Elizabeth A
2017-03-01
To establish a system for patient dosimetry audit and setting of local diagnostic reference levels (LDRLs) for nuclear medicine (NM) CT. Computed radiological information system (CRIS) data were matched with NM paper records, which provided the body region and dose mode for NMCT carried out at a large UK hospital. It was necessary to divide the data in terms of the NM examination type, body region and dose mode. The mean and standard deviation dose-length products (DLPs) for common NMCT examinations were then calculated and compared with the proposed National Diagnostic Reference Levels (NDRLs). Only procedures with 10 or more patients were used to suggest LDRLs. For most examinations, the mean DLPs do not exceed the proposed NDRLs. The bone single-photon emission CT/CT lumbar spine data clearly show the need to divide data according to the purpose of the scan (dose mode), with mean (±standard error) DLPs ranging from 51 ± 5 mGy cm (low dose) to 1086 ± 124 mGy cm (metal dose). A system for NMCT patient dose audit has been developed, but there are non-trivial challenges which make the process labour-intensive. These include the limited information provided by CRIS downloads, dependence on paper records and the limited number of examinations available owing to the need to subdivide the data. Advances in knowledge: This article demonstrates that a system can be developed for NMCT patient dose audit, but also highlights the challenges associated with such audit, which may not be encountered with more routine audit of radiology CT.
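The per-group summary this audit relies on, mean and standard error of the DLP for each examination type with a 10-patient minimum, can be sketched as follows. The dictionary layout and function name are illustrative assumptions, not the hospital's system.

```python
import math

def dlp_summary(dlps_by_exam, min_n=10):
    """Mean and standard error of DLP per examination group, keeping only
    groups with at least min_n patients (mirroring the audit's rule).

    dlps_by_exam: dict mapping exam label -> list of DLPs in mGy*cm.
    Returns dict mapping exam label -> (mean, standard error).
    """
    out = {}
    for exam, vals in dlps_by_exam.items():
        n = len(vals)
        if n < min_n:
            continue  # too few patients to suggest an LDRL
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / (n - 1)
        out[exam] = (mean, math.sqrt(var / n))
    return out
```

Each surviving group's mean would then be compared against the corresponding proposed NDRL.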
Assumption-free estimation of the genetic contribution to refractive error across childhood.
Guggenheim, Jeremy A; St Pourcain, Beate; McMahon, George; Timpson, Nicholas J; Evans, David M; Williams, Cathy
2015-01-01
Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75-90%, families 15-70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Between the ages of 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated with genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age = 3,404). The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood: across ages 7-15 years, SNP heritability averaged 0.28 (SE = 0.08, p < 0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p < 0.001), demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested that lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability = 0.35; SE = 0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8-9 years old. Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects.
This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.
The global burden of diagnostic errors in primary care.
Singh, Hardeep; Schiff, Gordon D; Graber, Mark L; Onakpoya, Igho; Thompson, Matthew J
2017-06-01
Diagnosis is one of the most important tasks performed by primary care physicians. The World Health Organization (WHO) recently prioritized patient safety areas in primary care, and included diagnostic errors as a high-priority problem. In addition, a recent report from the Institute of Medicine in the USA, 'Improving Diagnosis in Health Care', concluded that most people will likely experience a diagnostic error in their lifetime. In this narrative review, we discuss the global significance, burden and contributory factors related to diagnostic errors in primary care. We synthesize available literature to discuss the types of presenting symptoms and conditions most commonly affected. We then summarize interventions based on available data and suggest next steps to reduce the global burden of diagnostic errors. Research suggests that we are unlikely to find a 'magic bullet' and confirms the need for a multifaceted approach to understand and address the many systems and cognitive issues involved in diagnostic error. Because errors involve many common conditions and are prevalent across all countries, the WHO's leadership at a global level will be instrumental to address the problem. Based on our review, we recommend that the WHO consider bringing together primary care leaders, practicing frontline clinicians, safety experts, policymakers, the health IT community, medical education and accreditation organizations, researchers from multiple disciplines, patient advocates, and funding bodies among others, to address the many common challenges and opportunities to reduce diagnostic error. This could lead to prioritization of practice changes needed to improve primary care as well as setting research priorities for intervention development to reduce diagnostic error. Published by the BMJ Publishing Group Limited.
Markovic, Marija; Mathis, A. Scott; Ghin, Hoytin Lee; Gardiner, Michelle; Fahim, Germin
2017-01-01
Purpose: To compare the medication history error rate of the emergency department (ED) pharmacy technician with that of nursing staff and to describe the workflow environment. Methods: Fifty medication histories performed by an ED nurse followed by the pharmacy technician were evaluated for discrepancies (RN-PT group). A separate set of 50 medication histories performed by the pharmacy technician and observed, with necessary intervention, by the ED pharmacist were evaluated for discrepancies (PT-RPh group). Discrepancies were totaled and categorized by type of error and therapeutic category of the medication. The workflow description was obtained by observation and staff interview. Results: A total of 474 medications in the RN-PT group and 521 in the PT-RPh group were evaluated. Nurses made at least one error in all 50 medication histories (100%), compared to 18 medication histories for the pharmacy technician (36%). In the RN-PT group, 408 medications had at least one error, corresponding to an accuracy rate of 14% for nurses. In the PT-RPh group, 30 medications had an error, corresponding to an accuracy rate of 94.4% for the pharmacy technician (P < 0.0001). The most common error made by nurses was a missing medication (n = 109), while the most common error for the pharmacy technician was a wrong medication frequency (n = 19). The most common drug class with documented errors for ED nurses was cardiovascular medications (n = 100), while the pharmacy technician made the most errors in gastrointestinal medications (n = 11). Conclusion: Medication histories obtained by the pharmacy technician were significantly more accurate than those obtained by nurses in the emergency department. PMID:28090164
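The significance comparison reported here (P < 0.0001 for the per-medication accuracy rates) is the kind of result a pooled two-proportion z test produces. The sketch below uses error-free medication counts derived from the abstract's figures (474 - 408 = 66 and 521 - 30 = 491) purely for illustration; the paper's exact test is not stated here.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for comparing x1/n1 vs x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

A |z| this large corresponds to a vanishingly small two-sided p-value, consistent with the reported P < 0.0001.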
NSLS-II BPM System Protection from Rogue Mode Coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blednykh, A.; Bach, B.; Borrelli, A.
2011-03-28
Rogue-mode RF shielding has been successfully designed and implemented in the production multipole vacuum chambers. In order to avoid systematic errors in the NSLS-II BPM system, we introduced a frequency shift of the higher-order modes (HOMs) by using metal RF shielding located in the antechamber slot of each multipole vacuum chamber. To satisfy the pumping requirement, the face of the shielding has been perforated with roughly 50 percent transparency. It stays clear of synchrotron radiation in each chamber.
NASA Astrophysics Data System (ADS)
Grunwald, Warren; Holden, Bobby; Barnes, Derek; Allan, Gregory; Mehrle, Nicholas; Douglas, Ewan S.; Cahoy, Kerri
2018-01-01
The Deformable Mirror (DeMi) CubeSat mission utilizes an Adaptive Optics (AO) control loop to correct incoming wavefronts as a technology demonstration for space-based imaging missions, such as high-contrast observations (e.g., of Earthlike exoplanets) and steering light into the cores of single-mode fibers for amplification. While AO has been used extensively on ground-based systems to correct for atmospheric aberrations, operating an AO system on board a small satellite presents different challenges. The DeMi payload's 140-actuator MEMS deformable mirror (DM) corrects the incoming wavefront in four different control modes: 1) internal observation with a Shack-Hartmann Wavefront Sensor (SHWFS), 2) internal observation with an image plane sensor, 3) external observation with a SHWFS, and 4) external observation with an image plane sensor. All modes have wavefront aberrations from two main sources: time-invariant launch disturbances that have changed the optical path from the path expected when the payload was calibrated in the lab, and very low temporal frequency thermal variations as DeMi orbits the Earth. The external observation modes have additional errors from the pointing precision of the attitude control system and from reaction wheel jitter. Updates on DeMi's mechanical, thermal, electrical, and mission design are also presented. The analysis from the DeMi payload simulations and testing provides information on design options for developing space-based AO systems.
Performance Evaluation of Dual-axis Tracking System of Parabolic Trough Solar Collector
NASA Astrophysics Data System (ADS)
Ullah, Fahim; Min, Kang
2018-01-01
A parabolic trough solar collector with a concentration ratio of 24 was developed at the College of Engineering, Nanjing Agricultural University, China, and an optical model was built using the TracePro software. The effects of single-axis and dual-axis tracking modes and of azimuth and elevation angle tracking errors on the optical performance were investigated, and the thermal performance of the solar collector was experimentally measured. The results showed that the optical efficiency of dual-axis tracking was 0.813, and its year-average value was 14.3% and 40.9% higher than that of the east-west tracking mode and north-south tracking mode, respectively. Further, from the experimental results it was concluded that the optical efficiency was affected significantly by elevation angle tracking errors, which should be kept below 0.6°. High optical efficiency could be attained using the dual-tracking mode even when the tracking precision of one axis was degraded. The real-time instantaneous thermal efficiency of the collector reached 0.775. In addition, the linearity of the normalized efficiency was favorable. The curve of the calculated thermal efficiency agreed well with the normalized instantaneous efficiency curve derived from the experimental data, and the maximum difference between them was 10.3%. This type of solar collector should be suitable for middle-scale thermal collection systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evarts, Eric R.; Rippard, William H.; Pufall, Matthew R.
In a small fraction of magnetic-tunnel-junction-based magnetic random-access memory devices with in-plane free layers, the write-error rates (WERs) are higher than expected on the basis of the macrospin or quasi-uniform magnetization reversal models. In devices with increased WERs, the product of effective resistance and area, the tunneling magnetoresistance, and the coercivity do not deviate from typical device properties. However, the field-swept, spin-torque, ferromagnetic resonance (FS-ST-FMR) spectra with an applied DC bias current deviate significantly for such devices. With a DC bias of 300 mV (producing 9.9 × 10⁶ A/cm²) or greater, these anomalous devices show an increase in the fraction of the power present in FS-ST-FMR modes corresponding to higher-order excitations of the free-layer magnetization. As much as 70% of the power is contained in higher-order modes, compared to ≈20% in typical devices. Additionally, a shift in the uniform-mode resonant field that is correlated with the magnitude of the WER anomaly is detected at DC biases greater than 300 mV. These differences in the anomalous devices indicate a change in the micromagnetic resonant mode structure at high applied bias.